Matt's Thoughts In Between


TiB 202: Conscious AIs; future-proof ethics; outliers in science and more...


This week: Comparing science funding mechanisms; how to avoid being condemned by history; on the consciousness of neural networks; and more…

Welcome new readers! Thoughts in Between is a newsletter (mainly) about how technology is changing politics, culture and society and how we might do it better.
It goes out every Tuesday to thousands of entrepreneurs, investors, policy makers, politicians and others. It’s free.
Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.
How to get more outlier outcomes in science
We’ve talked a lot about new approaches to funding science (see e.g. TiB 100, 147, 153, 158 and 174). Michael Nielsen and Kanjun Qiu have a new post on this topic, “The trouble in comparing different approaches to science funding”. It’s probably the best thing I’ve read on the topic in the three years I’ve been writing about it; if you’re remotely interested in the problem, it’s a must-read. The piece focuses on a crucial question that’s often ignored: how much do we care about increasing the probability of generating outlier results versus improving the average quality of research? The two are quite different, and the answer has big implications for what sorts of funding models we try and how we measure their success.
We’ve discussed outliers and the power law a lot before, both in the context of VC (Jerry Neumann’s classic piece) and of science (see TiB 127 and 160). Michael and Kanjun take this thinking a step further and point out some non-obvious implications of seeking more outlier outcomes in science. First, “today’s outliers may be an extremely misleading guide to tomorrow’s”, which makes it very challenging to use track record-based criteria to decide who to fund. Second, “the intervention used may shape who applies, and what they apply with, in crucial ways”, which in turn shapes the distribution of outcomes, but again makes predictions very hard.
One conclusion from the piece is that we need genuine pluralism in science funders’ objectives and approaches. As the authors argue, “more stringent peer review” or the UK’s REF may be exactly the right way to improve the median outcome, but it’s almost certainly the wrong approach if your goal is to increase variance or discover more outliers. If those are your goals - as they might be for something like the UK’s ARIA (on which there’s been recent good news) - you might need to try radical mechanisms like “tenure insurance” or “Long-Shot Prizes”. There’s much more in the piece - do read the whole thing.
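The median-versus-outlier tension above can be made concrete with a toy simulation (my own illustration, not from Nielsen and Qiu’s piece). Suppose project “impact” is drawn from a heavy-tailed lognormal distribution, and compare a low-variance strategy (stringent review) against a high-variance one (long-shot funding); the parameter values and the outlier threshold are arbitrary assumptions chosen to make the contrast visible:

```python
# Toy model: why raising the median and chasing outliers pull apart.
# Assumes lognormal "impact"; all parameters are illustrative guesses.
import random
import statistics

random.seed(0)

N = 100_000        # funded projects per strategy
THRESHOLD = 100.0  # arbitrary bar for an "outlier" result

def impact(mu, sigma):
    """Draw one project's impact from a lognormal distribution."""
    return random.lognormvariate(mu, sigma)

# Strategy A: stringent review -- better typical project, lower variance.
a = [impact(mu=1.0, sigma=1.0) for _ in range(N)]
# Strategy B: long-shot funding -- worse typical project, higher variance.
b = [impact(mu=0.0, sigma=2.5) for _ in range(N)]

print("median A:", statistics.median(a))
print("median B:", statistics.median(b))
print("outliers A:", sum(x > THRESHOLD for x in a))
print("outliers B:", sum(x > THRESHOLD for x in b))
```

Under these assumptions, strategy A wins on the median (roughly e vs. 1) while strategy B produces vastly more above-threshold outliers, so a funder judged on typical outcomes and a funder judged on tail outcomes should rationally behave quite differently.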
How to avoid your grandchildren hating you
We talked in TiB 184 about the possibility that our descendants - perhaps even our grandchildren - will regard some of our actions as morally abhorrent. Holden Karnofsky (see TiB 174 for previous coverage) has an excellent post on how to avoid this, which he calls “future-proof ethics”. Karnofsky asks what such a moral system might look like and proposes three principles that he thinks are likely components: “systemisation”; “thin utilitarianism”; and “sentientism”.
Systemisation means simply the idea that a moral system is more likely to survive the test of time if it is rooted in extrapolating judgements from a number of fundamental principles, rather than from case-by-case intuitions. By “thin utilitarianism”, Karnofsky means a stripped-down version of utilitarianism that focuses on the greatest good for the greatest number of “ethically relevant” beings, but without some of the baggage that is sometimes associated with that theory. Sentientism is the idea that any being that can suffer counts as one of these ethically relevant beings.
Of course, taking sentientism seriously means putting significant moral weight on the suffering of a vast number of beings to which we don’t pay much moral attention today, from factory-farmed animals to insects (we talked about this back in TiB 56). Peter Singer has called this “expanding the moral circle”. Does this feel uncomfortable? Karnofsky shares your intuition: it is, of course, very weird! But he argues that any future-proof moral system will seem weird (just as the moral systems of the past do). I’ve talked before about the importance of not dismissing the weird out of hand; this is perhaps a prime example.
Are large neural networks "slightly conscious"?
Speaking of expanding the moral circle… Ilya Sutskever, Chief Scientist of OpenAI (see previous coverage), caused a minor stir in AI circles this week with this tweet:
Ilya Sutskever
it may be that today's large neural networks are slightly conscious
It prompted a lot of discussion (take a look at the replies), including a rather dismissive response from Yann LeCun, a major figure in the development of modern AI and Facebook’s Chief AI Scientist (see also this thread).
Even if you share LeCun’s skepticism, the question of what it would take for us to consider an artificial intelligence conscious feels like an important one. Yes, it takes us back to a very weird place - see above! - but if we’re about to embark on creating untold billions of artificial minds, the question of whether they can suffer is of profound moral importance. Fortunately, this is a topic that many smart people have thought a lot about. This thread is a gentle starting point, but it’s a subject that can get quite strange and very involved quite quickly. If you’re interested, I recommend this recent piece by Ali Ladak on how to evaluate sentience in artificial entities and Luke Muehlhauser’s excellent 2017 report for Open Philanthropy on the topic.
Ok, so are large neural networks slightly conscious? Well, if we replace the word “conscious” with “sentient” (here’s a good argument for why) and use the frameworks Ladak discusses, the answer seems to be no… for now. But it’s not at all impossible to think that that answer could change quite quickly.
Quick links
  1. Eugenics for cephalopods. Selectively breeding super-octopuses? (and, from the same author, an excellent curriculum on AI safety)
  2. Whence the replication crisis? Striking chart on the power of requiring researchers to pre-register hypotheses.
  3. High time. The civilisational impact of… drinking tea.
  4. “6-12 months downstream of memes on Twitter”. Fun - and likely accurate! - guide to predicting future trends.
  5. I’d strafe that boson in a femtosecond with a howitzer. What words are men more likely to know than women (and vice versa)?
Your feedback
Thanks for reading all the way to the end. If you like Thoughts in Between, I’d love you to forward it to a friend or share the link.
And do feel free to hit reply if you have comments, questions or suggestions.
Until next week,
Matt Clifford
PS: Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary. It makes a big difference.
Matt’s Thoughts in Between @matthewclifford
Created with Revue by Twitter.