Thoughts in Between
TiB 134: "Technological sovereignty"; why we dream; the missing piece in R&D; and more...
Welcome new readers! Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.
The rise and rise of technological sovereignty
It’s been an extraordinary month in the story of technological sovereignty, an idea we’ve discussed several times before (e.g. TiB 12, TiB 90, TiB 115). First, we had the increasingly peculiar TikTok saga, last covered in TiB 124 and TiB 126, whose resolution - a non-acquisition by Oracle and, err, Walmart - is, in Ben Thompson’s words, the worst possible outcome (paywalled, but worth it). Certainly, it doesn’t seem to have won the US any of the things national security hawks were concerned about.
Then we have NVIDIA’s acquisition of Arm, which would create, in NVIDIA’s framing, “the premier computing company for the age of artificial intelligence”. There’s a good summary here. It’s prompted some (justified) hand-wringing in the UK about one of the few genuinely dominant British tech companies falling into foreign ownership for the second time in four years. And, unsurprisingly, there’s anxiety in China about the US’s ability to use the deal to further block China’s access to technology.
Third, and relatedly, the US announced restrictions on exports to SMIC, China’s largest semiconductor manufacturer. This will only fuel China’s own efforts to achieve technological sovereignty across a range of domains (see also this CSET report on the competition for semiconductor talent). How much does this matter? Aishwarya Nagarajan has a great, self-confessedly semi-conspiratorial take that suggests it’s a matter of life and death. Technological sovereignty is likely to be among the key geopolitical questions of the next decade. It’s surprising how little mainstream attention it gets.
The missing piece of the strategic R&D puzzle?
Adam Marblestone and Samuel Rodriques have an interesting new proposal for what they call “Focused Research Organisations” (FROs). They argue that FROs fill a missing piece in the strategic R&D jigsaw. The core idea is that some advances in strategically important areas of science and technology - the authors suggest brain mapping, novel antibiotic technology and nanofabrication, among others, as candidates - fall between the cracks of existing institutions.
Why? According to Marblestone and Rodriques, these require more coordinated building and systematic teamwork than academia permits, but their benefits are not immediately monetisable in the way a startup or corporate business model would require. Crucially, the argument goes, organisations like (D)ARPA (discussed in TiB 21, TiB 100 and TiB 126) can’t achieve this alone, because they focus on the funding, rather than the delivery, side of the equation. The proposed solution is independent organisations, each focused on a single problem with a well-defined and time-bound goal.
I like this idea. In my day job, I often come across ideas that struggle to cross the “not quite science, not quite a startup” chasm. The biggest challenge I foresee is talent attraction and retention (there’s a good thread on this here). For many target employees, the opportunity cost is working in a startup rather than in academia; such people are often happy to work for (relatively) low salaries, but want a way to share in the upside. If there were a way to engineer that, I think it could be a game changer.
Why do we dream? Lessons from machine learning...
Why do we dream? In a remarkable new paper, neuroscientist Erik Hoel uses a machine learning metaphor to offer an audacious hypothesis: dreams exist to prevent our brains “overfitting” to the experiences of our everyday lives. Overfitting is a classic problem in building statistical models. You train the model on sample data (the training set), but you want the model to make predictions on data outside the training set. The trade-off is that, often, the better your model describes the training set, the worse it deals with out-of-sample data. It has been “overfitted”.
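For the technically curious, here’s a minimal sketch of overfitting in a few lines of Python. Everything below - the sine-wave “world”, the sample sizes, the polynomial degrees - is an illustrative assumption of mine, not anything from Hoel’s paper. The point is simply that a flexible model can describe a small training set almost perfectly while doing much worse on held-out data:

```python
# Toy overfitting demo: fit polynomials of increasing degree to noisy
# samples of a sine wave, then compare training vs. out-of-sample error.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)  # signal + noise
    return x, y

x_train, y_train = make_data(15)   # the "training set": lived experience
x_test, y_test = make_data(200)    # out-of-sample data: novel situations

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit on training data only
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The degree-9 fit has the lowest training error but typically the highest test error: it has memorised the noise.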
Hoel hypothesises that the brain faces the same trade-off. It “trains” on the limited sample that is real life and so risks generalising poorly to new situations. In machine learning, one way to avoid overfitting is to deliberately inject randomness or noise into the input data. Hoel suggests that dreams may be the brain’s equivalent, and that this helps explain the sparse, hallucinatory and narrative characteristics of dreams.
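And here’s a companion sketch of the regularisation trick the analogy maps onto dreaming: training on noise-corrupted copies of the inputs. Again, the data, the jitter scale and the number of copies are illustrative assumptions, not Hoel’s; the point is just that the same over-flexible model, trained on jittered inputs, typically generalises better:

```python
# Toy noise-injection demo: the same degree-9 polynomial, fitted once to
# the raw training set and once to many input-jittered copies of it.
import numpy as np

rng = np.random.default_rng(0)

x_train = rng.uniform(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 15)
x_test = rng.uniform(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, 200)

def test_mse(coeffs):
    return np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

degree = 9  # flexible enough to memorise 15 points

# Plain fit: free to chase every quirk of the training data.
plain = np.polyfit(x_train, y_train, degree)

# Noisy fit: 50 jittered copies of the inputs, a crude stand-in for the
# corrupted, dream-like experience Hoel hypothesises the brain supplies.
x_aug = np.concatenate([x_train + rng.normal(0, 0.05, 15) for _ in range(50)])
y_aug = np.tile(y_train, 50)
noisy = np.polyfit(x_aug, y_aug, degree)

print(f"out-of-sample MSE, plain fit: {test_mse(plain):.3f}")
print(f"out-of-sample MSE, noisy fit: {test_mse(noisy):.3f}")
```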
I’m not qualified to evaluate the science, but it’s a great paper that leads you down some fascinating rabbit holes (see, for example, this Wired profile of Karl Friston, whose ideas about the brain Hoel draws on). It’s also a striking example of the power of metaphor in science and, perhaps, of how our understanding of biological and artificial intelligences may be mutually illuminating in the years ahead.
Quick links
- The devil makes work, etc? Striking survey research on sex and the pandemic.
- Nativism strikes again! What happens to jobs for locals when immigrants leave? (Related: interesting findings on the inter-generational social mobility of immigrants)
- Next, add one measure of blue crab blood. The extraordinary story of Atlantic horseshoe crabs and the COVID vaccine (see this pic)
- Note-able landmarks. Which town is home to all the bridges depicted on the Euro banknotes?
- All the data fit to sell. Superb piece on how to make data more valuable (a follow-up to the excellent “Data-as-a-service bible” that was one of the most popular links on TiB last year)
Your feedback
I'm always amazed by how many new Thoughts in Between readers come from posts on Twitter and LinkedIn. If you enjoy it, you can help by sharing the URL there. Or just forward this email to a friend who might like it.
Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary.
It's always great to hear from readers. Feel free to reply directly or message me on Twitter (though I'm on paternity leave the rest of this week, so may be slower than usual).
Until next week,
Matt Clifford