I missed
this story from a few weeks ago about the OECD’s efforts (in collaboration with, among others, Jack Clark, whose
Import AI newsletter is a must-read) to create an index of the computational power (“compute”) available to different nation states. The goal is to help policymakers make investment decisions relating to their national AI strategies - which at least 80 countries now have, according to the article.
The strategic significance of compute, which we discussed in
TiB 128, remains underrated outside tech circles.
Its importance rests on an idea known as the “scaling hypothesis”: this holds that what’s required to achieve
artificial general intelligence is not a major theoretical breakthrough, but rather scaling our current machine learning models by orders of magnitude. Recent AI achievements (e.g.
GPT-3) provide some evidence for this. And one key to scale in AI is much, much more compute. Gwern has the
definitive write-up of the scaling hypothesis, which I highly recommend.
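To make the “orders of magnitude” point concrete, here is a minimal sketch of the kind of power-law relationship the scaling hypothesis leans on: loss falling smoothly, but slowly, as training compute grows, so that each meaningful gain demands vastly more compute. The functional form and constants below are illustrative assumptions, not figures from Gwern’s write-up or any particular paper.

```python
# Illustrative only: a toy power-law scaling curve, loss(C) = a * C**(-alpha).
# The constants are assumptions chosen for readability, not empirical fits.

def loss_at_compute(compute_pf_days: float,
                    a: float = 10.0,
                    alpha: float = 0.05) -> float:
    """Hypothetical model loss as a function of training compute (petaflop/s-days)."""
    return a * compute_pf_days ** (-alpha)

if __name__ == "__main__":
    for c in [1, 10, 100, 1_000, 10_000, 100_000]:
        print(f"{c:>8} PF-days -> loss ~ {loss_at_compute(c):.3f}")
```

Run it and the loss creeps down only a few percent per order of magnitude of compute - which is exactly why “scale” in this context means much, much more compute rather than a little more.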
The upshot is that in a world of
AI nationalism, compute is a key strategic resource (see also my
podcast conversation with Ian Hogarth). We’re currently in a strange transition period where machine learning models are expensive to train relative to the budget of a startup (e.g. DeepMind’s AlphaStar model reportedly cost $26m to train), but cheap relative to the budgets of nation states. But in a world where a single model
could soon cost a billion dollars to train, policymakers will have to think carefully about how to build or access the resources they need to compete.
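For a sense of where those billion-dollar figures come from, here is a back-of-envelope sketch of how training cost scales with compute budget. The cloud price and accelerator throughput below are assumptions for illustration only; the $26m AlphaStar figure above is the article’s, not an output of this calculation.

```python
# Back-of-envelope sketch: dollar cost of a training run as a function of its
# compute budget. All prices and throughput figures are illustrative assumptions.

GPU_PRICE_PER_HOUR = 2.0   # assumed cloud price (USD) per accelerator-hour
GPU_PFLOPS = 0.1           # assumed sustained throughput per accelerator (PFLOP/s)

def training_cost_usd(total_pf_days: float) -> float:
    """Estimated cost to rent the compute for a run of `total_pf_days` petaflop/s-days."""
    accelerator_hours = (total_pf_days * 24) / GPU_PFLOPS
    return accelerator_hours * GPU_PRICE_PER_HOUR

if __name__ == "__main__":
    for budget in [1_000, 10_000, 100_000, 1_000_000]:  # PF-days
        print(f"{budget:>9,} PF-days -> ~${training_cost_usd(budget):,.0f}")
```

The point is not the specific numbers but the shape: cost grows linearly with compute, so models ten or a hundred times larger than today’s quickly push training budgets out of startup territory and into the realm of national investment decisions.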