In TiB 152 I argued that policymakers should worry a lot about national computational power, as this becomes an increasingly important ingredient in developing and deploying machine learning capabilities. This week AI policy expert Jack Clark gave an excellent talk at Stanford (slides here, and see also this Twitter thread) on this topic, in which he recommends that governments create a “National Research Cloud” (NRC) to level the playing field between increasingly dominant private actors and academia.
The challenge is clear: as we’ve discussed before (see, e.g., TiB 128), machine learning models are becoming ever more computationally intensive and expensive. This piece notes that even a small Google AI project has a training budget of over $1.5m - well outside academic budgets. Jack argues this matters a lot. OpenAI (his former employer) originally deferred releasing its language model on the basis that it could be used to mass-produce misinformation (see TiB 52 for more). If academics and others can’t replicate or scrutinise cutting-edge work in the field, there’s a risk of a democratic deficit.
The goal of an NRC would be to reduce this “compute asymmetry” by making large-scale computational resources easy and cheap for scholars to access. As Jack notes, this should be affordable for any rich country because an NRC can “piggyback” on the extraordinary capital investment by Amazon, Microsoft et al. in cloud computing. As I’ve said before, the costs involved in AI today are small by nation-state standards - and an NRC looks like a good starting point for any country that wants to balance world-leading capabilities and democratic accountability.