We’ve talked several times - see, e.g.,
TiB 71 and
TiB 119 - about the growing scale of machine learning models and the associated increases in the cost of training them. What’s the impact? A
new paper argues that it’s made AI more elitist: starting from the deep learning revolution, papers accepted at top machine learning conferences are increasingly dominated by researchers from the largest companies and top-ranked universities (interestingly, this doesn’t appear to be true in non-ML computer science disciplines).
Still, the conclusion doesn’t quite ring true. As Gwern points out
here, it’s hard to look at the last decade of machine learning and not see an extraordinary opening up of the “state of the art” to individual researchers. True, they cannot pay tens of millions of dollars to train models with billions of parameters, but the combination of open research norms (
Arxiv is a minor miracle), open source tooling and infrastructure, and so much public discussion and education by the field’s luminaries is an impressive resource for democratisation.
There’s another dimension here worth pondering: the amounts spent by organisations like OpenAI or DeepMind are enormous relative to the resources of individuals, but they’re rounding errors compared to the budgets of nation states. OpenAI raised $1bn to much fanfare - but this is a couple of percent of the UK’s defence budget. If you share the view,
as many smart people do, that AI is a geopolitically important technology, it’s really very inexpensive. Any medium-sized power can afford the cost of a world-class research lab. Talent, not cash, is the bottleneck today.