The TiB podcast episode with Marc Warner
on AI safety is now live. Marc is co-founder and CEO of Faculty AI
and one of the deepest thinkers on artificial intelligence I know. We discuss how to make AI “safe” - which in Marc’s framing means fair. As Marc points out, most new technologies come with safety concerns, but over time we address these while simultaneously improving performance. In other words, it’s a mistake to worry about a safety/performance trade-off: since the Model T, cars have become faster and
safer. AI can be the same.
One of the most important mental models I’ve learned from Marc is the distinction between “AI problems” and “political problems”. It’s tempting to look at episodes like the UK’s 2020 “algorithm-graded” exams fiasco
and see an “AI problem” - but as Marc points out, the issue there is ambiguity about what society means
by “fair”, not the algorithm used to determine it:
“A useful thought experiment [is] to substitute ‘done by AI’ [for] ‘done by bureaucrats’, ‘done by software’, ‘done by computers’ or ‘done by the Wizard of Oz’… If something doesn’t feel legitimate in any of those circumstances, it’s really got nothing to do with [AI]”
This is an argument dating back to Weber
: we can use technology, from bureaucracy to AI, to optimise our instrumental
rationality, but only the political process can provide the values for which we’re trying to optimise. Marc has some important ideas about how we might incorporate this into our machine learning models.
We explore much more in the conversation, including:
- The talent needed to implement AI
- The use of data science in the fight against COVID
- Why “explainability” of machine learning models is so important…
… and more. I hope you enjoy the conversation.