Thoughts in Between

by Matt Clifford

Matt's Thoughts In Between - Issue #63


Can a team have too much talent?

TiB reader Adam Ketterer shared with me this excellent piece (based on this academic paper) on the question of talent in teams. The core claim is that in activities where little intra-team coordination is required (e.g. trading, baseball), more talent is always better. However, in activities where a high level of coordination is key (e.g. basketball), team performance actually starts to drop off once you have too many talented individuals on the team.

Where do startups fit on this spectrum? They certainly require high levels of coordination, but the most successful startups seem to have enjoyed extraordinary concentrations of talent (admittedly, there’s a degree of survivorship bias there). My hypothesis is that startup culture (flat, non-hierarchical, informal, equity remuneration, mission focus, fast decision making, etc.) has evolved partly as the optimal mechanism for coordinating high-ego, high-talent individuals for as long as possible.

There’s another way, too, that high-performing peers can be a mixed blessing. I stumbled across this paper (more here), based on a randomised controlled trial in Peruvian schools. It suggests that low-performing students assigned to dorms with high-performing students actually saw their performance drop(!), perhaps because of self-esteem effects. Building talent communities - and especially talent monopolies - is a complex and fragile task.

Realistic dystopias: autonomous weapons edition

I’ve talked before on TiB about AI risk. Even if you don’t think that Superintelligence - i.e. AI surpassing human intelligence and destroying us - is a realistic risk, there are plenty of more imminent AI dystopias to worry about. This week’s After On podcast (a superb series that I recommend) interviews AI professor Stuart Russell, who does an excellent and measured job of laying out some of the scenarios.

The most disturbing is what Russell calls “Slaughterbots”, about which he has made a short (8 min) and chilling YouTube film. The idea is that we are now capable of building swarms of tiny, autonomous, lethal drones that could be unleashed on a city to take out targets - from one individual to thousands - with perfect precision. Russell calls these “scalable weapons of mass destruction” - but, unlike their nuclear counterparts, they face no Mutually Assured Destruction (or even proof of attribution) to hold back their use. And the upfront capital cost is far lower than for traditional WMD.

The campaign to ban autonomous weapons, backed by many luminaries, has hit classic Realpolitik buffers in the shape of Russian opposition. But that appeared to change this week, with a speech from a major Russian politician pushing in the opposite direction. This is, perhaps, an unexpected benefit of the growing US/China dominance in AI: it makes everyone else want to play ball.

Why do startups do bad things?

My friend Alex has a post on why successful technology companies so often seem to have done “bad” things along the way - Uber, Facebook, Airbnb, etc. Alex’s answer is both subtle and important. In conditions of competition, many startups will be tempted to adopt a “reckless” strategy (e.g. some of Uber’s anti-regulatory practices); the chances are high that any given reckless strategy will kill you, but for the category as a whole, the chances that the winner adopted a reckless strategy approach 1. In Alex’s words:

A reckless startup will almost always win, it’s just hard to know in advance which
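
This selection effect is easy to make concrete with a toy simulation (mine, not Alex’s - the model and every parameter below are illustrative assumptions): most reckless startups kill themselves, but the survivors’ edge means the eventual winner is still usually one of them.

    import random

    def p_winner_reckless(n_startups=20, p_reckless=0.5, death_rate=0.8,
                          edge=2.0, trials=10_000):
        # Toy model (all parameters are illustrative assumptions): each
        # startup draws a random "performance" score; reckless startups
        # usually kill themselves, but survivors get a multiplier on
        # their score. The winner of the category is the top scorer.
        reckless_wins = 0
        for _ in range(trials):
            best_score, best_is_reckless = 0.0, False
            for _ in range(n_startups):
                reckless = random.random() < p_reckless
                if reckless and random.random() < death_rate:
                    continue  # this reckless strategy killed the startup
                score = random.random() * (edge if reckless else 1.0)
                if score > best_score:
                    best_score, best_is_reckless = score, reckless
            reckless_wins += best_is_reckless
        return reckless_wins / trials

    # Roughly 0.7 with these numbers: the winner is usually reckless,
    # even though 80% of reckless startups died along the way.
    print(p_winner_reckless())

Increasing n_startups or the survivors’ edge pushes the estimate higher still: the more crowded the race, the less the winner’s strategy tells you about its ex ante wisdom.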

One implication is that critiques of winning tech companies for their practices are making a category error. You can wish Uber were nicer, but a nicer Uber would (very likely) not have won - and so would not be the company worth criticising. If you want a nicer winner, you have to change the system, not firm-level behaviour.

I’m interested in the limits of this argument. According to Alex’s framework, fields where a winner was able to develop an unassailable lead before competition set in should be less susceptible to the effect. Arguably this is what happened with DeepMind in AI. It strikes me that there’s a potential path dependence here: the future of AI may be profoundly shaped by the values of DeepMind’s founders. Or is this naive? Is the problem just postponed until the next race to the bottom - autonomous weapons, perhaps?

Quick Links

  1. Unwritten rules. Great Twitter Q&A on the weirdest parts of the British constitution.
  2. Got to have a side hustle. The surprising story of what billionaire Charlie Munger does outside work.
  3. A riddle wrapped in a mystery. Amazing thread on the true story of the Enigma machine.
  4. Booms and busts. Remarkable photo-thread of the ups and downs of the Chinese startup scene.
  5. Rising tide. Great graphic on India's emerging middle class.

Your feedback

Thanks for reading. Last week we added more readers than in any previous week. I'd love you to forward this to someone who might enjoy it. And feel free to hit Reply if you have any comments.

Until next week,

Matt