I know by internet standards I’m very late to the party, but if you haven’t read AI 2027 (Alexander et al.)1 yet, I highly recommend you do so now. Alexander et al.’s narrative telling of how this next decade could go gave me chills in the way I imagine a millenarian feels while reading The Book of Revelation.
Yet it also helped clarify my thinking about this new era of AI—specifically, the possibility and implications of an intelligence explosion—as a total normie with no expertise beyond a few AI classes at Stanford.
Intelligence explosion
The gist of Alexander et al. is that AI researchers are able to create super-AI researchers, who then create super-super-AI researchers, and so on. Since they can be spun up in parallel, run 24/7, operate far faster than the human brain, and share information perfectly, they rapidly overcome all obstacles between us and Superintelligence. This process is accelerated by an arms race with China where the government fully integrates these agents into the economy the moment it is technically feasible.2 Since the AIs are in the economy, they’re able to acquire real-world experience and fill in any data gaps almost instantly.
As a result, everything that is theoretically solvable gets its timeline pulled forward. In the write-up, they call this phenomenon “speedup.” So if a drug would normally take 10 years to develop, in a world with a 2x speedup, it would only take five. Their actual estimates are far higher—predicting a 75x speedup by the end of 2027 and over 375,000x by 2030, even if we “slow down.”
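The arithmetic behind “speedup” is simple enough to sketch. The function and numbers below are just illustrative, reusing the drug example from the text and their end-of-2027 estimate:

```python
def accelerated_years(normal_years: float, speedup: float) -> float:
    """Wall-clock years a research problem takes under a given speedup."""
    return normal_years / speedup

# The drug-development example: a 10-year project at a 2x speedup.
print(accelerated_years(10, 2))   # 5.0

# At their 75x end-of-2027 estimate, a 75-year "lifetime" problem fits in one year.
print(accelerated_years(75, 75))  # 1.0
```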
In other words, they’re predicting a true intelligence explosion—where problems that would normally take an entire lifetime to solve are handled in a single year by 2027.
A very close friend of mine, a Stanford AI researcher, privately put their probability of something like this occurring in the next few years at around 5–10%. Who am I to disagree with them?
Because in some ways, we’re already on this exponential—or at least have been since roughly the 16th century in Europe. Borrowing (and abusing) an analogy Scott used in his interview with Dwarkesh Patel:
How long did it take for the British to know more about the plants and animals of Australia than all of the collective knowledge of the Aboriginal Australians? If the settlement of Australia by the Anglo began in 1788, would, say, 50 years (1838) be a plausible estimate? If not, what about 100 (1888)? Given the Aboriginals have been there for approximately 50,000 years, we are already looking at a plausible research multiplier in the 19th century of a thousand relative to a Stone Age society.
In many ways, what Alexander et al. are predicting is an acceleration of an already existing trend, and one I see happening regardless. Obviously for me, Peter Banks, it is relevant whether the AI timelines we are talking about are 2 years or 200, but either way it is on the horizon and I fully expect it to arrive eventually.
To give an impression of how I think the distribution of outcomes looks, here’s how I would assign probabilities of speedups by the end of 2030:
>1000x – 20%
1000–501x – 5%
500–101x – 1%
100–51x – 1%
50–26x – 3%
25–11x – 5%
10–6x – 5%
5–1x – 50%
<1x – 10%
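As a sanity check on my own buckets (labels and numbers copied from the list above; treating the combined mass at 501x and above as the radical-transformation scenario is my own reading):

```python
# My speedup buckets for the end of 2030, in percent (from the list above).
buckets = {
    ">1000x": 20, "1000-501x": 5, "500-101x": 1, "100-51x": 1,
    "50-26x": 3, "25-11x": 5, "10-6x": 5, "5-1x": 50, "<1x": 10,
}

total = sum(buckets.values())
# Probability mass on 501x and up, i.e. the radical-transformation scenarios.
radical = buckets[">1000x"] + buckets["1000-501x"]

print(total, radical)  # 100 25
```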
As you can see, I don’t assign a zero probability to transformational change—in fact, I estimate a 25% chance that Alexander et al. are directionally correct and that our society will be radically transformed in less than five years. Still, I believe the bulk of the distribution lies closer to a substantial acceleration of a half-millennium-long trend.
Why? Because I worry we’ll run into one of two “problems”: first, model collapse; second, regulatory strangulation.
Model Collapse
Model collapse is a phenomenon in AI research where a model, in short, becomes inbred: trained on AI-generated data, it grows increasingly corrupted with each generation.
To get what I mean, I’m going to—mildly—dork out on AI for a second: you can basically think of all unsupervised learning as compressing information down into a minimal representation, where loss and size are traded off against each other.
Take LLMs, for example—which seem to be the foundation Alexander et al. expect for the creation of ASI. What these models are functionally doing is learning the underlying distribution of human text in order to replicate it. If you’ve ever heard people say things like, “they’re just next-token predictors”, this is what they’re referring to.
The whole process is remarkably simple, but it has enormous returns to scale. It turns out these models can develop a very sophisticated representation of the “world” just by compressing our text. However, when you train on AI-generated data, you’re essentially getting a photocopy of the true distribution—and like all photocopies, information is lost. Repeat the process enough times and you’re left with something that only vaguely resembles the original data.
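The photocopy dynamic is easy to demonstrate at toy scale. The sketch below is my own illustration (nothing like how frontier labs actually train): each generation “trains” by estimating the empirical token distribution of the previous generation’s corpus, then “generates” a fresh corpus by sampling from it. Any rare token that misses one generation’s sample is gone for good, so the vocabulary can only shrink:

```python
import random
from collections import Counter

def train_and_generate(corpus, corpus_size):
    """'Train' on a corpus by estimating its empirical token distribution,
    then 'generate' a new corpus by sampling from that distribution."""
    counts = Counter(corpus)
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=corpus_size)

random.seed(42)
# A toy "human" distribution: a few common words plus a long tail of rare ones.
vocab = ["the", "of", "and"] + [f"rare_{i}" for i in range(50)]
weights = [300, 200, 100] + [1] * 50
corpus = random.choices(vocab, weights=weights, k=2000)

support_sizes = [len(set(corpus))]
for generation in range(15):
    corpus = train_and_generate(corpus, 2000)
    support_sizes.append(len(set(corpus)))

# A token absent from one generation's corpus can never be sampled again,
# so the vocabulary size is monotonically non-increasing across generations.
print(support_sizes)
```

This is the photocopy effect in miniature: the tails of the distribution vanish first, long before the common tokens look wrong.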
At this point, we’ve mostly run out of new text. Large companies can, of course, still pay to create high-quality training material, and humans are constantly generating new content (OpenAI, can I get a check?), but the era of vast, untapped data vaults is over. People seem less concerned about this problem than they used to be, but I sincerely believe it will become more acute.
If for no other reason than this: there’s a “soft” version of model collapse where we keep building bigger, more robust models—eventually capable of perfectly replicating the world as humans have represented it in text—but that representation is itself flawed or limited in important ways.
Regulatory Strangulation
Let’s say the intelligence explosion is possible and there’s no risk of model collapse. It still seems very likely that we would intentionally slow AI development—simply because it’s terrifying. We already constrain economic development for sillier reasons than fear of building a machine demon, and long periods of human history have been marked by the active suppression of technological advancement (e.g., the trial of Galileo). The largest barrier to the AI revolution likely won’t be technical—it will be an astronomical level of human resistance.
The tech behind GPUs is so advanced, and the energy demands of large data centers so massive, that it would be trivial to bottleneck progress by simply banning further development. After all, we’ve mostly succeeded in banning human genetic modification, and that process is much cheaper and easier to hide. It’s worth remembering that, at least for now, the CEOs and scientists behind AI research are just regular humans in fleshsuits—they fear the hammer of the state as much as any other patrician.
I sincerely think this is the dynamic most CS people I know tend to miss. There’s a chasm between “breaking the law” by scraping YouTube or Reddit data without permission and doing something that invites real, physical reprisal. The world—for now—is still dominated by atoms, not bits.
This is all without even mentioning the energy bottleneck. We may simply fail to build enough power plants to operate the servers required to pay the inference costs of ASI. But I think the Chinese are more than capable of solving this problem.
As something of an aside, this is why Alexander et al. rely on the competition with China: they need a reason for us to hand over responsibility to AI quickly—otherwise, the 2027 timeline is impossible. This, to my mind, is exactly the dynamic we must avoid.
I’m pretty openly a nationalist. I think it’s the duty of all people to support their nation—if not their state. However, the dose makes the poison: nationalism easily turns toxic if it isn’t cut with other virtues. The only way the government could convince us to hand over our agency as a species is by obscuring it behind patriotism. We simply cannot repeat the crises of the 20th century—with its mob insanity and mad dictatorships. That means avoiding World War III, and the hysteria it would unleash, must be our highest social priority. Everything else is downstream of that.
Wrapping Up
Even if technological progress in AI ground to a halt today, what already exists will still profoundly alter millions of lives by automating jobs out of existence. So far, it’s unclear whether this is happening already.
Though anyone claiming to fully understand the macroeconomy is trying to sell you something.
The tools that already exist for text generation and processing—setting aside advances in voice acting and image generation—are frankly absurd. Right now in the Bay Area, hundreds of male friend groups are wiring Chatty into Python wrappers and installing it into every corner of their lives.
So in summary:
Please consider becoming a paid subscriber. For $5/month or $30/year, you'll gain access to all my paywalled articles and earn a permanent spot in my heart.
By Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean
This is a major problem I have with their prediction—tech people tend to underestimate the complexity and inertia of human social systems.
I think people often greatly underestimate the physical bottleneck. Whether it’s the immediate compute and energy problems or the down-the-line logistics of building a robo army, there is only so much stuff. Scarcity always comes.
And until the robo army emerges, humanity will always hold a trump card over AI.
All that being said, AI 2027 was a fantastically interesting read.
“chills in the way I imagine a millenarian feels while reading The Book of Revelations.”
This is the most common mistake out there—it’s the book of Revelation, singular, there’s only one Revelation to John on Patmos!