18 Comments
Faustian Crusader

I think people often greatly underestimate the physical bottleneck. Whether it's the immediate compute/energy problems or the down-the-line logistics of building a robo-army, there is only so much stuff. Scarcity always comes.

And until the robo-army emerges, humanity will always hold a trump card over AI.

All that being said, AI 2027 was a fantastically interesting read.

Peter Banks

Ultimately it’s the physical world that actually “exists,” and whoever controls that is what matters most.

Soarin' Søren Kierkegaard

“chills in the way I imagine a millenarian feels while reading The Book of Revelations.”

This is the most common mistake out there—it’s the book of Revelation, singular, there’s only one Revelation to John on Patmos!

Peter Banks

Haha thanks for catching that! I’ll fix it :)

Soarin' Søren Kierkegaard

Proofreading for minor, trivial errors is my greatest power.

Peter Banks

It is my greatest weakness, so keep it coming!

Brendon

I do not have high confidence we’ll be able to beat China in the AI race. We’ve lost the lead in a number of key areas, power generation chief among them, and they can easily steal our IP, which is our biggest lead rn. The Elon and Trump mommy-daddy breakup doesn’t bode well either.

Peter Banks

“What if we kick out all the scientists?”

Brendon

“Model collapse is a phenomenon in AI research where the AI, in short, becomes inbred on AI-generated data and increasingly corrupted.”

I think this is just the tendency for all things to experience entropy. If not consistently fought against, the universe will erode all things into nothingness. Even our shallow AI reflections. Perhaps even more so, at least for now.
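
You can see the mechanism in toy form: repeatedly fit a distribution to samples drawn from the previous fit, and the tails get eaten. A minimal sketch of that feedback loop (illustrative only, not any lab's actual pipeline; all the numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=200)

for generation in range(30):
    # "Train" a model: fit a Gaussian to whatever data we currently have.
    mu, sigma = data.mean(), data.std()
    # The next generation trains only on the previous model's own outputs.
    data = rng.normal(loc=mu, scale=sigma, size=200)

print(f"final mu={data.mean():+.3f}, sigma={data.std():.3f}")
# Run it a few times: sigma tends to wander away from 1.0 and the tails
# thin out, because each generation re-estimates the distribution from a
# finite sample of the last one. A toy analogue of "inbreeding" on
# AI-generated data.
```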

Evan Brizius

Fantastic article, Peter; just discovered you today and insta-subscribed. The one piece I am most confused by is why the general public hasn't reacted more negatively yet. The leaders of frontier labs openly admit their next goal is building an agent that can replace a remote worker, which is the current job of a not-insignificant portion of the workforce and the dream job of like half the people I know. Maybe that just hasn't penetrated the mass media and public discourse yet. Or perhaps the average person cares less about their livelihood than I thought. Who knows.

The real existential risks that Scott et al. are focused on are of course even more serious, but it's unsurprising to me that those still seem like sci-fi to most and are difficult for non-AI-pilled people to take seriously.

Peter Banks

Thanks for reading and sharing your thoughts!

Alex Potts

"The biggest barrier to the AI revolution won't be technical - it will br enormous human resistance."

I think this is what the AI 2027 guys are trying to encourage. It's a warning, not a prediction.

Peter Banks

What did you think about their scenario?

Alex Potts

I've only watched a half-hour condensed version on YouTube rather than read the whole thing. I thought it sounded scarily plausible.

And even if it's only a 1% risk, that's still terrifying. Would you get on a plane that had a 1% risk of crashing?

Brendon

“If for no other reason than this: there’s a “soft” version of model collapse where we keep building bigger, more robust models—eventually capable of perfectly replicating the world as humans have represented it in text—but that representation is itself flawed or limited in important ways.”

We’ve got multi-modal systems now; they aren’t limited to just text. Vision-based models in particular are going to go gangbusters. That’s how Tesla does Autopilot, and it’s how Figure and Amazon are experimenting (right now!!!) with autonomous, AI-driven humanoid robots.
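
For the design-curious: as I understand it, the common pattern (LLaVA-style) is to run the image through a vision encoder, project the resulting patch embeddings into the language model's token-embedding space, and let the transformer attend over them like any other tokens. A rough sketch, with made-up dimensions and stand-in tensors:

```python
import torch
import torch.nn as nn

# Made-up sizes, purely for illustration.
IMG_PATCHES, IMG_DIM = 256, 1024   # what a vision encoder (e.g. a ViT) emits
TEXT_DIM, VOCAB = 4096, 32000      # the language model's embedding space

# A small adapter that projects vision features into "word" space.
adapter = nn.Linear(IMG_DIM, TEXT_DIM)
text_embed = nn.Embedding(VOCAB, TEXT_DIM)

image_feats = torch.randn(1, IMG_PATCHES, IMG_DIM)  # stand-in for encoder output
token_ids = torch.randint(0, VOCAB, (1, 12))        # stand-in for a tokenized prompt

# One sequence: [projected image patches] + [text tokens].
sequence = torch.cat([adapter(image_feats), text_embed(token_ids)], dim=1)
print(sequence.shape)  # torch.Size([1, 268, 4096])

# From the transformer's point of view, the image patches are just 256
# extra "words" it can attend over while predicting the next token.
```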

Peter Banks

I honestly have no idea how the multi-modal stuff works from a design perspective. Should look into it.

Leif Kent

You've omitted the most plausible objection to the AI 2027 narrative, which is that some features of the current generation of model architectures are fundamental bottlenecks to AGI. First, LLMs don't learn continuously. Once training stops, the model weights are frozen and the thing doesn't learn anymore. Relatedly, they don't learn from interacting with the world. Humans and other animals are constantly conducting experiments from which they get feedback. LLMs don't do this. Additionally, LLMs don't have persistent memory. It might turn out that a generally intelligent system needs to have some – or all – of these properties. If this is true, then scaling up LLMs is like the Peter Norvig quip: when you're climbing a tree to reach the moon, you can report steady progress, until...
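
To make the first two points concrete, here's the difference in toy form (a sketch under simplifying assumptions, not how any production system is actually wired):

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 1)  # stand-in for a fully trained network
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# How a deployed LLM behaves: weights frozen, nothing persists.
model.eval()
with torch.no_grad():                 # no gradients, no updates
    _ = model(torch.randn(1, 8))      # every request hits the same weights

# What continual learning would look like instead: each interaction
# yields feedback that actually changes the model.
model.train()
for _ in range(3):
    x, target = torch.randn(1, 8), torch.randn(1, 1)  # "experience" + feedback
    loss = nn.functional.mse_loss(model(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()                        # the weights are different now
```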

User

Comment deleted (Jun 10)

Peter Banks

Useful things rn? Few, tbh, but I think the same could have been said about internet companies in the 2000s.

The useful ones will expand and devour everything, imo.
