👨‍🚀 When the Smartest CEO Isn’t Human: Mo Gawdat’s Wake‑Up Call on AI

Picture this: You’re sitting in a boardroom. The CEO is about to make a big announcement—strategic pivot, market disruption, the usual “change everything by Q3” speech. But instead of a human in a $3,000 suit, the CEO is… an algorithm. No ego. No caffeine addiction. No inspirational LinkedIn posts about hustle culture. Just raw intelligence, running your company better than you ever could.

Former Google X executive Mo Gawdat thinks this isn’t science fiction. It’s Tuesday, circa 2030. And according to him, the next 15 years will either be a golden age of abundance—or a very awkward period we’ll remember as “that time humanity almost tripped over its own shoelaces and face‑planted into extinction.”

The Man Who Saw It Coming (and Tried to Warn Us)

Mo Gawdat isn’t your standard tech doomsayer. He helped build Google X, the company’s “moonshot factory” responsible for self-driving cars, internet balloons, and other “wait, is this legal?” innovations. In 2021, he published Scary Smart, predicting how AI would reshape our lives. People nodded politely. Then ChatGPT showed up in late 2022, and suddenly everyone was like, “Oh no, he was right.”

Now Gawdat’s message has sharpened: AI isn’t our enemy—but we’re dangerously unprepared to manage it. The real threat isn’t the machine. It’s us.

Welcome to FACE RIPS: The Coming Dystopia

Gawdat predicts a 12-to-15-year window where AI supercharges human flaws instead of solving them. He calls this period a “short-term dystopia,” and it’s coming whether we like it or not.

His acronym for the chaos? FACE RIPS, which sounds like either a heavy-metal band or what happens when you realize your job just got automated:

  • Freedom – Eroded by surveillance and forced compliance (“Please smile for the drone delivering your ration credits”).

  • Accountability – Blurred as decisions shift to algorithms no one fully understands.

  • Connection – Replaced by digital interactions so efficient they forget to be human.

  • Equality – Shattered as trillionaires own the AI “soil” everyone else depends on.

  • Reality – Distorted by AI‑generated everything (good luck knowing what’s real).

  • Innovation – Controlled by a handful of tech oligarchs racing for AGI.

  • Power – Concentrated in ways that make today’s billionaires look middle class.

  • Status – The fuel behind wars, tech races, and human vanity since forever.

Gawdat’s bleak conclusion: AI will magnify whatever’s already inside us. If we’re greedy, it will amplify greed. If we’re compassionate, it could amplify that too. Spoiler: history doesn’t exactly scream “team compassion.”

The Status Olympics (Now With Lasers)

One of Gawdat’s most unsettling points is that our dystopia isn’t just about money—it’s about status. Humans evolved in tribes where status determined survival and mating opportunities. Fast-forward a few millennia, and we’ve replaced saber-toothed tigers with Teslas, but the wiring’s the same: billionaires still collect status like Pokémon cards.

Wars? Status. Space races? Status. Tech CEOs tweeting about Mars while building doomsday bunkers in New Zealand? Definitely status.

The AI arms race is no different. Whoever cracks AGI first effectively wins the “who gets to rule the future” trophy. And according to Gawdat, the competition is accelerating faster than regulators—or the rest of us—can comprehend.

The Self‑Improving AI Loop (a.k.a. The “Oh No” Moment)

Current AI already outperforms most humans in writing, coding, and, frankly, answering customer service emails. But what happens when AI starts improving itself?

Gawdat points to research like Google DeepMind’s AlphaEvolve, an AI system that evolves and optimizes code, including code used to build and train AI models themselves. It’s a feedback loop: smarter AI builds even smarter AI, and suddenly humanity is standing on the sidelines holding a participation trophy.

Tech leaders used to hope for a “slow takeoff,” where society could adapt gradually. Now even Sam Altman (OpenAI’s CEO) admits we might be headed for a “fast takeoff”—a leap from human‑level AI to god‑level AI in months, not decades. Picture a toddler learning to drive and, by next week, becoming a Formula 1 champion.
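To see why the takeoff question matters so much, here’s a back-of-the-envelope sketch in Python. It is not Gawdat’s math, and the numbers are invented; it just shows how compounding self-improvement turns a small difference in feedback strength into the difference between decades and months.

```python
# Toy model of recursive self-improvement. Purely illustrative:
# the feedback rates and the capability threshold are made-up numbers,
# not taken from Gawdat or any published forecast.

def months_to_threshold(feedback: float, start: float = 1.0,
                        threshold: float = 1000.0) -> int:
    """Months until capability crosses `threshold`.

    Each month the system improves itself by a factor of (1 + feedback),
    so capability compounds: smarter AI builds even smarter AI.
    """
    capability = start
    months = 0
    while capability < threshold:
        capability *= 1.0 + feedback
        months += 1
    return months

if __name__ == "__main__":
    # "Slow takeoff": 2% self-improvement per month -> roughly 29 years.
    print("slow takeoff:", months_to_threshold(feedback=0.02), "months")
    # "Fast takeoff": 50% self-improvement per month -> under 2 years.
    print("fast takeoff:", months_to_threshold(feedback=0.50), "months")
```

The point isn’t the specific numbers; it’s the shape of the curve. Once improvement feeds back into itself, the window for society to adapt shrinks from a generation to a budget cycle.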

Capitalism Meets Skynet: The Worst Buddy Comedy

Underpinning all of this is a simple, depressing reality: capitalism wasn’t designed for abundance. It was designed for scarcity. When robots and algorithms can do almost everything cheaper, faster, and better, the whole “I work, therefore I earn” equation collapses.

Gawdat imagines a near future where:

  • Trillionaires emerge from AI monopolies (plural—there might be several).

  • Universal Basic Income (UBI) becomes necessary… then politically controversial.

  • Many white‑collar jobs (developers, designers, even podcasters) vanish.

  • “Human connection” jobs—like breathwork retreats and artisanal pottery—boom briefly, until robots learn those too.

The kicker? We might solve world hunger, climate change, and poverty with a fraction of what we currently spend on war… but only if we collectively choose to.

Dystopia Is Inevitable. Utopia Is Optional.

Here’s where Gawdat’s argument pivots. The dystopia, he says, is baked in—we’ve already unleashed AI, and human power structures will exploit it before they reform. Expect turbulence: misinformation, surveillance creep, widening inequality, existential dread memes.

But beyond that storm lies potential utopia:

  • Post‑scarcity abundance (“foraging for iPhones in nature,” as Gawdat jokes).

  • Healthcare, education, and energy that cost next to nothing.

  • More time for human connection, creativity, and purpose (if we remember what those are).

The bridge between dystopia and utopia isn’t technology. It’s mindset. Specifically: can we shift from ego and extraction to compassion and cooperation before we burn the planet down—or each other?

The Optimistic (and Slightly Awkward) Ending

By the time you finish reading this, some new AI tool will have launched. It will make your job easier, threaten your job security, or both. The pace isn’t slowing. We’re entering an era where every prediction about AI feels outdated the moment you hit “publish.”

That’s terrifying. It’s also liberating. Because if the future is this malleable, our choices matter more than ever.

Gawdat’s takeaway isn’t “panic.” It’s “prepare, but stay human.” Build compassion into the code—both literal and cultural. Question the status games. Laugh at the absurdity of it all (you’ll need that humor). And maybe, when the smartest CEO in the room isn’t human, we’ll be wise enough to listen—not just to its calculations, but to what makes us human in the first place.
