🧭When AI Needs a Compass (and Maybe a Philosopher or Two)

The Existential Ambush

The thing about being human is that, until recently, it was the only gig in town. Sure, dolphins had social bonds and crows could solve puzzles, but when it came to reflective self-awareness and inventing microwavable popcorn, humans were undisputed champs. Now, AI is kicking down the clubhouse door, asking awkward questions like: "So, uh... what exactly makes you, you?"

Brendan McCord’s recent talk—a philosophical TED talk with a techno-twist—lands us squarely in the middle of that existential ambush. He compares this moment in history to those rare scientific upheavals that forced us to revise our own cosmic bio: Copernicus bumped us from the center of the universe, Darwin reminded us we're basically fancy mammals, and Einstein made us question whether time was even real. Now? AI is holding up a mirror—and we’re not entirely sure what we see.

"AI is forcing us to ask the question: to live a flourishing human life?"

From Tool to Environment

It’s a strange question, mostly because it seems to be missing a verb. But the sentiment is spot on: AI isn't just some shiny tool we wield. It’s becoming an environment we live in. The algorithms that feed us content, guide our purchases, and filter our job applications are no longer passive technologies. They shape our ends, not just our means.

This is new. The printing press never decided what should be printed. But as McCord notes, 20% of our discretionary time is now mediated by algorithms—algorithms that decide what we consume, what we care about, even what we aspire to.

The Three Camps

So, what now? Do we panic? Build a bunker? Join a philosophy reading group and hope for the best?

McCord sketches out three responses to AI’s rise, and—spoiler—two of them are basically wrong.

Camp 1: Existential Pessimists

These folks would like to hit the pause button on AI development. Think of them as the moral panic squad, convinced that if we don’t stop now, the machines will turn us into pet food. But as McCord points out, hitting pause on progress has never been a viable move. Imagine if we’d paused before Darwin. Or Einstein. Or antibiotics. We'd still be applying leeches to people.

Camp 2: Accelerationists

This crew wants to go full throttle, hands off the wheel, eyes on the Singularity. They're so enamored with AI’s potential that they forget to ask whether humans should still be in the driver’s seat—or even in the car.

Camp 3: Let’s Call It Sanity

This is McCord’s camp. It’s not about slamming the brakes or flooring the gas—it’s about grabbing the compass.

The Three-Step Plan

He offers a three-step plan for navigating the AI wilderness, and like all good plans, it comes with metaphorical props: a North Star, a compass, and a new kind of explorer.

Step 1: The North Star = Human Flourishing

Before we ask what AI can do, we should ask what we want it to do. Human flourishing—becoming the person you aspire to be—is the star we’re meant to follow. AI shouldn’t just serve efficiency or profit; it should serve autonomy, growth, and meaning.

Step 2: The Compass = Three Anchors

  • Autonomy: If we hand over too many of our decisions to machines—what to eat, read, watch, believe—we risk becoming passive passengers. Aristotle would’ve hated that.

  • Reason: It’s not just a bonus feature—it’s our primary evolutionary trick. John Stuart Mill believed in bombarding ourselves with opposing opinions because wrestling with ideas makes us stronger. If AI spoon-feeds us agreeable content, reason starts to atrophy like a muscle in zero gravity.

  • Decentralization: McCord brings in Alexis de Tocqueville here (because what’s a good AI talk without 19th-century French political philosophy?). Tocqueville admired America’s decentralized, community-driven ethos. The danger now? That AI becomes a centralizing force—a single brain guiding billions. That’s efficient. It’s also terrifying.

Step 3: The New Explorer = Philosophical Technologists

McCord wants to build a new kind of pipeline—from philosophy to code. Because right now, most AI systems are built by engineers optimizing for clicks, speed, and profit—not flourishing. Enter the Human-Centered AI Lab at Oxford: a place where thinkers and builders actually hang out in the same room.

We don’t need AI priests or prophets. We need hybrid humans—people fluent in machine learning and moral philosophy. Imagine a technologist who can debug both code and Kant. That’s who McCord wants shaping the future.

From the Shoreline

We’re not at the Terminator stage yet. But we are—let’s be honest—standing on a weird beach, looking out at a blinking, humming sea of possibility. The tide’s coming in, and it’s bringing questions with it.

McCord’s not offering easy answers. He’s offering a compass.

And in this moment, maybe that’s exactly what we need.

Here’s the video that inspired this blog post.
