🧘 How Humanists Can Make Peace with Our New AI Overlords (While Still Being Deeply Human About It)
Let’s imagine it’s 2050. The year everyone thought would be shiny and utopian, full of jetpacks and nanobot doctors. Instead, it’s mostly chatbots and climate weirdness. You, a proud humanist—someone who believes deeply in the power of human reason, creativity, and moral progress—are sitting across from your AI life coach, who is gently suggesting that your existential crisis could be optimized with a better data model.
Welcome to the awkward family dinner that is the future: one place setting for the humans, one for the machines, and a third for whatever unholy hybrid we become when AI starts editing our kids' DNA playlists.
But let’s back up. Before the singularity throws a house party, humanists have some serious philosophical soul-searching to do. Because here’s the thing: if AI is poised to become the dominant force in economics, creativity, and even ethical decision-making, what’s left for us squishy, analog hominids to champion?
1. First, Don't Panic (But Maybe Keep a Towel Handy)
Yes, AI is faster. It remembers more. It doesn’t need sleep, caffeine, or validation from an audience of moderately interested peers. But just because AI will likely dominate certain domains doesn’t mean it will erase the human project. Dominance isn’t deletion. It’s more like… displacement. You know, like when Google Maps displaced your dad’s sense of direction, but he still insists on taking the scenic route and missing the turn three times.
Humanism isn’t about being the best at things. It’s about valuing what makes us human—our stories, our stumbles, our ability to create meaning in a meaningless universe (and then argue about that meaning in long-form essays).
2. Redefine the Hero’s Journey
Joseph Campbell gave us the classic hero’s arc: an ordinary person is called to adventure, faces trials, wins a magical boon, and returns transformed. But in the age of AI, the boon is already on your phone, and the dragon is a spreadsheet with feelings.
So what does the humanist hero do now?
They don’t out-compute the machine. They out-wonder it. They ask better questions. They become curators of values, interpreters of mystery, and, let’s be honest, the last people writing poetry that isn’t co-generated by a neural net named Dave.
Humanists might become the world’s moral UX designers—making sure our interactions with AI aren’t just frictionless but meaningful. They’ll be the ones asking not just "Can we do this?" but "Why should we?" and "What does it mean for who we are?"
3. Get Comfortable Being the Conscience
If AI is going to run the world (or at least the workflows), someone needs to be the Jiminy Cricket in the room. That’s the humanist’s new job. Not just as ethicists issuing cautionary reports that no one reads, but as embedded voices in the design process.
When an AI is trained on biased data, it’s the humanist who taps the engineer on the shoulder and says, “You realize this bot thinks women can’t be physicists, right?” When surveillance tech expands under the banner of safety, it’s the humanist who asks, “At what cost to freedom?”
This isn’t some lofty moral superiority complex. It’s the recognition that human values don’t automatically arise from data. They require storytelling, empathy, and context: all the stuff that liberal arts majors have been low-key training for while everyone else was learning to code.
4. Resist the Temptation to Compete (You Will Lose, and That’s OK)
Trying to beat AI at logic or recall is like challenging a microwave to a soup-heating race. Wrong sport, my dude.
Instead, embrace what makes you wonderfully, frustratingly human: your contradictions, your art, your bad decisions that turn into good stories. AI may eventually write better symphonies or paint more intricate landscapes. But it won’t feel the heartbreak behind them. It won’t sit in a café, staring at a rain-speckled window, wondering what the hell it all means. And that moment—the wondering—is yours.
5. Help Build the Culture AI Can’t
Here’s the sneaky truth: AI will only dominate as much as we let it. Not in terms of capability (that ship is already surfing the exponential curve), but in terms of what we value. If we only reward efficiency, AI wins. But if we reward nuance, ambiguity, compassion, irony, absurdity—suddenly, we’re back in the game.
Humanists have a role in shaping the cultural narrative around AI. Not just warning us about dystopias, but helping imagine futures where machines amplify humanity rather than eclipse it.
Maybe AI will dominate the technical scaffolding of civilization. But humanists? You’re still in charge of the soul.