Science Fiction: The emotional API

I finished reading Seveneves last month. Neal Stephenson spends 880 pages describing, in meticulous technical detail, how humanity might survive the moon exploding. The orbital mechanics are precise. The genetic engineering is plausible. The social dynamics are carefully reasoned. But what stayed with me wasn’t the hard science. It was the feeling of watching the moon break apart in the night sky, knowing you have two years before the fragments rain down and sterilize Earth. That specific weight in your chest when you realize the world your children will inherit has suddenly, irrevocably changed.

Stephenson got the feeling exactly right. Not the technology – we’ll probably never need to arklet-swarm our way off a dying Earth. But that feeling of staring at an inevitable catastrophe approaching in slow motion? We’re living that now with climate change. The emotional preparation was more valuable than any technical blueprint.

This is what I’ve come to think science fiction really does: it’s not prediction or even imagination. It’s emotional documentation for futures we haven’t lived yet.

Consider the emotional journey of every major technological shift. Before we get the technology, we get the feelings. Science fiction doesn’t predict the future so much as it pre-processes the emotional responses we’ll have to that future.

Frankenstein wasn’t really about reanimating corpses. It was about the feeling of creating something that might destroy you. That feeling – creator’s horror at their creation – turned out to be incredibly useful emotional preparation for everything from nuclear weapons to social media algorithms.

We needed to feel that feeling, to name it and examine it and build cultural antibodies against it, before we built technologies that could actually instantiate it.

This is what SF does best: it’s an emotional API for technological change. It gives us standardized ways to access and process feelings about futures that don’t exist yet.

Science fiction creates a commons of shared anxieties and aspirations. When we say something is “Orwellian” or “Kafkaesque” or “like Black Mirror,” we’re not making technical comparisons. We’re invoking emotional templates.

These templates are incredibly efficient. I can say “this feels very Minority Report” about a predictive policing system, and you know I’m not talking about the specific technical implementation. I’m talking about the cocktail of feelings: the unease of being judged for crimes not yet committed, the helplessness against algorithmic certainty, the vertigo of time loops and predetermined futures.

We’ve pre-loaded these emotional responses through fiction. By the time the actual technology arrives, we have a vocabulary for discussing how it makes us feel.
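If you want to take the metaphor embarrassingly literally, the “API” might look something like this. A purely playful sketch – every name here is invented for illustration, and there is, thankfully, no real library behind it:

```typescript
// A literal-minded reading of the essay's metaphor. All names are
// hypothetical; this models the idea, not any real system.
type Feeling = "unease" | "helplessness" | "vertigo" | "wonder" | "dread";

interface EmotionalTemplate {
  story: string;       // the story that got the details wrong...
  feelings: Feeling[]; // ...but the feelings right
}

// The commons of shared anxieties, pre-loaded by fiction.
const templates: Record<string, EmotionalTemplate> = {
  "Minority Report": {
    story: "predictive policing, rehearsed decades early",
    feelings: ["unease", "helplessness", "vertigo"],
  },
  "Orwellian": {
    story: "total surveillance, felt before it was built",
    feelings: ["dread", "helplessness"],
  },
};

// Saying "this feels very X" is just a cached lookup against fiction.
function thisFeelsVery(name: string): Feeling[] {
  return templates[name]?.feelings ?? [];
}

thisFeelsVery("Minority Report"); // ["unease", "helplessness", "vertigo"]
```

The lookup is the whole point: one proper noun retrieves an entire bundle of pre-processed feeling.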

The most influential sci-fi is often wrong about everything except how it feels. Star Trek got virtually every technical detail wrong about the future. But it got the social dynamics of diverse teams working together on complex problems exactly right. It prepared us emotionally for globalized, multicultural workplaces decades before they became common.

William Gibson’s Neuromancer is laughably wrong about how cyberspace actually works. But the feeling of disembodiment, of consciousness untethered from flesh, of identity fluid and multiple – those feelings turned out to be perfect preparation for social media and online identity.

The technical failures don’t matter. The emotional accuracy does.

There’s a weird discomfort when reality catches up to sci-fi. Not because the technology matches – it never quite does – but because we’re finally living the feelings we’ve been rehearsing.

The current AI moment is uncanny precisely because we’ve been emotionally preparing for it for so long. We’ve read the stories, watched the movies, internalized the feelings. Now we’re living them, and it feels both exactly right and terribly wrong.

It’s right because the emotional beats are playing out as expected: the wonder, the fear, the existential vertigo. It’s wrong because the actual experience is both more mundane and more strange than any story prepared us for.

Sci-fi also functions as a kind of cultural exposure therapy. We experience apocalypses, dystopias, and existential threats in small, safe doses. We build up emotional resistance. This isn’t always healthy. Black Mirror has probably created more technophobia than techno-wisdom. But at its best, SF helps us process civilizational-scale anxieties before they fully manifest.

Climate fiction prepared us emotionally for climate change in ways that scientific papers couldn’t. Not by predicting specific outcomes, but by letting us feel the grief, anger, and determination in advance. By the time the fires and floods arrived, we’d already done some of the emotional work.

But science fiction also traps us. We expect the future to look like the futures we’ve seen. We’re disappointed when it doesn’t arrive with the right aesthetic.

Where are the flying cars? The silver jumpsuits? The gleaming cities? We got the surveillance state, but it looks like strip malls and data centers, not chrome towers and blinking lights. We got the corporate dystopia, but it has better PR.

This aesthetic disappointment blinds us to the actual futures we’re living. We’re so busy looking for the visual markers of “the future” that we miss the profound shifts happening beneath the surface.

Reading Seveneves during a time of actual slow-motion catastrophe gave me a new appreciation for what science fiction writers are doing. They’re not futurists or prophets. They’re something more like emotional architects, building spaces where we can safely experience feelings we’ll need later.

Every generation needs its own emotional documentation. The feelings we need to process about AI aren’t the same ones Asimov explored with robots. The feelings we need about genetic engineering aren’t the ones Huxley gave us in Brave New World.

We need stories about what it feels like to watch an AI develop capabilities faster than we can understand them. What it feels like to realize your own cognition is the slow legacy system. What it feels like to love something that might not be conscious, or to discover that consciousness might not matter the way we thought it did.

We’re living through a hinge moment in history. The next decades will bring changes as profound as the Industrial Revolution. We need science fiction more than ever, not for its predictions but for its feelings.

The moon probably won’t explode. But something else will – some certainty we’ve built our lives around will shatter just as dramatically. And when it does, we’ll reach for the emotional templates that SF has given us. We’ll say “this feels like…” and reference some story that got the details wrong but the feelings right.

Science fiction isn’t about predicting the future. It’s about feeling the future. It’s about building up the emotional infrastructure we’ll need to navigate whatever comes next. The stories we tell about impossible tomorrows are how we prepare our hearts for the impossible today that’s surely coming.

The future won’t look like we imagine. It never does. But if we’re lucky, we’ll have felt it all before.