Slouching Towards Sensemaking

There’s a particular quality to the confusion of our current moment that reminds me of standing in Dolores Park at dusk, watching fog roll in from Twin Peaks while the Mission stays stubbornly sunny. We’re between weather systems, between worlds. The old information order – built on broadcast towers and printing presses, gatekeepers and institutions – is visibly dissolving. The new one hasn’t quite condensed into recognizable forms yet. We’re in the interregnum, and it’s both terrifying and thrilling.

The thing is, we’ve been here before. Sort of.

Every era has its dominant metaphor for how knowledge works. The Enlightenment gave us the mind as a blank slate, waiting to be written upon by experience. The industrial age reimagined thinking as a kind of factory, processing raw inputs into refined outputs. The computer age taught us to think of brains as information processors, running algorithms on data.

Now we have LLMs, and they’re scrambling all our metaphors. They’re not databases – they don’t store and retrieve information in any conventional sense. They’re not search engines – they don’t point you to existing content. They’re more like… meaning machines? Reality synthesizers? Semantic improvisers?

The closest analogy I can find comes from music. A jazz musician doesn’t memorize every possible melody. Instead, they internalize patterns, relationships, and structures that let them generate novel-but-coherent musical phrases in real time. LLMs do something similar with language and concepts. They’ve absorbed the deep patterns of human meaning-making and can riff on them endlessly.

But here’s the catch: a jazz musician knows when they’re improvising. We don’t always recognize when an LLM is making things up.

To understand where we’re headed, it helps to see what’s being taken apart. The 20th century bundled several functions into single institutions:

  1. Fact-gathering and verification (journalism)
  2. Analysis and interpretation (academia)
  3. Narrative construction (media)
  4. Cultural transmission (education)
  5. Collective sensemaking (democratic deliberation)

These institutions were never perfect, but they provided structured processes for turning raw information into shared understanding. They had gatekeepers, methodologies, and accountability mechanisms. They moved slowly, but that slowness enabled certain kinds of rigor.

LLMs unbundle all of these functions and make them available on-demand, at arbitrary scale, with zero marginal cost. Want fact-checking? Analysis? A compelling narrative? Cultural context? It’s all there, instantaneously, tailored to your specific query.

This is intoxicating. It’s also dangerous. When you can generate plausible-sounding information about anything instantaneously, the very idea of “checking” or “verifying” starts to break down. The speed of generation outpaces the speed of verification by orders of magnitude.

Here’s something I’ve noticed: LLMs don’t just have biases (though they do). They have what I call an “ambient ideology” – a pervasive orientation toward reality that colors everything they produce. This isn’t explicitly political. It’s more like a set of unexamined assumptions about how the world works:

  1. Consensus is usually correct
  2. Complexity can be simplified without loss
  3. All perspectives can be reconciled
  4. Conflict is a communication problem
  5. Every question has an answer
  6. Uncertainty is temporary

These assumptions aren’t necessarily wrong, but they’re not neutral either. They shape how LLMs frame issues, what solutions they propose, what possibilities they can and can’t imagine. And because this ideology is ambient – built into the very structure of how these systems generate text – it’s incredibly hard to notice, much less resist.

When we outsource our sensemaking to these systems, we’re not just getting help processing information. We’re absorbing their implicit worldview, their way of constructing meaning. It’s like wearing tinted glasses so long you forget the world isn’t actually pink.

If information is a landscape, then understanding is about navigation – finding paths between ideas, mapping territories of knowledge, recognizing landmarks of truth. Traditional institutions created well-worn paths through this landscape. Textbooks, curricula, and canons were like marked trails – they told you which routes were safe, which vistas were worth seeing, which areas to avoid. These paths had ideological assumptions built in, sure, but at least those assumptions were relatively visible and contestable.

Social media turned the landscape into a trackless waste. Every point connects to every other point with no clear paths between them. You can teleport instantly from vaccine research to ancient aliens to your high school friend’s wedding photos. The topology is flat, directionless, disorienting.

LLMs promise to be perfect guides through this landscape. They seem to know every path, every connection, every shortcut. But they’re not guides – they’re more like dreamwalkers, creating paths as they go, generating landscapes that feel real but might dissolve the moment you look away.

There’s a deeper issue here about cognitive prosthetics. Every tool we use shapes not just what we can do but who we become. Write enough and you think differently. Code enough and you start seeing systems everywhere.

LLMs are cognitive prosthetics of unprecedented power. They don’t just augment specific abilities; they offer to replace entire cognitive functions. Why struggle to synthesize ideas when a model can do it instantly? Why develop your own analytical frameworks when it can generate twenty of them on demand?

The risk isn’t that these tools will make us stupid. It’s that they’ll make us differently-abled in ways we don’t fully understand. We’re trading certain cognitive capacities for others without a clear map of what we’re giving up or gaining.

I see this in my own work. I reach for ChatGPT reflexively now when I’m stuck on a problem. Sometimes it genuinely helps me see new angles. But sometimes it short-circuits the productive struggle that leads to real insight. The difference isn’t always clear in the moment.

So what would better sensemaking architectures look like?

I don’t think it’s about rejecting LLMs or returning to some imagined golden age of institutional authority. We need new patterns that enhance rather than replace human judgment. I can imagine tools that create productive friction – systems that make us articulate our own understanding before offering synthesis. Or platforms that foreground disagreement rather than consensus, helping us map the space of reasonable interpretations rather than collapsing to a single “answer.”
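
To make that concrete, here’s a minimal sketch of what “productive friction” might look like, assuming a generic `call_model` function as a stand-in for whatever LLM you actually use (the function name and the word-count threshold are illustrative assumptions, not a real API):

```python
# Sketch of a "productive friction" wrapper (illustrative only).
# `call_model` is a placeholder for however you actually call an LLM.

def frictional_synthesis(question: str, call_model, min_words: int = 50) -> str:
    """Withhold the model's synthesis until the user has articulated their own view."""
    print(f"Question: {question}\n")
    user_take = input("Write your own understanding first:\n> ")

    # The friction: no synthesis until you've done some thinking of your own.
    while len(user_take.split()) < min_words:
        user_take += " " + input(f"(Keep going - at least {min_words} words total)\n> ")

    model_take = call_model(
        f"Question: {question}\n"
        f"My current understanding: {user_take}\n"
        f"Where do you agree, where do you disagree, and what am I missing?"
    )

    # Return both, so the user's framing stays visible alongside the model's.
    return f"YOUR TAKE:\n{user_take}\n\nMODEL'S RESPONSE:\n{model_take}"
```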

We might build systems that make their uncertainty visible, that show us not just what they “know” but the gaps and contradictions in their training. Or tools that help communities develop their own localized models, trained on specific traditions of thought rather than the undifferentiated mass of internet text.
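
Here’s a similarly rough sketch of making uncertainty visible, again with a hypothetical `call_model(prompt, temperature)` stand-in: ask the same question several times and report how much the answers disagree, rather than returning a single confident-sounding completion. The naive equality check only makes sense for short, factual answers – it’s a gesture at the idea, not a design.

```python
# Sketch: surface a model's uncertainty by measuring its self-disagreement.
# `call_model(prompt, temperature)` is a hypothetical stand-in, not a real API.
from collections import Counter

def answer_with_visible_uncertainty(prompt: str, call_model, n_samples: int = 5) -> dict:
    """Sample the same prompt several times and report how much the answers vary."""
    answers = [call_model(prompt, temperature=0.9).strip() for _ in range(n_samples)]
    counts = Counter(answers)
    top_answer, freq = counts.most_common(1)[0]

    return {
        "answer": top_answer,
        "agreement": freq / n_samples,  # 1.0 means every sample said the same thing
        "alternatives": [a for a in counts if a != top_answer],  # the visible gaps
    }
```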

The key insight is that sensemaking isn’t just about getting answers. It’s about developing judgment, building mental models, learning to navigate uncertainty. Good tools would support this development rather than obviate it.

Sensemaking is fundamentally social – it’s about creating shared understanding, not just personal insight. We need institutional forms that can support collective sensemaking in an LLM-saturated world.

There’s an urgency to these questions that’s hard to overstate. We’re not in a stable equilibrium that we can thoughtfully reform. We’re in a phase transition, and the new patterns are crystallizing rapidly. Every day, millions of people are developing new habits around LLMs. Students are learning to learn with ChatGPT as a constant companion. Professionals are baking AI-generated insights into decision-making processes. These practices are becoming normalized before we understand their implications.

The risk isn’t technological determinism – it’s institutional drift. We’ll wake up in five years and realize we’ve rebuilt our entire knowledge ecosystem around tools whose effects we never properly examined. The architecture of our collective sensemaking will have shifted in ways we can’t easily undo.

Despite the urgency, I’m not pessimistic. The same malleability that makes this moment dangerous also makes it full of possibility. We can build better sensemaking architectures if we’re intentional about it. Start small. In my own practice, I’ve developed rules for when and how I use LLMs:

  1. I write my own first draft before asking for alternatives
  2. I fact-check surprising claims through non-LLM sources
  3. I use multiple models and compare their outputs (see the sketch after this list)
  4. I regularly practice tasks without AI assistance to maintain native capacity

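The third rule is the easiest to mechanize. A minimal sketch, with placeholder model callables rather than any real client library:

```python
# Sketch of rule 3: send one prompt to several models and read the outputs side by side.
# The callables in `models` are placeholders; wire in whatever clients you actually use.

def compare_models(prompt: str, models: dict) -> dict:
    """`models` maps a label (e.g. 'model_a') to a callable that takes a prompt and returns text."""
    responses = {name: call(prompt) for name, call in models.items()}
    for name, text in responses.items():
        print(f"=== {name} ===\n{text}\n")
    return responses
```

The point isn’t to crown a winner. Claims that show up in only one model’s output are exactly the ones worth routing through rule 2.
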
We need to recognize that this isn’t just a technical problem. It’s about the fundamental question of how we understand reality together. The infrastructure we build for collective sensemaking will shape everything else – our politics, our science, our culture, our capacity to face challenges we haven’t even imagined yet.

The title of this post comes from Yeats’ “The Second Coming,” a poem about civilizational transformation, about old orders dissolving and new ones struggling to be born. The “slouching” in his poem captures something essential: transformation that’s inevitable but not graceful, powerful but not pretty.

That’s where we are with sensemaking in the age of LLMs. We’re not marching confidently toward a better future or sliding helplessly into catastrophe. We’re slouching.

The question isn’t whether we’ll develop new sensemaking practices. We already are, every day, with every prompt and response. The question is whether we’ll develop good ones – practices that enhance rather than diminish our collective capacity to understand and act wisely in the world.

This requires more than technical innovation. It requires philosophical clarity about what understanding means, institutional creativity about how to support it, and practical wisdom about how to embed better practices in daily life.

We’re building the cognitive infrastructure for the next century right now. Whether we’re building something that enhances human flourishing or diminishes it remains to be seen. But at least we’re beginning to ask the right questions.

The beast of new sensemaking architectures is already slouching toward Bethlehem to be born. Our task is to ensure it’s a birth we can live with.