The Vanishing Point

Jason Williamson · 14 min read

What if the data that bankrupts strategy is the data you literally can’t keep in mind?

The Space Where Memory Should Be

Mission Control, January 17, 2003. An engineer points at a blur on the screen—foam from the external tank striking Columbia’s wing. The observation is noted. Discussed. Filed.

Fifteen days later, Columbia breaks apart over Texas.

Between the seeing and the disaster lies a space that haunts me. Not a gap where someone failed to act, but something stranger—a void where memory failed to form. That gap—call it the Vanishing Point—repeats in every boardroom, every risk committee, every strategic review. The foam strike was witnessed, documented, entered into the record. But it never quite cohered into knowledge that could shift decisions, alter trajectories, save lives.

This is a ghost story, but not the kind you’re thinking of. It’s a story of things that were never properly born into organizational memory. Of knowledge that exists in a twilight state—neither fully present nor properly absent. Of what happens when institutions develop systematic ways of unseeing what they’ve seen.

The Presence of Absence

Let me pose a question that sounds simple until you really think about it: How do you know what you’ve forgotten?

Not in the trivial sense—we all know the frustration of a name on the tip of the tongue, the forgotten reason we entered a room. But in the deeper sense: How does an organization know what it no longer knows?

The French philosopher Paul Ricoeur spent years wrestling with this paradox. Writing about individual memory, he gives us a language we can scale up to institutions. Memory, he argued, is fundamentally haunted by what he called “the presence of absence.” We don’t simply forget things—we carry within us the ghostly impression of what we’ve forgotten, a kind of negative space that shapes us as surely as what we remember.

Think about your own experience. Sometimes you’ll have a nagging feeling that you’re forgetting something important. Not a specific thing—if you knew what it was, you wouldn’t have forgotten it. But your mind detects its own lacuna, senses the void where knowledge should be. Ricoeur called this the “aporia of memory”—the impossible position of being aware of an absence without being able to fill it.

Now scale this up to an organization of thousands, with decades of history, terabytes of data, generations of employees. What voids is it carrying? What absences make their presence felt in every meeting, every decision, every strategic pivot?

The unsettling answer: it has no way of knowing.

An individual brain is a unified system. It can sometimes detect its own gaps, feel the shape of its own forgetting.

But an organization’s memory is distributed across minds, systems, processes. When it forgets, no single component feels the absence. The void is shared, dispersed, invisible to any particular observer.

The void had gravity.

This is what happened with Columbia. NASA didn’t forget about foam strikes in the way you forget a phone number. The information persisted—in databases, in reports, in the minds of individual engineers. But it failed to cohere into institutional memory, failed to achieve what we might call “memorial weight.” It existed in a strange state: known but not remembered, documented but not alive in the organizational mind.

Consider Wells Fargo, 2016. For years, employees had been opening unauthorized accounts. The practice was documented in ethics hotline calls, noted in exit interviews, visible in data anomalies. But this knowledge couldn’t stick—a textbook case of motivated forgetting. It violated the narrative too fundamentally. Each signal dissolved before it could form a pattern.

The absence shaped everything. Every sales meeting where quotas went unquestioned. Every audit that didn’t probe the right questions. Every executive presentation that showed only growth. The knowledge that wasn’t there bent decisions around its gravitational pull.

The Antimemetic Organization

Why it matters: Some knowledge literally cannot survive in your organization’s ecosystem—and you’ll never know what you’re missing.

In 2008, a science fiction writer named Sam Hughes introduced a concept that’s been crawling around in my brain ever since. He imagined “antimemes”—ideas that resist being known, concepts that by their very nature evade memory and communication. Not secrets that someone hides, but knowledge that hides itself.

Hughes was writing horror fiction, imagining entities so alien that human minds couldn’t hold them. But transpose this to organizational life, and something clicks. Organizations are constantly generating knowledge that seems to evaporate on contact with collective consciousness. Not because anyone suppresses it, but because certain kinds of information are fundamentally incompatible with how we’ve built our systems for knowing.

Think about the architecture of organizational memory. We’ve designed it for efficiency, for clarity, for actionability. We build dashboards that show green lights and red lights. We write reports with executive summaries and clear recommendations. We create systems that surface signals and filter noise.

But what about knowledge that doesn’t fit this architecture? What about the foam strike that registers as neither green nor red but some unsettling shade of grey? What about the pattern that only emerges across silos, visible to no single dashboard? What about the risk that manifests not as a spike but as a barely perceptible drift?

This is antimemetic knowledge—information that organizational immune systems reject not because it’s false but because it’s indigestible. Let me break down why certain knowledge simply cannot stick:

Narrative Compression: Organizations, like individuals, are storytelling machines. We make sense of complexity by fitting it into stories—rising action, conflict, resolution. But some knowledge is irreducibly particular, refusing to conform to narrative shapes. The seventy-nine foam strikes before Columbia didn’t tell a story. They were just data points, scattered across time, each one unique in its specifics, together forming a pattern too subtle for story.

Category Dirt: Mary Douglas taught us that human societies organize reality through classification, and that “dirt” is simply “matter out of place”—stuff that doesn’t fit our categories. Antimemetic knowledge is cognitive dirt. The foam strikes were neither “safety incidents” (no one was hurt) nor “normal operations” (something did break off). They existed in the liminal space between categories, making them almost impossible to process through normal channels.

Orphaned Ownership: Maurice Halbwachs showed us that memory is never individual—it’s always collective, always social. We remember through others, with others, as others. But antimemetic knowledge often lacks a natural constituency. Who “owns” the foam strike issue? Safety? Engineering? Operations? When knowledge belongs to everyone and no one, it belongs to the void.

Motivated Forgetting: This is perhaps the most insidious quality. Organizations, like individuals, unconsciously forget information that would be psychologically costly to remember. For NASA, truly remembering the foam strike pattern would have meant questioning the entire Shuttle program’s safety assumptions. The knowledge wasn’t suppressed; it simply failed to achieve the activation energy required to overcome institutional inertia.

The stakes are not merely operational. They’re moral.

Here’s what makes this genuinely unsettling: antimemetic knowledge isn’t rare. It’s everywhere. Every organization is swimming in information that its cognitive architecture cannot metabolize. This knowledge exists. It’s in your systems right now. But it might as well be written in invisible ink, because your organization literally cannot see it. Not won’t see it—can’t see it.

The Recursive Abyss

Why it matters: Your organization doesn’t just forget—it forgets that it forgets, creating a blindness to its own blindness that compounds exponentially.

But here’s where it gets truly vertiginous. Organizations don’t just forget. They forget that they forget. They lose track of their own losing track.

This is the phenomenon I call recursive forgetting, and it’s what turns simple amnesia into pathology. It works like this:

Level 1: Primary forgetting. An organization fails to retain some piece of knowledge. The foam strikes are noted but don’t cohere into institutional memory.

Level 2: Secondary forgetting. The organization forgets that it once knew this. The fact that foam strikes were ever a concern fades from awareness.

Level 3: Tertiary forgetting. The organization forgets that it’s capable of this kind of forgetting. It loses awareness of its own memorial limitations.

Each level compounds the problem exponentially. Once an organization reaches Level 3, it exists in what I can only describe as a state of memorial unconsciousness—unaware of its own unawareness, blind to its own blindness.
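
If it helps to see the mechanism rather than just name it, here is a toy simulation. It is my own sketch: the class, the attrition parameters, and the numbers are invented for illustration, not drawn from any real organization. Facts decay with each employee generation, and so does the ledger of what was lost:

```python
import random

random.seed(7)  # reproducible toy run

class OrgMemory:
    """Toy model of recursive forgetting across employee generations."""

    def __init__(self, facts):
        facts = set(facts)
        self.original = facts        # ground truth: visible to us as modelers, never to the org
        self.facts = set(facts)      # what the organization currently knows
        self.known_losses = set()    # what it can still name as lost

    def generation(self, attrition=0.3, meta_attrition=0.5):
        # Level 1: primary forgetting. Some facts simply drop out.
        lost = {f for f in self.facts if random.random() < attrition}
        self.facts -= lost
        self.known_losses |= lost    # briefly, the loss itself is known
        # Level 2: secondary forgetting. The record of the loss decays too.
        self.known_losses = {f for f in self.known_losses
                             if random.random() >= meta_attrition}

    def invisible_losses(self):
        # Level 3: gone, and gone from the ledger of what is gone.
        return self.original - self.facts - self.known_losses

org = OrgMemory([f"lesson-{i}" for i in range(20)])
for year in range(1, 6):
    org.generation()
    print(year, len(org.facts), len(org.known_losses), len(org.invisible_losses()))
```

Run it and the last column, the losses the organization can no longer even name, grows year over year, while from the inside nothing appears to be missing at all.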

I spoke with a consultant who specializes in organizational learning. She told me about a client, RegionalBank (a mid-tier US retail bank; the name is changed for confidentiality), that had weathered the 2008 crisis relatively well thanks to conservative risk management. By 2015, they were trying to understand why they were so much more risk-averse than their competitors. They hired her to investigate their “cultural risk aversion problem.”

“What I found was recursive forgetting in action. They had forgotten the specific close calls that led to their risk protocols. They had forgotten that these protocols were responses to close calls. They had forgotten that they had ever been anything other than conservative. They had forgotten that their conservatism was learned rather than inherent.”

“They were trying to fix what had saved them. But they couldn’t see that because they’d forgotten the forgetting.”

This is the nightmare scenario of organizational memory: not just amnesia but anosognosia—the condition where patients can’t recognize their own cognitive deficits. The organization believes its memory is functioning normally because it has no memory of it functioning differently.

The psychologist Daniel Schacter identified seven “sins” of memory—transience, absent-mindedness, blocking, misattribution, suggestibility, bias, and persistence. But recursive forgetting might be the eighth and deadliest sin: meta-forgetting, losing the ability to track what you’ve lost.

How can an organization learn from mistakes it doesn’t remember making? The answer is: it can’t. And worse—it doesn’t know that it can’t.

The Architecture of Absence

To understand how this happens, we need to examine the architecture of organizational memory itself. Unlike individual memory, which exists in a single, integrated system, organizational memory is distributed across multiple reservoirs, each with its own physics of forgetting:

Memory layer (recall strength): failure mode

Individual Minds (rich but mortal): employees leave and take their memories with them; knowledge stays locked in heads.
Cultural Patterns (persistent but low-fidelity): the gist survives while the crucial details, where the danger lurks, are lost.
Social Networks (excellent transmission, poor preservation): knowledge flows through like water through a sieve.
Physical Spaces (embodied but fragile): office moves erase more knowledge than anyone realizes.
Formal Systems (perfect retention, no meaning): they can retrieve the foam strike data but cannot weight its significance.

Each system has its own failure modes. But the real problem emerges from their interaction—or lack thereof. Organizational memory isn’t just distributed; it’s fragmented. The database doesn’t talk to the culture. The social network doesn’t update the procedures. Individual knowledge doesn’t automatically become collective wisdom.

This fragmentation creates “memorial gaps”—spaces between memory systems where knowledge falls and disappears. These gaps aren’t empty. They’re filled with traces, with the marks left by absent knowledge. The organization behaves differently because of what it’s forgotten, even though it can’t say what that is.
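
Here is a minimal sketch of how such a gap forms between systems. The silo names and the alert threshold are invented for illustration; the point is structural. Each dashboard applies its own logic to its own slice of the data, and the cross-silo pattern is computed by no one:

```python
from collections import defaultdict

# Each silo keeps its own tally of anomaly reports; none shares state.
silos = {
    "safety": defaultdict(int),
    "engineering": defaultdict(int),
    "operations": defaultdict(int),
}
ALERT_THRESHOLD = 5  # each dashboard alerts only on its own local count

def record(silo, issue):
    silos[silo][issue] += 1
    if silos[silo][issue] >= ALERT_THRESHOLD:
        print(f"{silo} dashboard: ALERT on {issue}")

# Nine sightings of the same issue, scattered across the silos...
for silo in ["safety", "engineering", "operations"] * 3:
    record(silo, "debris strike")

# ...and no alert ever fires: each silo sees 3. Only this omniscient
# line, which no real dashboard has, sees 9.
total = sum(tally["debris strike"] for tally in silos.values())
print(f"cross-silo total: {total}, over threshold, seen by no one")
```

The knowledge is not missing from any one system. It is missing from the space between them.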

What Forgetting Feels Like

What does it feel like to be inside an organization that’s forgetting something crucial? Usually, it feels like nothing at all. That’s the insidious nature of antimemetic knowledge—it doesn’t announce its absence. There’s no alarm that sounds when important information fails to stick.

But sometimes, if you know what to look for, you can sense the presence of absence. I interviewed employees at various organizations about these experiences:

“It’s like trying to remember a dream. You know there was something important, but the harder you try to grasp it, the more it slips away.”

“We keep having the same conversations. Not similar conversations—the exact same ones. But nobody seems to notice.”

“Sometimes I’ll search our wiki for something I’m sure we documented, but it’s not there. Then months later I’ll find it, but by then we’ve already repeated the mistake.”

The Nagging Notion: That subtle discomfort in meetings when everyone agrees too quickly. The sense that something important is being overlooked but no one can say what.

The Recurring Reprise: When the same “unexpected” crisis keeps happening. The feeling of déjà vu that no one quite acknowledges.

The Orphaned Artifact: The detailed report that everyone references but no one has read. The protocol that exists for reasons no one remembers.

The Phantom Pattern: The feeling that the organization once knew how to handle this situation, had solved this problem before, but the knowledge is somehow just out of reach.

This is the lived experience of being inside an organization with antimemetic knowledge—a constant, low-level cognitive dissonance, a sense of swimming through fog that no one acknowledges is there.

The Cruel Paradox of Documentation

Here’s a cruel irony: we live in the most documented age in human history. Every email archived, every meeting recorded, every decision tracked. Yet organizational forgetting seems to be accelerating. Why?

The answer lies in understanding the difference between information and memory. Information is data points. Memory is the connective tissue that gives those data points meaning. More logs don’t equal more insight—they often mean more noise drowning out the signal.

When organizations confuse documentation with memory, they create what I call “write-only memory systems”—archives where information checks in but never checks out. The foam strike data was meticulously documented. But documentation without activation is like a library without readers—technically preserving knowledge while functionally forgetting it.

Worse, these archives create a false sense of security. “It’s all in the system,” we say, as if the system could remember for us. But systems don’t remember—people do. And when we outsource our memory to systems, we risk a kind of cognitive atrophy.
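
If you wanted to make this failure mode measurable, one hedged starting point is to instrument retrieval against storage. This toy class is my own sketch; the names and the “activation rate” metric are invented here, not an established measure:

```python
import datetime

class Archive:
    """A toy archive instrumented to expose write-only memory."""

    def __init__(self):
        self.records = {}
        self.writes = 0
        self.reads = 0

    def file(self, key, document):
        # Information checks in...
        self.records[key] = (datetime.datetime.now(), document)
        self.writes += 1

    def retrieve(self, key):
        # ...and, in a healthy memory system, checks back out.
        self.reads += 1
        return self.records.get(key)

    def activation_rate(self):
        # Near zero means documenting without remembering.
        return self.reads / self.writes if self.writes else 0.0

archive = Archive()
for n in range(79):  # 79 meticulously filed anomaly reports
    archive.file(f"foam-strike-{n}", "debris noted; no action required")
print(archive.activation_rate())  # 0.0: preserved, functionally forgotten
```

A library without readers scores zero, no matter how complete its shelves.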

The Weight of What We Cannot Hold

There’s an ethical dimension here that we rarely discuss. When an organization forgets safety warnings, dismisses risk patterns, or loses track of its own history, it’s not just making a cognitive error. It’s failing in a fundamental responsibility.

The philosopher Emmanuel Levinas argued that ethics begins with the face of the Other—the irreducible particular that makes claims on us. What if we understood antimimetic knowledge this way? Each foam strike, each anomaly, each warning is a kind of face turned toward the organization, asking to be seen, demanding response.

When organizations develop systematic ways of not-seeing, they’re not just failing epistemologically (in knowing) but ethically (in responding). The recursive forgetting that prevents them from learning from the past also prevents them from taking responsibility for the future.

We owe the ghosts their hearing.

In the Space Between

So where does this leave us? If organizations are structurally predisposed to forget, if certain knowledge is inherently antimimetic, if forgetting compounds recursively—is there any hope?

I don’t have easy answers. The problem runs deeper than better databases or knowledge management systems. It’s baked into the nature of collective cognition, the architecture of how groups of humans attempt to know things together.

But recognizing the phenomenon matters. You can’t address a problem you don’t know exists. And most organizations don’t know—can’t know—that they’re forgetting, because they’ve forgotten their capacity to forget.

The NASA engineers who worried about Columbia’s foam strike weren’t ignored. They were heard, acknowledged, and then the knowledge entered that strange twilight—present but not present, known but not remembered.

Somewhere in your organization, right now, similar knowledge is forming and failing to form. The presence of absence is shaping decisions in ways no one can see.

In the space between knowing and remembering lies the vanishing point—where organizational knowledge goes to disappear. We can’t eliminate that space. But maybe we can learn to see it. Seeing that point is how we keep the ghosts from steering strategy.


Next: Part 2 - The Gravity Well: “Forgetting has physics.” We’ll model forgetting like gravity itself—and map the mass that bends strategic space-time around what organizations cannot remember.

For more insights on navigating organizational complexity beyond conventional wisdom, contact us at info@eudexio.com.