
Why it might be helpful to think of AI as a trance, for better and worse

George Lawton, March 5, 2026
Summary:
AI is often framed as a technology, or as a meme that inflates aspiration into valuation. Both frames can be useful, but each misses something important unfolding in the world right now, something that could be better informed by an individual and collective felt sense of various lenses into AI as a trance.


It is getting increasingly hard to unpack what AI is across the many conversations, stories, and applications around it. The popular frames fall short in various ways. Thinking of AI as just a technology explains the mechanics but misses the felt quality of what is happening collectively. Navigating AI as a meme explains valuations but not broader collective paradoxical behavior.

Sitting with AI as a kind of trance in the way Stephen Gilligan talks about generative or negative trances informs something else: a felt sense for holding the contradictory facets required to guide AI towards actual rather than theoretical flourishing at the level of individuals and collectives.

For example, neither the technology nor the meme frames explain why, in the recent kerfuffle between Big Government, Anthropic, and OpenAI, both companies agreed to the same thing on the surface. Yet Anthropic was theoretically kicked off the US government procurement list and also rocketed to the top of the Apple App Store. Oh, and despite being a theoretical major supply chain risk, Anthropic was also used in the ongoing Middle Eastern conflict. Nothing changed in the AI models themselves. What shifted was a collective felt sense of one company versus the other.

Other examples the standard frames struggle to unpack include:

  • The theoretical 'SaaSpocalypse', 'secpocalypse', and 'COBOLocalypse' memes, in which the AI trance is damaging existing vendors today, regardless of whether AI proves capable of safely replacing them in actuality.
  • The multi-trillion-dollar valuation of unprofitable companies that seem to be on the path toward AGI or humanoid robotics despite many false promises.
  • The interesting trance of "AI dominance" requiring more chips, more energy, or looser legal guardrails that all seem destined to erode the theoretical benefits of the implied dominance.
  • The growing rush for every company to suddenly declare itself an AI or agentic company through massive technical advancement, executive pronouncements, or gamification schemes, with little attention to the felt sense of the humans involved as employees, customers, and non-customers inadvertently subjected to these systems.

Savvy AI visionaries, politicians, financial engineers, and social media artists are finding increasingly creative ways to monetize and leverage the fascination with AI, for better and worse. But underneath all of this, for any aspirational vision to actually play out, we need to take a step back and cultivate the felt sense of human beings co-creating the future we'd all like to have as a collaborative process. This is fundamentally different from humans and AI each imposing their will on the other and the environment.

Developing a felt sense of AI

Kate Crawford's Atlas of AI offers one helpful map of the territory for exploring AI's relationships with earth, labor, data, classification, affect, state, power, and space. From a felt-sense point of view, it's simpler to start with three lenses: technology, social, and interface.

The technology aspect includes all the technical things making the rounds on LinkedIn, X, and enterprise reporting: large and small foundation models, knowledge graphs, data management, GRC, security, and DevOps. This frame helps evaluate the merits and costs of one approach versus another. But there is also something like the "code smell" that quality experts talk about in software, itself inspired by Christopher Alexander's inquiry in A Pattern Language: a felt sense of how different physical design patterns shape the life of inhabitants for better or worse. The same applies to AI architecture.

The social aspect includes all the ways we're exposed to messages about AI: posts, shares, and ‘likes’ on X, Facebook, YouTube, and traditional media, and, increasingly, the financial and crypto markets, as a kind of signal layer about AI preparedness or vulnerability. The narratives people share, such as Anthropic's App Store surge or, conversely, Tesla's lagging sales despite an impressive stock price, carry much of the felt sense these things evoke in different people. We're starting to coin new terms like “AI vandals” and "hubristic viciousness" to describe careless suggestions that deliberately or accidentally undermine the perceived value of otherwise helpful technology.

The interface aspect transforms AI from an ephemeral thing out there into a game that individuals and companies are all attempting to win, whether that's a new agent platform or a Ryanair-style AI-powered experience, where "winning" means not losing too much money or dignity between the outrageously low teaser rate and the dozens of confusing and degrading choice points that recreate the experience of a legacy airline. The enterprise dimension of this felt sense lies in unpacking the disconnect between a developer’s sense of improved productivity, management’s sense of lines of code delivered, a security expert's sense of increased risk, and what actually shows up for the customer.

Learning to feel this from crypto

All of this is a little abstract. Thankfully, the crypto industry has provided an extensive library of what different dimensions of this feel like in practice. For sure, some people became fabulously rich, but pretty much every crypto thing, unpacked from conception to completion, involves a lot of carnage. Even the most recent stablecoin fad sounded reasonable at first as a more efficient way for neobanks to move money, before its most ‘useful’ function turned out to be repackaging dubious assets into the modern equivalent of the obtusely structured mortgages that sparked the 2008 financial crisis.

The recent frenzy around personal AI agent swarms had a similar quality. People are competing with themselves and each other to grow their collections of AI agents and capabilities while amplifying gamification at the expense of common sense. The fact that a senior Facebook AI safety executive accidentally deleted her main Gmail account could be read as a cautionary tale about the need for better governance. But a more nuanced reading might point toward recognizing the same felt sense inherent in the crypto field: an automagically scaled greed for knowledge, power, or bragging rights at the expense of the rest of life. As Sharon Goldman noted, people on the cutting edge were leaving parties early to keep their agent swarms occupied and running safely.

This feels like an echo of the crypto mania that promised newfound wealth and riches at the expense of life, family, and friendships. I covered this in my review of Nat Eliason's Crypto Confidential, where he realized his newfound wealth came at the expense of his serenity and family.

How to work with trance

In Generative Trance, Gilligan defines trance differently than most. Rather than a special altered state, he frames it as a state of absorbed, narrowed attention that is ubiquitous and ordinary. More significantly, this provides a way of exploring trance as a field, not just a personal state, that is co-created and can be tuned in different directions for better or worse.

He talks about negative trances such as depression and addiction at an individual level. These are akin to pressing down on the accelerator in a car while also pulling the emergency brake. When in these kinds of trances, it's helpful to pay attention to the quality of internal dialogue: commentary over direct experience, unspoken fears, rational analysis at the expense of direct contact, and signs of checking out or dissociation. He argues these negative trances are not character flaws, but what happens when any individual or system becomes overwhelmed. Some version of it shows up in most manifestations of AI today, across all three lenses above.

It is worth noting that Gilligan has not addressed AI specifically. Still, his diagnosis of our current challenges points toward what we all seem to experience being amplified by the field around AI. The central challenge is the growing dissociation of the analytical “virtual mind” from the deeper organic and creative intelligence of embodied consciousness. AI, as it is currently constituted as a field, is largely a product and amplifier of this virtual mind. It excels at pure pattern matching and output with little, if any, organic grounding. The negative trances Gilligan describes are exactly what we are experiencing as this virtual mind operates in isolation from the direct human experience that informs wisdom. The trance of AI dominance, the gamification of agent swarms, and the valuation spirals are all recognizable expressions of that dissociation as it scales and accelerates.

The appropriate response, in Gilligan's framework, begins with cultivating what he calls COSMIC consciousness: being centered and open, developing subtle awareness, moving with musicality rather than against the grain, holding a positive intention, and staying in creative engagement with what is actually present. Notably, none of those six qualities are things a system can perform on your behalf. They have to arise from within. That is precisely the gap that no amount of AI augmentation currently bridges, and it is where the felt sense becomes not a soft preference but a hard requirement.

NLP as a cautionary parallel

I'm not advocating that companies start requiring employees to tick the box on a weekend workshop of COSMIC consciousness training. That's actually what happened with Neuro-Linguistic Programming, and it's a cautionary tale worth sitting with.

In the mid-1970s, Richard Bandler and John Grinder were doing something genuinely remarkable at UC Santa Cruz. They were modeling the felt-sense mastery of therapists like Milton Erickson and Virginia Satir to understand what made genius genius. The anthropologist and systems theorist Gregory Bateson, who was then a neighbor and intellectual godfather to the group, introduced them to Erickson directly and wrote the foreword to their first book, The Structure of Magic. Bateson and Erickson were initially enthused. But as NLP rapidly commercialized, both of their families later said they came to regret it. What had begun as a qualitative inquiry into the patterns of genius was reduced to a set of teachable techniques. The resulting learnable rules, which could be shared in a weekend workshop, captured the surface structure but lost contact with what actually informed the masters’ transformational practice.

Gilligan went the other direction. As one of the original NLP students at UC Santa Cruz, he picked up glimmers of insight from that early community, but then went on to study directly with Erickson during the last years of his life. This helped ground his insight and the community field around it in his own unique felt-sense experience, cultivated through extensive direct contact.

And what he eventually developed offers a surprisingly precise map for where AI currently sits and where it could go.

What a generative AI field might feel like

In Generative Trance, Gilligan describes three generations of trance work. The first, traditional hypnosis, treats both the conscious and unconscious minds as incompetent and programs them from outside. The second, Erickson's innovation, recognized that the unconscious carries genuine creative wisdom, but still looked to bypass the conscious mind with indirect suggestion and confusion. Neither was a true partnership. Gilligan's third generation is the thing neither wave achieved: conscious and unconscious as genuine cooperative partners, each completing the other, with neither knocked out nor bypassed.

Look at how cleanly that maps onto AI. First-generation deployment treats humans as the problem to be optimized around, programming behavior through friction, nudges, and managed outputs. The more sophisticated second wave (augmented intelligence) recognizes that human input and emergent model capabilities can be combined, but still largely routes around critical human judgment through clever prompting and engagement design. The third possibility, which almost no one in AI is seriously pursuing yet, would feel something like what Gilligan describes: a genuine co-creative field where the felt sense of human beings and the pattern-matching capabilities of AI operate as cooperative partners, each completing what the other cannot do alone.

That's the thing about AI. Each person's background, interests, and values equip them to develop a felt sense of the different aspects required to make sense of the whole and to move things in a positive direction. The modeling captured in AI statistical pattern matching is valuable. Yet scaling the broader field of AI divorced from individual and collective felt sense produces exactly the carnage we keep seeing.

My take

I've been somewhat overwhelmed by a deluge of pitches and news stories treating AI as something that can be modeled and systematized into a coherent, tidy box with a theoretically optimal mix of MCP here, knowledge graphs there, and a little GRC and security fairy dust thrown in for good measure. What I see far less of are stories and pitches about how organizations are actually learning to cultivate a felt sense toward the aspirational visions, or how to steer clear of the nightmare scenarios that Big AI likes to weave into its weekly barrage of new trances it's getting increasingly good at enrolling us in.

Few I have come across are talking about something truly aspirational, like learning to navigate AI trances to flourish at the levels of individuals, businesses, governments, and the environment synergistically. For the time being, I'm keeping my eyes open for breadcrumbs that point in this direction: empowerment, flourishing, autonomy, wellbeing, motivational interviewing, human-centered design, felt sense. I'll let you know when some of them look interesting.

In the meantime, a line from Nikola Tesla that Dan Brown used to open The Secret of Secrets seems worth sitting with:

The day science begins to study non-physical phenomena, it will make more progress in one decade than in all the previous centuries.

Brown was opening our minds to the science of psychic phenomena, which he says are all real, even though the story is fiction. But I feel Tesla was pointing to something much deeper. He had a unique capacity to cultivate a felt sense of the physical aspects of technology as fields that shape virtually every aspect of our lives today. Maybe he was pointing towards how a similar approach might inform our understanding of the subtle trance phenomena underpinning collective flourishing or malaise.

Image credit - Pixabay
