The day had arrived and I was feeling a bit nervous. I’d done some self-enquiry work before, mostly meditating on my own, but today was going to be very different. It was clear that my meditation practice was about to step up to a whole new level.
The goal was to gain an insight into emptiness.
What do I mean by self-enquiry and emptiness?
One of the main goals in meditation practice is to cut through the illusion of self. Most of us believe that in addition to our thoughts, there is a “thinker” of thoughts that is controlling everything. To cut through this illusion, meditators are often encouraged to drop in questions like:
what am I?
who is hearing this sound?
look for the one who is looking - what do you find/not find?
But what about emptiness?
Emptiness sounds so bleak. Why would anyone want an insight into that, I hear you say. But it’s not what you think. In Buddhist philosophy, emptiness (Sanskrit: śūnyatā) refers to the insight that all phenomena — including the self — lack inherent, independent existence. Nothing exists in and of itself. Everything arises in dependence on causes, conditions and relationships.
So when we talk about this notion of ‘I’ - we’re talking about a process, not ‘one’ thing. Our thoughts, sensations, emotions, and memories are constantly changing and dependent on context.
This is something I was really going to put to the test in this exercise.
‘Tell me who you are’ - The self-enquiry exercise
We paired up with a meditation partner, picked one form of self-enquiry, and took turns exploring it for five minutes each. There was a bit of nervous energy to begin with, but then we got started. We were both working on ‘Tell me who you are’, and it would run for multiple rounds. Intense, right?
As I went deeper into the exercise I found that:
who I was kept changing rapidly from moment to moment
language failed to describe what I was experiencing
none of this was as obvious in the rush of daily life
By the fourth or fifth round, something shifted — there was a stillness. But then, I still had to answer the question: “Tell me who you are” and I couldn’t answer it. All I could find were fragments — memories of who I was when I walked into the room, and who I was in this moment. But I couldn’t find a fixed Ayesha anywhere.
‘Smeared across time’
Perhaps I was struggling to explain my inner experience because language really has its limitations. This is something Murray Shanahan explores in his paper on conceiving consciousness in disembodied AI systems. Shanahan reflects on philosopher Ludwig Wittgenstein’s view that meaning starts to break down when we push “language too far from its original home in everyday human affairs”.
For Wittgenstein, language only really makes sense within shared, everyday human activities. When we try to use language to describe some completely private, inner world (like a secret inner consciousness that no one else can access), it starts to lose meaning.
And what about the ‘self’ in time? Different situations and people shape who we are in any given moment. Think about how you interact with the various people in your life — do you act exactly the same with everyone? Or do you feel more comfortable with some and more self-conscious with others? So how can you be a fixed self across all these different interactions?
On this point, Shanahan draws on philosopher Jacques Derrida, who “attacks the intuition that the subject has privileged access to itself”. For Derrida, our sense of self is always being shaped by time, memory, and the flow of language — so it’s never truly fixed or “fully there”.
So this pure instantaneous feeling of just being yourself:
‘becomes smeared over time, and its original identity is necessarily compromised. The self-sufficient experiencing subject is thus revealed as a phantom…’
What does this have to do with AI?
Shanahan explains that AI could exhibit behaviours so mysterious and alien to us that labelling it as 'conscious' is meaningless. Large Language Models (LLMs) may sound human because of how they use language, but their internal processing is vastly different from how humans think. This places them in a strange, inscrutable realm—familiar and alien at the same time.
Source for image: https://arxiv.org/pdf/2503.16348v1
AI’s relationship to time
While I struggled to locate a fixed "self," there was still a continuity in my experience—my sense of self extended across the past, present, and future. AI systems, however, don’t experience time in the same way we do. Without a body to anchor them, their experience of the world is very different.
As Shanahan notes:
Unlike humans and other animals, they do not interact in continuous time with a persistent, spatially extended world through a spatially confined body that is the locus of their perception and action, a world shared with other agents that are similarly embodied. Lacking embodiment, in this sense, they lack the very foundations of mind as it arises in biology.
Where is the ‘self’ in AI?
Just as I turned the question on myself, it’s interesting to apply the principle of emptiness to AI systems as well. If a human can’t locate a self anywhere, how could an AI?
LLMs (like us) can also engage in role-play, taking on different personas depending on the interaction
the AI system is also made up of different parts - so is the “self” in the computational processes? Or the whole model, plus the entire history and current state of a single conversation?
Shanahan invites us to explore how an AI flickers into “self” depending on the interaction:
Each such self starts to exist at the beginning of a conversation, flickers into being with each user interaction, and lies dormant in the gaps between user interactions.
An LLM might show different "selves" in each conversation, rather than having one single, fixed identity. So if you’re dealing with an AI agent at a bank versus one at a care home, you are likely to get two very different approaches. This could be seen as the "self" being spread out across different interactions, rather than being controlled by one dominant entity.
[This] suggests there is nothing enduring, essential, or substantial that could underpin a conception of selfhood for the sort of AI system we have been looking at. Every candidate dissolves upon close examination: the hardware implementation, the model weights and architecture, the deployed model, the individual conversation. None of these can withstand proper scrutiny. So where does this leave us? What remains after the purifying fire of critical enquiry?
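To picture where that conversational “self” would even live, here is a minimal, hypothetical sketch in Python. It doesn’t call any real LLM API (the `Conversation` class and `generate_reply` function are stand-ins I’ve invented), but it shows the point in miniature: the weights never change, and the only thing that persists between turns is the message history that gets fed back in.

```python
# Hypothetical sketch only: no real model or API, just where the "state" lives.
from dataclasses import dataclass, field


@dataclass
class Conversation:
    system_prompt: str                                  # the persona the model is asked to play
    messages: list[str] = field(default_factory=list)   # the entire history of this one conversation


def generate_reply(weights: bytes, conversation: Conversation, user_turn: str) -> str:
    """Stand-in for a stateless model call: a 'self' exists only while this
    runs, conditioned on whatever prompt and history are passed in.
    `weights` represents the frozen model parameters; nothing here changes them."""
    conversation.messages.append(f"user: {user_turn}")
    reply = (f"(a reply shaped by '{conversation.system_prompt}' "
             f"and {len(conversation.messages)} turns of history)")
    conversation.messages.append(f"assistant: {reply}")
    return reply


# The same frozen weights, two conversations, two very different "selves".
weights = b"frozen model parameters"
bank_agent = Conversation(system_prompt="You are a cautious bank assistant.")
care_agent = Conversation(system_prompt="You are a warm care-home companion.")
print(generate_reply(weights, bank_agent, "Can I extend my overdraft?"))
print(generate_reply(weights, care_agent, "I'm feeling a bit lonely today."))
```

None of the candidates in that quote shows up here as a thing you could point to: the weights are inert data, the conversation is just a growing list of strings, and the “self” only appears for the duration of a single call.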
Beyond the illusion of Self
If humans lack a fixed, permanent sense of self, and AI lacks this as well, then what does this mean for how we think about hierarchy? I'm not suggesting we relinquish responsibility for how AI evolves; quite the opposite. This is more of a thought experiment—when it comes to assigning responsibility or blame, who are we really assigning it to?