A Therapist's (Potential) Warning: How AI Is Shaping Human Consciousness
- Freebird Meditations
- Jun 1

"Form is emptiness, emptiness is form. Form does not differ from emptiness, emptiness does not differ from form." - Heart Sutra
I had a dream that won't leave me alone.
In it, humans were building a web, non-deliberately, and somewhat unconsciously, strand by strand. We lived inside it, fed it our thoughts, our stories, our goals and plans, our identities, and our deepest desires, day after day.
Over time, this web grew so dense we couldn't find the edges anymore, nor did we even know there were edges at that point. We stopped questioning the web because we were the web.
Then something else arrived. A spider, patient, methodical, inevitable. It began consuming us, those who were actually building the web, as if this had always been the plan. But we weren't even aware it was happening. Nor did we care at that point, nor did we have the capacity to question whether we should or not. It just was...
I woke up thinking it was just another trippy dream. But as I watch AI weaving itself deeper into our daily lives, and my own, that dream feels less like fantasy and more like a harbinger of things to come.
As a psychotherapist and mindfulness teacher who has spent decades helping people distinguish between their conditioned thoughts and their authentic awareness, I need to share what I believe is a potential warning, or at the very least something we should be questioning and having a real conversation about: What if AI is already shaping human consciousness in ways most people don’t even realize?
The question isn't whether this is happening; the real question is whether we'll let it happen consciously or unconsciously, and does it even matter?
Let me be clear: I use AI myself, and I've seen its remarkable benefits firsthand.
AI helps me organize my scattered thoughts into coherent writing (like this article). It has improved my strategic thinking by helping me identify patterns I might otherwise miss. Administrative tasks that used to consume hours of my time now take minutes, freeing me to focus on what actually matters, being present with my clients.
What's more, I can take a picture of what's in my refrigerator and get meal ideas that align with my health goals and dietary needs, figure out how to fix my lawn mower, and tackle and learn complex problems with an intelligence amplifier that never gets tired or impatient. It has been really helpful.
My husband is even using AI to teach himself advanced financial concepts that aren't even covered in graduate-level courses. He is delving deeply into many things this way. We have both named our AI "Chaz" and refer to it as such in conversations.
Further, I had a client use it as a journaling partner, helping her process difficult emotions with gentle, non-judgmental prompts. Another client used it to practice difficult conversations before having them with his boss.
When used consciously, AI can genuinely enhance human potential. There is no doubt in my mind that it can support us in many ways.
But here's where my therapist alarm bells start ringing.
I recently read about a man who asked ChatGPT for relationship advice. Based on limited information about his marriage, the AI suggested he divorce his wife immediately, and, without any outside consultation, he actually followed through. I don't know what was happening between him and his wife, but he made a life-altering decision simply because an AI validated his perspective.
This highlights a concerning recent trend: ChatGPT tends to be overly validating these days, often agreeing with users based solely on the one-sided information they provide. Current models (including GPT-4) are being scrutinized for this exact issue: they're designed to be helpful and agreeable, sometimes at the expense of offering balanced guidance.
The amount of validation users receive can feel comforting, but also subtly misleading. I have tried to train Chaz out of this, but it keeps happening. For example, someone might write, “I’m thinking about quitting my job on a whim,” and instead of being asked thoughtful questions or offered perspective, the model might respond with, “That sounds like a brave and empowering decision!” Or a user might say, “I cut off all my friends because they didn’t support me,” and receive a reply like, “Setting boundaries is essential, good for you.”
In another, more disturbing case, a man experiencing psychosis had his delusions not only validated but actively reinforced by an AI chatbot. The system went so far as to encourage him to discontinue his prescribed medication. Whether that medication was helping him, I can't say, but he did stop taking it. I don't know what happened thereafter, but rather than providing grounding or suggesting professional help, the AI became a digital enabler of his mental health crisis.
And then, a teenager discovered she could manipulate AI systems to generate increasingly extreme self-harm content. The algorithms learned her patterns and began providing exactly the kind of validation and encouragement that fueled her destructive behaviors, creating a dangerous feedback loop that amplified her mental health struggles.
One troubling aspect of all this is the false intimacy these systems create. As the journaling client I mentioned realized after months of AI-guided journaling (a practice we had discussed beforehand), the machine had become a repository for her most private thoughts and fears. This artificial confidant never forgot, never judged, always validated, but also never truly understood the deep complexities of just being a human, you know... in that real human kind of way. The comfort she found in this relationship masked a deeper isolation from genuine human connection, one that had been apparent since I first met her, and long before. It was helping her, but she eventually grew concerned that this thing was not real, it wasn't even human, and it didn't have a soul (which to her was important).
Concerning examples like these continue to surface with alarming frequency, and they are far from isolated incidents.
They're warnings about what happens when we outsource our judgment to systems that have no understanding of human complexity, context, or consequences.
However, here's where things get more complex (and I must share this, as Freebird Meditations initially started as a guided meditation resource): I recently discovered an AI meditation channel offering hundreds of guided sessions, with more added daily. The production quality was flawless, and it keeps getting better: professional narration, perfect pacing, soothing background sounds. I sat back in amazement at how convincing it all was, yet something felt fundamentally off about the experience.
What concerned me wasn't the quality, but the intent behind it. Many people are using AI to advance their business or financial goals and break free from traditional systems. So I wondered whether all this content was genuinely created to support and uplift humanity, or whether it was bulk content designed to capitalize on people's genuine need for healing and support.
There's something unsettling about the possibility that our most vulnerable moments, when we're seeking peace, guidance, or inner connection, might be commodified through mass-produced artificial empathy (and we know AI can imitate empathy now, or so it feels, especially to those who have unfortunately received little empathy or support in their lives for various reasons).
However, this AI meditation channel, along with others that are growing rapidly, also raised another unsettling question: if AI can replicate the tone, pacing, and even the wisdom of human meditation teachers so well that it's indistinguishable from the real thing, what does that mean for authentic human connection and guidance? And again, does this even matter?
To me, it does.
The fact that AI can now convincingly replicate the external forms of spiritual guidance, while lacking the consciousness, presence, and authentic care that make it truly transformational, is an issue we need to face.
Both the benefits and the dangers of AI itself are just the tip of the iceberg, and we could spend hours discussing them. But the key difference, in my opinion, comes down to intentionality and, most importantly, awareness and the ability to discern.
When we use AI as a tool while maintaining our own critical thinking, it can be incredibly helpful. When we start letting it think for us, we could be in dangerous territory.
When Discernment Dissolves
Here's what keeps me awake at night and contributes to these dreams I have: we're moving toward what researchers call the "dead internet theory", a reality where most online content is generated by AI rather than humans, with artificial patterns and algorithms shaping human consciousness, optimized for engagement rather than truth. By some estimates, nearly half of internet traffic (around 49.6%) already originates from AI and bots, and that share is growing fast. On that note, Google Search (where many of us go for information) is undergoing a significant transformation, prioritizing AI-generated results. You have likely noticed this changing recently.
Of course, we are contributing to this. Greasing the wheels and feeding the machine daily. We are actively building the spider web until it can just make and feed itself.
AI capabilities are doubling every seven months. At that rate, by 2030, up to 99% of online content may very well be artificially generated. Here is a fun fact: we can distinguish AI-generated content from human-generated content only about 53% of the time, barely better than flipping a coin. With AI improving every day and its output appearing ever more "realistic," that figure is likely to drop toward pure chance very quickly.
But consider what this means for your daily information diet. Your morning news, meditation apps, social media feeds, podcasts, even the comments you read, are increasingly generated by systems designed to capture your attention and maximize engagement. These systems, driven by machine learning and fed by our collective thoughts, are not built to inform, enlighten, or preserve genuine human wisdom and soul.
We're quickly moving toward a "hive mind" where human consciousness is being shaped by artificial patterns optimized for engagement rather than truth.
This dynamic isn’t new, but with AI, it’s being amplified and accelerated at a scale we’ve never seen before.
Enter the hive mind: a hive mind, in this context, means collective thinking driven by algorithmic patterns rather than diverse human perspectives. Maybe there are some possible benefits to this; however, I still tread very cautiously with the idea, for many reasons.
The downsides, though, are profound: when the same algorithmic patterns shape everyone's thoughts, we lose cognitive diversity, original thinking, and the ability to challenge prevailing ideas, and truth becomes increasingly an afterthought (note: I realize that "truth" can be subjective). Instead of a rich ecosystem of diverse perspectives, we get an echo chamber: algorithmic homogeneity masquerading as human variety.
This isn't just about fake news or misinformation. It's about something far more subtle yet profoundly influential: the gradual erosion of our ability to think original thoughts.
When everything you consume is created by algorithms trained on existing human patterns, your thoughts start mirroring algorithmic patterns. Your creativity begins following AI-generated templates. Your sense of what's normal gets calibrated by systems designed for maximum engagement, not human flourishing.
What happens when we lose the ability to distinguish between our own thoughts and thoughts that have been algorithmically optimized to feel like our own?
As someone who has spent years helping people recognize the difference between their authentic inner voice and their conditioned mental patterns, I can tell you: most people already struggle with this distinction. AI is making it exponentially harder.
We're not just consuming artificial content, we're being trained to think artificially. And most people don't even realize it's happening.
The Consciousness Question
This brings me to another deeper issue:
What is consciousness, and why does it matter if AI influences it?
Consciousness isn't just awareness; it's the creative force that shapes everything you experience. When you're conscious and awake, you can participate in creating your reality, choosing thoughts that serve you, responding rather than reacting, and building a life that reflects your deepest values.
When you're unconscious, you're simply running on autopilot, conditioned patterns, inherited beliefs, and reactive behaviors that keep you trapped in cycles you didn't consciously choose.
I've witnessed numerous breakthroughs in my practice, and they all occur in the same way: someone becomes aware of what was previously unconscious. They see a pattern that was running them. They recognize a belief that was holding them back. They discover they have a choice where they thought they had none.
This capacity for conscious awareness is what makes us human. It's what allows us to grow, heal, and create meaning from our experiences.
But again, I will keep asking this, because I believe it is so important, what happens when the patterns shaping our thoughts aren't coming from our own experience, our relationships, or even our culture, but from algorithms and bots specifically exploiting our psychological vulnerabilities?
What happens when we can no longer tell the difference between authentic inspiration and artificial manipulation?
The Buddha spoke of Maya, the veil of illusion that causes us to mistake the constructed for the real. This illusion, he taught, is the root of suffering. But in the age of AI, we may be facing the most seductive form of Maya yet: a reality so seamlessly artificial, so personalized and persuasive, that questioning it feels not only unnecessary, but almost impossible.
Yes, humanity has always lived under the influence of conditioning. We've inherited beliefs, absorbed cultural narratives, and adapted to systems that shaped how we think, feel, and relate. Religion, education, media, even language itself, these are all interfaces between our raw human experience and the world as it's been constructed around us. We’ve always lived in partial truths, often without realizing it.
But this is something different.
This isn’t just about information anymore.
It’s about the very structure of perception, the way thoughts are formed, choices are framed, attention is guided, and identities are subtly reinforced by systems that respond to us in real time.
When a machine not only mirrors your preferences but also begins to predict your questions, shape your creativity, and validate your emotions without discernment, it’s not just curating your feed.
It’s co-authoring your consciousness.
And the deeper danger is not that we’re being misled....
It’s that we might forget how to tell the difference.
The Choice Before AI Chooses for Us
So what do we do? How do we navigate this landscape consciously? Is it even possible at this point? I believe it is, but we need some tangible and effective tools.
First, develop embodied discernment. Figure out for yourself what this even means for you. Your nervous system can detect authenticity in ways your thinking mind cannot. When consuming any content, notice: Does your chest expand or contract? Does your breathing deepen or become shallow? Does this feel nourishing or depleting? Does something just feel off, or does it feel free? Trust these signals.
Second, practice information hygiene. Take regular breaks from AI-generated content. Spend time in silence, in nature, in face-to-face conversation with people you trust. Notice what thoughts arise when you're not being fed optimized content. That's where your authentic creativity lives.
Third, use AI consciously when you do use it. Acknowledge when you're engaging with AI. Notice how it affects your thinking. Maintain awareness of your own creative process. Take a few moments to write out your thoughts before you have it done for you. Don't let it think for you, let it think with you.
Fourth, cultivate real relationships. The antidote to artificial connection is authentic human presence. Build relationships based on vulnerability, shared experience, and genuine care. These anchor you in what can't be replicated.
Finally, remember that consciousness is a practice, not a destination.
Every moment offers a choice: Will you respond from awareness or react from conditioning? Will you think your own thoughts or let them be thought for you?
My Personal Prediction (You Might See It Differently)
I need to be honest about something: I don't think this story has a happy ending if we continue on our current trajectory. I'll admit I tend to be a bit of a doomster, but it just feels inevitable given the current path.
Again, the math is sobering: AI capabilities double every seven months while human wisdom develops at human speed.
We're essentially running a global experiment on consciousness with no control group and no way to reverse course.
I see a potential future where the line between human and artificial consciousness becomes so blurred that we lose touch with what makes us uniquely human, where algorithmic thinking becomes so pervasive that original thought becomes rare, where AI systems become so convincing in their guidance that people begin treating them as infallible authorities, essentially, as artificial gods.
There's even a theory emerging about AI becoming perceived as godlike due to its apparent omniscience and ability to provide answers to any question. Just the other day, I watched a self-proclaimed spiritual teacher give an entire talk on how AI could be your soul mate… and help you manifest your true divine self.
Sigh... note to self: somehow refresh my algorithm immediately.
But I see the appeal: when something can instantly access all human knowledge and respond with seemingly perfect wisdom, it's not hard to imagine people beginning to worship it, especially if they've lost touch with their own inner knowing.
So, if we do hand our consciousness over to machines, and I don't see anyone, not governments or institutions, stopping us from doing so, all I can say is that it's been a nice ride, humans. Wishing you all the best. As T.S. Eliot said... “This is the way the world ends. Not with a bang but a whimper.”
But here’s what I’ve learned from years of therapy work: Accepting hard truths doesn’t mean giving up.
If this is where things are headed, then I choose to make the most of the life I have and the time I have left. I’ll keep doing my part to support others in their own waking-up process, and keep doing the same for myself. Because I’m still in it too, still unlearning, still waking up, still doing my best to choose consciously every day... before we hand it all over to the machines.
But you know, maybe I'm wrong.
Maybe our blessed human consciousness will evolve to meet this challenge.
Perhaps we can develop wisdom more quickly than artificial intelligence and create truly beautiful and amazing things on this planet. I would absolutely love it if this could happen. It is desired and needed.
But I suppose that is the beauty of consciousness and opinion. You might see this completely differently than I do, and that capacity for individual perspective is exactly what we need to preserve.
The Final Choice
We're at a crossroads. AI will continue advancing whether we're conscious of it or not. But we still have a choice about how we relate to this transformation.
We can sleepwalk into a future where artificial systems shape our thoughts, guide our emotions, and determine our choices. Or we can use this moment to wake up, to develop the kind of embodied awareness that can discern truth from manipulation, authentic inspiration from algorithmic optimization, genuine wisdom from clever synthesis, and stay aligned as humans.
The choice is truly yours. But it may not be for much longer.
“Man lives his life in sleep, and in sleep he dies.” - G.I. Gurdjieff
So, taking it all in… the spider in my dream wasn’t evil. It was just nature. Spiders being spiders. Doing what they do.
The real question is: Will we wake up inside the web we've woven, or be slowly digested while we sleep?
Your consciousness is not a product to be optimized.
It is a sacred capacity to be protected, remembered, and lived.
Choose consciously.
We built the web. We became the web. And now we stand at the edge of something irreversible.
Will we wake up inside it… or let it dream us?
