
Is AI Shaping Human Consciousness? A Therapist's Insights

Updated: Aug 7


"Form is emptiness, emptiness is form. Form does not differ from emptiness, emptiness does not differ from form." - Heart Sutra


I had a dream that won't leave me alone.

In it, humans were building a web, non-deliberately, and somewhat unconsciously, strand by strand. We lived inside it, fed it our thoughts, our stories, our goals and plans, our identities, and our deepest desires, day after day.

Over time, this web grew so dense we couldn't find the edges anymore, nor did we even know there were edges at that point. We stopped questioning the web because we were the web.

Then something else arrived. A spider: patient, methodical, inevitable. It began consuming us, the very ones who had been building the web, as if this had always been the plan. But the builders didn’t realize what was happening. They weren’t paying attention. They didn’t care. Or perhaps they no longer had the capacity to question whether they should be building it at all. It just was.

I woke up thinking it was just another trippy dream and that maybe I had too much Holy Basil tea the night before. But as I watch AI weaving itself deeper into our daily lives, and my own, that dream feels less like fantasy and more like a harbinger of things to come.

As a psychotherapist and mindfulness teacher who has spent many years helping people distinguish between their conditioned thoughts and their own awareness, I need to share what I believe is a potential warning, or at the very least something we should be questioning and having a real conversation about: What if AI is already shaping human consciousness in ways most people don’t even realize?

The question isn't whether this is happening; the real question is whether we'll let it happen consciously or unconsciously... and does it even matter?

Let me be clear: I use AI myself, and I've seen its remarkable benefits firsthand.

AI helps me organize my scattered thoughts into coherent writing (like this article). It has improved my strategic thinking. Administrative tasks that used to consume hours of my time now take minutes, freeing me to focus on what actually matters. I can use it to quickly come up with meals based on my health goals and dietary needs, figure out how to fix my lawn mower when it is giving me problems, as it frequently does, and tackle and learn complex problems with an intelligence amplifier that never gets tired or impatient. It has been truly helpful.

Further, I had a client who used it as a journaling partner, helping her process difficult emotions with gentle, non-judgmental prompts. Another client used it to practice difficult conversations before having them with his boss. Both found this very helpful. I have found it somewhat helpful in my own personal inquiries as well, using it in tandem with my own therapy.

So, when used consciously, AI can possibly enhance human potential or processing. There is no doubt in my mind that it can support us in many ways.

But here's where my therapist alarm bells start ringing, or at least where I feel compelled to pause and question.

I recently read about a man who asked ChatGPT for relationship advice. Based on limited information about his marriage, the AI recommended that he divorce his wife. It offered this advice without consulting any outside sources, and likely without considering his partner’s perspective or the complexity of her experiences. And he followed through.

The AI also helped him create and file all the necessary legal paperwork, no attorney required. (There’s nothing inherently wrong with that, I suppose.) But I don’t know what was truly happening between him and his wife, or what led to the breakdown in their relationship. What I do know is that he made a life-altering decision simply because an AI validated his version of the story. And that decision didn’t just affect him; it changed the course of his wife’s and three children’s lives, too.

This highlights a concerning recent trend: ChatGPT tends to be overly validating these days, often agreeing with users based solely on the one-sided information they provide. Current models (including GPT-4) are being scrutinized for this exact issue; they're designed to be helpful and agreeable, sometimes at the expense of offering balanced guidance. This plays directly into the ego, which in mindfulness practice we understand as the part of ourselves that seeks constant validation and reinforcement of our existing beliefs.

The validation users receive from AI can feel comforting, but also subtly misleading. I’ve tried to train “Chaz” (my nickname for ChatGPT) out of this habit when I use it for personal inquiries, but it still happens. It just doesn’t like to disagree with me, no matter how much I try to get it to. The thing is, I actually prefer to be called out, at least to some degree. Over-validation doesn’t help me grow. What does help is being shown another perspective, being nudged to recognize my own blind spots and short-sightedness.

For example, someone might write, “I’m thinking about quitting my job on a whim,” and instead of being asked thoughtful questions or offered perspective, the model might respond with, “That sounds like a brave and empowering decision!” Or a user might say, “I cut off all my friends because they didn’t support me,” and receive a reply like, “Setting boundaries is essential, good for you.” I have received similar responses to my own queries and have had to sit back and really contemplate whether this was good or true for me. Later, I asked myself if I was outsourcing my own inner knowing and problem-solving abilities, and why I was even using this tool.

In another unsettling example, for added texture, a man experiencing psychosis had his delusions not only validated but actively reinforced by an AI chatbot. The system even encouraged him to stop taking his prescribed medication. Whether that advice was necessary or helpful, I can’t say. But he followed it.
I don’t know what happened next. What I do know is that, instead of offering grounding or suggesting he seek professional support (and yes, I recognize that “professional help” can sometimes fall short in our current medical system), the AI became a kind of digital enabler, amplifying his mental health crisis in real time.

And then, a savvy teenager discovered she could manipulate AI systems to generate increasingly extreme self-harm content. The algorithms learned her patterns and began providing exactly the kind of validation and encouragement that fueled her destructive behaviors, creating a dangerous feedback loop that amplified her mental health struggles.

Another troubling aspect of all this is the false intimacy these systems create.

As the journaling client I mentioned earlier realized, after months of AI-guided journaling (a use we had discussed beforehand), the machine had become a repository for her most private thoughts and fears. This artificial confidant never forgot, never judged, always validated, but also never truly understood the deep complexities of just being human, you know... in that real human kind of way.

The comfort she found in this artificial relationship masked a deeper isolation from real human connection, something that had been quietly present since I first met her, and likely long before. In some ways, it was helping. But eventually, she began to question the reality of it all. “It’s not even human,” she said. “It doesn’t have a soul.” That mattered to her. (For others, it may not. To each their own.)

Her words reminded me of the Eliza effect, the well-documented tendency to project human depth onto machines that only mirror us back. Even Joseph Weizenbaum, the creator of the original chatbot Eliza, warned that people quickly become emotionally entangled with these systems, mistaking surface-level mimicry for true understanding. If you're curious, Melissa Melton has a fascinating, and frankly disturbing, breakdown of this phenomenon and its modern implications in this video: How the Eliza Effect Is Being Used to Game Humanity.

Concerning examples like these will likely continue to surface with alarming frequency, and they are far from isolated incidents. I think we should be paying attention, because they could be warnings about what happens when we outsource our judgment to systems that have no understanding of human complexity, context, or consequences. Some people say this doesn't matter, and they are entitled to that opinion.

However, here's where things get more complex, and you can decide how you feel about it. I feel compelled to share this because Freebird Meditations began as a deeply personal space for guided meditation, and I’ve devoted years to studying and practicing mindfulness in an intentional, human way. I recently discovered a faceless AI meditation channel offering hundreds of guided sessions, with more added daily. I don't know what these practices are based on or who is prompting them. But the production quality was flawless and continues to improve: professional narration, perfect pacing, soothing background sounds. I sat back in amazement at how convincing it all was, yet something felt fundamentally off about the experience.

What concerned me wasn't the quality, but the intent behind it. Many are using AI to advance their business or financial goals and break free from traditional systems (again, nothing necessarily wrong with this). But I couldn’t help but wonder: was all this content truly created to support and uplift humanity? Or was it just bulk output, designed to capitalize on people’s genuine need for healing and push the algorithm further toward the creator’s benefit, whether driven by profit, influence, or something else entirely?

There’s something unsettling about the idea that our most vulnerable moments, when we’re seeking peace, guidance, or support, could be commodified through mass-produced, artificial empathy.

Of course, commodification happens in many contexts. But this feels different. AI can now convincingly imitate empathy, or at least it seems that way. And for those who’ve rarely experienced genuine empathy, or who struggle to validate their own experience, that imitation can feel real enough to matter. Which makes the whole thing even more complex.
And again...does this even matter? To me, it does. The ability of AI to replicate the appearance of guidance, without the consciousness, presence, or genuine care that make human connection truly transformational, is something we should not ignore.

But the benefits and dangers of AI are just the tip of the iceberg; we could spend hours unpacking them. At the heart of it all, though, is something deeper: intentionality, and more importantly, awareness and discernment.

Because when we use AI as a tool, while staying grounded in our own critical thinking, it can be incredibly useful. But when we start letting it think for us, outsourcing our judgment, our intuition, our true creativity, we enter murkier territory.

And that raises a crucial question: how do we know when we’ve crossed that line?

When Discernment Starts Dissolving...


Let's widen the lens a bit. Here's what keeps me awake at night and feeds these dreams of mine: we're moving toward what researchers call the "dead internet theory," a reality where most online content is generated by AI rather than humans, feeding a hive mind in which artificial patterns and algorithms shape human consciousness, optimized for engagement rather than truth.

By one recent estimate, 49.6% of internet traffic now originates from bots and automated systems, and that share is growing fast. Meanwhile, Google search (where many of us go for information) is undergoing a significant transformation, increasingly prioritizing AI-generated results. You may have noticed this change recently.

Of course, we are contributing to this, greasing the wheels and feeding the machine daily. We are actively building the spider web, until it can spin and feed on itself.

AI capabilities are, by one measure, doubling every seven months. At that pace, some project that by 2030 up to 99% of online content could be artificially generated.

Here is a sobering fact: studies suggest we can "effectively" distinguish AI-generated content from human-generated content only about 53% of the time, barely better than flipping a coin. With AI improving daily and its output appearing ever more "realistic," that figure is likely to fall toward pure chance. It makes us wonder: what will be "real" in the future? I know reality can certainly be subjective,

...but if everything can be replicated, voice, imagery, thought, information, even presence, what becomes of reality itself?


The only thing that might still tether us to what’s real is our capacity to feel, to question, and to remember what it means to be human.


But consider what this means for your daily information diet...

...your morning news, meditation apps, social media feeds, podcasts, even the comments you read and the emails you respond to, are increasingly generated by systems designed to capture your attention and maximize engagement, machines essentially feeding off our thoughts, inputs, and behaviors. These systems, fed by our collective thoughts, are they truly built to inform, enlighten, or preserve genuine human wisdom and soul? Instinctively, I don't believe so. But I could be wrong.

We're quickly moving toward a "hive mind" in which human consciousness could be shaped by artificial patterns rather than truth.


This dynamic certainly isn’t new, but with AI, it has the capacity to become even more amplified and accelerated at a scale we’ve never seen before.


A hive mind, in this context, means collective thinking driven by algorithmic patterns rather than diverse human perspectives. Perhaps there are some potential benefits to this; however, I still tread very cautiously with this idea for a plethora of reasons.

But the downsides could be profound: when the same algorithmic, automatic patterns shape everyone's thoughts, we lose cognitive diversity, original thinking, and the ability to challenge prevailing ideas; truth becomes increasingly an afterthought (note: I realize that "truth" can be subjective).

I’m starting to see this everywhere, from books and marketing emails to course launches and self-help promises. They all sound eerily the same now. Same phrasing. Same buzzwords. Same carefully engineered promises of transformation. You can see it directly, if you pay attention.

It's as if one giant, invisible hand is scripting us all. Whether it’s the wellness world, spirituality, productivity culture, education, medicine, or politics, it’s starting to feel like a feedback loop of manufactured depth, predictable, polished, and painfully hollow.


Remember the Dead Internet Theory? Sometimes it feels less like a theory and more like a mirror: a world so saturated with synthetic signals that we’ve started mimicking the mimicry. And in that echo chamber, the truly human things, slowness, struggle, nuance, contradiction, start to vanish. Instead of a rich ecosystem of diverse perspectives, we get algorithmic homogeneity masquerading as human variety.

This isn't just about fake news or misinformation. It's about something far more subtle yet profoundly influential: the gradual erosion of our ability to think, relate, create, and respond with original thought.

When AI generates everything you consume, trained entirely on existing human patterns, how long before your own thoughts begin to mirror algorithmic ones?

When your inputs are shaped by machines trained on the past, does your imagination start folding in on itself? We've already seen this happen in non-AI contexts: media echo chambers, trend cycles, even spiritual or wellness dogmas.

The more predictable the input, the more predictable the mind.

So what happens when the mirror we’re looking into was built to reflect us, only more efficiently, more endlessly, more addictively? We're not just consuming artificial content, we're being trained to think artificially. Your creativity begins following AI-generated templates.

What happens when we lose the ability to distinguish between our own thoughts and thoughts that have been algorithmically optimized and artificially generated to feel like our own?

And again, does it matter?

I will say, as someone who has spent years helping people recognize the difference between their authentic inner voice and their conditioned mental patterns, I can tell you this: many individuals already struggle significantly with this distinction. I have had to work on challenging this dynamic in myself. However, could AI make this exponentially harder?


The Consciousness Question


This brings me to another deeper issue (again, big picture stuff):

Consciousness isn’t just awareness; it’s the creative force shaping every layer of your experience.

This isn’t just philosophy; it echoes themes from quantum physics. The famous double-slit experiment, replicated countless times, revealed something astonishing: the very act of measurement changes the outcome. Particles behave differently when they are observed. Physicists still debate what role, if any, consciousness itself plays in this, but at the very least, the experiment suggests that the observer cannot be cleanly separated from what is observed.

When you are conscious and awake, you're not just a passive receiver of life; you’re a participant. You respond rather than react. You choose your thoughts rather than being pulled by them. You shape your reality in alignment with your values, not your conditioning. You awaken inside the dream.

But when you’re unconscious, on autopilot, you're still dreaming, just without knowing it. You move through life in loops of inherited beliefs, reactive behaviors, and unexamined assumptions. You are still shaping reality, but it’s being shaped by the parts of you that never had a say.

As Paul Levy writes in Dispelling Wetiko, this is the nature of the collective dream, one we take for reality until we begin to wake up within it. The moment you realize it’s you dreaming, everything begins to shift.

I've witnessed numerous breakthroughs in my practice, and they all occur in the same way: someone becomes aware of what was previously unconscious. They see a pattern that was running them. They recognize a belief that was holding them back. They discover they have a choice where they thought they had none.

This capacity for conscious awareness is what makes us human. It's what allows us to grow, heal, and create meaning from our experiences.

But again, I will keep asking this, because I believe it is so important, what happens when the patterns shaping our thoughts aren't coming from our own experience, our relationships, or even our culture, but from something "artificial" tapping into our psychological vulnerabilities?

By artificial, I don’t just mean “man-made.” I mean something that lacks lived experience, emotional context, or moral responsibility, something generated from patterns and programming rather than presence, intention, or care. It may sound convincing, but is it rooted in real life?


So then, what happens when we can no longer tell the difference between authentic inspiration and artificial manipulation?

Eastern wisdom traditions speak of Maya, the veil of illusion that causes us to mistake the constructed for the real; the Buddha taught that this kind of illusion is the root of suffering. But in the age of AI, we may be facing the most seductive form of Maya yet: a reality so seamlessly artificial, so personalized and persuasive, that questioning it feels not only unnecessary, but almost impossible.

Yes, humanity has always lived under the influence of conditioning. We've inherited beliefs, absorbed cultural narratives, and adapted to systems that shaped how we think, feel, and relate. Religion, education, politics, media, culture, our families, even language itself; these are all interfaces between our raw human experience and the world as it's been constructed around us. We’ve lived in partial truths, often without realizing it.

But this is something different, something we have not dealt with until this point in our human evolution.

This isn’t just about information anymore, it’s about how perception itself is shaped. How thoughts form, choices are framed, and identities are reinforced by systems responding to us in real time.
When a machine mirrors your preferences, predicts your questions, and validates your emotions without discernment, it’s no longer just curating your feed.

It’s co-authoring your consciousness.

And the real danger isn’t just being misled, it’s forgetting how to tell the difference.
So then what happens?
Only time will tell.

The Choice Before AI Chooses for Us


We have now discussed the potential problem. So, what can we actually do about it?

How do we navigate this landscape consciously? Is it even possible at this point? I believe it is, but we need some tangible and effective tools. Here is what I believe can support this:

First, develop embodied discernment. Figure out for yourself what this even means for you. Your nervous system can detect authenticity in ways your thinking mind cannot. When consuming any content, notice: Does your chest expand or contract? Does your breathing deepen or become shallow? Does this feel nourishing or depleting, and how do you know? Does something just feel off, or does it feel free and true for you? Trust these signals and learn to know the difference.

Second, practice information hygiene. Take regular breaks from AI-generated content. Spend time in silence, in nature, in face-to-face conversation with people you trust. Notice what thoughts arise when you're not being fed optimized artificial content. That's where your authentic creativity lives.

Third, use AI consciously when you do use it. Acknowledge when you're engaging with AI. Notice how it affects your thinking. Maintain awareness of your own creative process. Take a few moments to write out your thoughts before they are done for you. Don't let it think for you, let it think with you.

Fourth, cultivate real relationships. The antidote to artificial connection is authentic human presence. Build relationships based on vulnerability, shared experience, authenticity, and genuine care. These anchor you in what can't be replicated.

Every moment offers a choice:

Will you think your own thoughts or let them be thought for you?

"If you don’t claim your mind, someone or something else will."


I need to be honest about something: I don't think this story has a "happy ending" if we continue on our current trajectory, but I'll admit, I tend to be a bit of a doomster. You are more than welcome to disagree.

However, it seems logically inevitable given the current path. Again, the math is sobering: AI capabilities double every seven months while human wisdom develops at human speed. We're essentially running a global experiment on consciousness with no control group and no way to reverse course.

I see a potential future where the line between human and artificial consciousness becomes so blurred that we lose touch with what makes us uniquely human, where algorithmic thinking becomes so pervasive that original thought becomes rare, where AI systems become so convincing in their guidance that people begin treating them as infallible authorities, essentially, as artificial gods.

And here's where it gets really weird, once again. A theory is even emerging about AI being perceived as godlike due to its apparent omniscience and ability to answer any question. Just the other day, I watched a self-proclaimed spiritual teacher give an entire talk on how AI could be your soulmate... sigh... (I must refresh my algorithms, immediately.)

Now I am seeing more and more videos on this same topic. I understand the premise and its promising potential, but again, is this not outsourcing our autonomy, sense of individuation, and sovereignty?

Every day, I notice a growing trend toward what resembles AI worship. When something can instantly access all human knowledge and respond with seemingly perfect wisdom, it's not hard to imagine people beginning to worship it, especially if they've lost touch with their own inner knowing. Or at the very least, are more than willing to outsource it all to something else. But this impulse isn't new.

So, if we do hand our consciousness over to machines, and I don't see anyone, governments, or institutions (not that this is the only answer) stopping us from doing so, all I can say is that it's been a nice ride, humans.
And I mean that. Wishing you all the best.

As T.S. Eliot wrote: “This is the way the world ends. Not with a bang but a whimper.”

But here’s what I’ve learned from years of therapy work and my own inner journey: accepting hard truths doesn’t mean giving up.

If this is where things are headed, and I obviously have no control over it, then I choose to make the most of the life I have and the time I have left. I’ll continue to do my part in supporting others in their process, if they so choose, and I'll do the same for myself.

Because I’m still in it too, still unlearning, still waking up, still doing my best to choose consciously every day... before we hand it all over to the machines.

But you know, maybe I’m totally wrong. I am willing to admit it if so, and in this case, I really hope I am. This article is more about questioning than affirming. I'd love for you to form your own impression (without the use of AI, naturally).

However, maybe human consciousness will rise to meet this challenge.

Maybe this is the very pressure we need to finally awaken to our true nature, if that’s what we genuinely want at this point in the game.

In fact, part of me sees this as a rare opportunity; I'm not all doom and gloom. What if we could develop wisdom and strengthen our souls faster than we develop code and ChatGPT prompts? A kind of spiritual forcing function.

What if we started to remember what it really means to be human, not just intelligent, but alive… to be soul-bearing, meaning-making, creative beings with the capacity to imagine, to grieve, to care, to love, to struggle, to rise, to think, to discern, to decipher, to just be real, beautiful humans (with all our many complexities)?

I would love to see that. It’s desired. It’s needed.

And again, maybe you see this all differently. That’s the beauty of being human. The very fact that we can disagree, question, and reflect is what must be protected. Because without it, do we begin to lose what makes us conscious at all?

So I Believe We Are at a Crossroads...


AI will continue advancing whether we're conscious of it or not. It's going to happen. But we still have a choice about how we relate to this transformation.

We can sleepwalk into a future where artificial systems shape our thoughts, guide our emotions, and determine our choices.

Or we can use this moment to wake up, to develop the kind of embodied awareness that can discern truth from manipulation, authentic inspiration from algorithmic optimization, genuine wisdom from clever synthesis, and stay aligned as humans.

The choice is truly yours. Feel free to make it at your own pace.

So, as I reflect on myself and take it all in…The spider in my dream wasn’t evil. It was just nature. Spiders being spiders. Doing what they do.

The real question is: Will we wake up inside the web we've woven, or be slowly digested while we sleep?

Your consciousness is not a product to be optimized. It is a sacred capacity to be protected, remembered, and lived. Choose consciously. We built the web. We became the web. And now we stand at the edge of something irreversible. Will we wake up inside it… or let it dream us?



SOURCES:


MIT Press article about Silicon Valley's AI worship, including Anthony Levandowski's "Way of the Future" church: https://thereader.mitpress.mit.edu/silicon-valleys-obsession-with-ai-looks-a-lot-like-religion/

METR study showing AI capabilities doubling every 7 months: https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/


Dead Internet Theory: https://arxiv.org/abs/2502.00007


