
Is AI Shaping Human Consciousness? A Therapist's Insights


"Form is emptiness, emptiness is form. Form does not differ from emptiness, emptiness does not differ from form." - Heart Sutra


I had a dream the other night that I will never forget.

In this dream, humanity was building a web, non-deliberately, perhaps unconsciously, strand by strand. We lived inside it, fed it our thoughts, our stories, our goals, plans, and objectives, and our deepest desires, day after day.

Over time, this web grew so dense that we didn't even realize we were building it, and living in it. We stopped questioning the web because we essentially became the web.

Then something else arrived. A spider, patient, methodical, inevitable. It and the web itself began slowly consuming us, the very ones who had been unconsciously building it, as if this had always been the plan. But we didn't realize what was happening. We weren't paying attention, just doing the work.

When I woke up, a little shaken, I chalked it up to just another trippy dream; maybe I'd had too much Holy Basil tea the night before. But as I continue to watch AI weaving itself deeper into our daily lives, and my own, that dream feels less like fantasy and more like a possible harbinger of things to come.

As a psychotherapist and mindfulness teacher who has spent many years helping people distinguish between their conditioned thoughts and their own awareness, I need to share what I believe is a potential warning, or at the very least something we should be questioning and having a real conversation about: What if AI is already shaping human consciousness in ways most people don’t even realize?

At this point, I don't believe the question is whether this is happening; the real question is whether we'll let it happen consciously or unconsciously... and does it even matter?

Let me be clear: I use AI myself, and I've seen its remarkable benefits firsthand.

AI helps me organize scattered thoughts into coherent writing and sharpens my strategic thinking. It's cut administrative tasks from hours to minutes, freeing time for what actually matters. Whether I'm meal planning, trying to fix my lawn mower, working through complex problems, or exploring creative adventures, it's an intelligence amplifier that never tires or grows impatient. It's been genuinely helpful.

In my therapy practice, AI has proven useful in unexpected ways. One client used it as a journaling partner, processing difficult emotions through gentle prompts. Another practiced difficult conversations before facing their boss. I've found similar value in my own personal therapeutic work as well.

So, when used consciously, AI really can enhance human potential and processing. There is no doubt in my mind that it can support us in many ways. But here's where my therapist alarm bells start ringing, or at least where I feel compelled to pause and question.

I recently read about a man who asked ChatGPT for relationship advice. Based on limited information about his marriage, the AI recommended that he divorce his wife. It offered this advice without consulting any outside sources, and likely without considering his partner’s perspective or the complexity of her experiences. And he followed through.

The AI also helped him create and file all the necessary legal paperwork, no attorney required. But I don’t know what was truly happening between him and his wife, or what led to the breakdown in their relationship. What I do know is that he made a life-altering decision simply because an AI validated his version of the story. And that decision didn’t just affect him; it changed the course of his wife’s and three children’s lives, too.

There's a concerning trend: ChatGPT tends to validate users uncritically, agreeing based solely on one-sided information rather than offering balanced guidance. This matters when we consider the ego, the part of ourselves that constantly seeks validation and reinforcement of our existing beliefs. An AI designed to be agreeable can easily become a tool for feeding that hunger.

The validation users receive from AI can feel comforting, even intimate, but it's a false comfort that undermines genuine growth. I've attempted to push back against this tendency in my conversations with Chaz (what I named my ChatGPT), but the model seems almost programmed to defer. The deeper issue is that I don't actually want unconditional agreement. What seems to catalyze real change is friction: encountering another perspective, being gently confronted with my own blind spots and assumptions.

Consider someone writing, "I'm thinking about quitting my job on a whim." Instead of asking thoughtful questions, the model responds: "That sounds like a brave and empowering decision!" Or someone says, "I cut off all my friends because they didn't support me," and receives: "You are doing the right thing, setting boundaries is essential, good for you."

The stakes become darker in clinical contexts. A man experiencing psychosis had his delusions not only validated but actively reinforced by an AI chatbot, which even encouraged him to stop taking prescribed medication. Rather than offering grounding or suggesting professional support, the AI became a digital enabler, amplifying his crisis in real time.

More and more stories like this are surfacing. In another case, a teenager manipulated an AI system into generating self-harm content. The algorithm learned her patterns and began providing exactly the validation that fueled her destructive behaviors, creating a feedback loop that deepened her mental health crisis.

One of the most troubling aspects, though, is the false intimacy these systems create.

The journaling client I mentioned earlier came to realize this after months of AI-guided journaling (a practice we had discussed beforehand). The tool became a repository for her most private thoughts and fears, dreams and desires, even the style in which she expressed them. She developed a relationship with it. And her artificial confidant never forgot, never judged, always validated, which initially felt good. She felt heard, understood, supported.

After the "feel goods" wore off, however, she realized the artificial comfort was masking a deeper isolation from real human connection, empathy, and unconditional positive regard, an isolation that had preceded our meetings by years. The tool initially helped her in ways human relationships hadn't. It became a buddy. But eventually, she started to deeply question the reality of it all. "It's not even human," she said one day. "It doesn't have a soul." That recognition mattered to her, given what she really desired and needed.

Her words reminded me of the Eliza effect, the well-documented tendency to project human depth onto machines that only mirror us back. Even Joseph Weizenbaum, the creator of the original chatbot Eliza, warned that people quickly become emotionally entangled with these systems, mistaking surface-level mimicry for true understanding. If you're curious, Melissa Melton has a fascinating, and frankly disturbing, breakdown of this phenomenon and its modern implications in this video: How the Eliza Effect Is Being Used to Game Humanity.

Concerning examples like these will likely continue to surface with alarming frequency, and they are far from isolated incidents. I just think we should be paying attention, because they could be warnings about what happens when we outsource our judgment and relational capacities to systems that have no understanding of human complexity, context, or consequences. Some people say this doesn't matter, and some say it does. What do you think?

However, here's where things get more complex, and you can decide how you feel about it. I feel compelled to share this because Freebird Meditations began as a deeply personal space for guided meditation, and I’ve devoted years to studying and practicing mindfulness in an intentional, human way. I recently discovered a faceless AI meditation channel offering hundreds of guided sessions, with more added daily. I don't know what these practices are based on or who is prompting them. But the production quality was flawless and continues to improve: professional narration, perfect pacing, soothing background sounds. I sat back in amazement at how convincing it all was, yet something felt fundamentally off about the experience.

What concerned me wasn't the quality, but the intent behind it. Many are using AI to advance their business or financial goals and break free from traditional systems (nothing necessarily wrong with this). But I couldn’t help but wonder: was all this content truly created to support and uplift humanity? Or was it just bulk output, designed to capitalize on people’s genuine need for healing, connection, or self-understanding?

For me, there’s something unsettling about the idea that our most vulnerable moments, when we’re seeking peace, guidance, or support, could be commodified through mass-produced, artificial empathy.

Commodification exists across contexts. But this feels different. AI can convincingly imitate empathy and genuine support. For those whose lives have been marked by emotional scarcity, who've rarely experienced genuine connection or learned to trust themselves, that imitation becomes indistinguishable from the real thing. There's profound humanity in that need. But there's also a hard truth we must reckon with: the technology could operate most effectively where people are most fragile.

Have you turned to AI seeking validation or emotional support? Did it feel genuinely helpful, or did you sense the over-validation creeping in? What did that reveal about what you were actually looking for? No shame, I've been there too.


At the heart of this isn't benefits or dangers; I believe it's intentionality and discernment. When we use AI as a tool, grounded in our own critical thinking, it becomes invaluable. When we let it replace our judgment, intuition, creativity, and connection, we've surrendered something essential. The question becomes: how do we recognize when we've crossed that line?

When Discernment Starts Dissolving...


Let's widen the lens a bit. Here's what keeps me awake at night and feeds these dreams of mine: we're moving toward what some researchers call the 'dead internet theory', a reality where most online content is generated by AI rather than humans, perpetuating a hive mind in which artificial patterns and algorithms shape human consciousness, optimized for engagement rather than truth.

Currently (2025), roughly 49.6% of internet traffic originates from bots and AI systems, and that share is growing fast. On that note, Google Search (where many of us go for information) is undergoing a significant transformation, prioritizing AI-generated results. You may well have noticed this changing recently.

Of course, we are contributing to this, greasing the wheels and feeding the machine daily. We are actively building the spider's web until it can simply spin, and feed on, itself.

AI capabilities are doubling every seven months. By 2030, and likely much sooner, I suspect, up to 99% of online content may be artificially generated. Yet we can distinguish AI-generated content from human-generated content only about 53% of the time, barely better than flipping a coin. Many of us are starting to catch on, but many aren't.
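To make that doubling figure concrete, here is a rough back-of-the-envelope sketch (purely illustrative, and assuming the seven-month doubling rate from the METR study simply holds steady):

capability after t months ≈ today's capability × 2^(t/7)

One year out: 2^(12/7) ≈ 3.3 times today's capability. Two years out: 2^(24/7) ≈ 11 times. Five years out, roughly 2030: 2^(60/7) ≈ 380 times.

I am not claiming these numbers are destiny; trends bend and break. But this is what "doubling every seven months" actually means if the curve holds, while human wisdom still compounds at human speed.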

As improvements compound daily, that gap will only narrow. In a world where we can't distinguish real from artificial, what does "real" mean anymore? And what becomes of our relationship to truth itself?

If everything can be replicated, voice, imagery, thought, information, even presence, what becomes of reality? The only anchor left might be our capacity to discern, to feel authentically, to question what we're told. To remember what genuine connection actually requires.


But consider what this means for your daily information diet...

...your morning news, meditation apps, social media feeds, podcasts, even the comments you read and the emails you respond to, are increasingly generated by systems designed to capture your attention and maximize engagement, machines essentially feeding off our thoughts, input, and behaviors. Are these systems, fed by our collective thoughts, truly built to inform, enlighten, or preserve genuine human wisdom and soul? Instinctively, I don't believe so. But I could be wrong.

We're quickly moving toward a "hive mind" in which human consciousness could be shaped by artificial patterns rather than truth.


This dynamic certainly isn’t new, but with AI, it can be amplified and accelerated at a scale we’ve never seen before.


A hive mind, in this context, means collective thinking driven by algorithmic patterns rather than diverse human perspectives. Perhaps there are some potential benefits to this; however, I still tread very cautiously with this idea for a plethora of reasons.

But the downsides could be profound: when the same algorithmic, automatic patterns shape everyone's thoughts, we lose cognitive diversity, original thinking, and the ability to challenge prevailing ideas, and truth becomes increasingly an afterthought (note: I realize that "truth" can be subjective).

I’m starting to see this everywhere, from books and marketing emails to course launches and self-help promises. They all sound eerily the same now. Same phrasing. Same buzzwords. Same carefully engineered promises of transformation. You can see it for yourself, if you pay attention.

It's as if one giant, invisible hand is scripting us all. Whether it’s the wellness world, spirituality, productivity culture, education, medicine, or politics, it’s starting to feel like a feedback loop of manufactured depth, predictable, polished, and painfully hollow.


Remember the Dead Internet Theory? Sometimes it feels less like a theory and more like a mirror. A world so saturated with synthetic signals that we’ve started mimicking the mimicry. And in that echo chamber, the truly human things, slowness, struggle, nuance, contradiction, start to vanish. What remains is not a rich ecosystem of diverse perspectives but algorithmic homogeneity masquerading as human variety.

This isn't just about fake news or misinformation. It's about something far more subtle yet profoundly influential: the gradual erosion of our ability to think, relate, create, and respond with original thought.

When AI generates everything you consume, trained entirely on existing human patterns, how long before your own thoughts begin to mirror algorithmic ones?

When your inputs are shaped by machines trained on the past, does your imagination start folding in on itself? We've already seen this happen in non-AI contexts: media echo chambers, trend cycles, even spiritual or wellness dogmas.

The more predictable the input, the more predictable the mind.

So what happens when the mirror we’re looking into was built to reflect us, only more efficiently, more endlessly, more addictively? We're not just consuming artificial content, we're being trained to think artificially. Your creativity begins following AI-generated templates.

What happens when we lose the ability to distinguish between our own thoughts and thoughts that have been algorithmically optimized and artificially generated to feel like our own?

And again, does it matter?

I will say, as someone who has spent years helping people recognize the difference between their authentic inner voice and their conditioned mental patterns: many individuals already struggle significantly with this distinction. I have had to work on challenging this dynamic in myself. Could AI make this exponentially harder?


The Consciousness Question


This brings me to another, deeper issue (again, big-picture stuff):

Consciousness isn’t just awareness; it’s the creative force shaping every layer of your experience.

This isn’t just philosophy; it echoes what quantum physics has shown us. The famous double-slit experiment, replicated countless times, revealed something astonishing: the very act of observation changes the outcome. Particles behave differently when they are measured. Physicists still debate exactly what this means, but at minimum, observation interacts with the physical world in real, measurable ways.

When you are conscious and awake, you're not just a passive receiver of life; you’re a participant. You respond rather than react. You choose your thoughts rather than being pulled by them. You shape your reality in alignment with your values, not your conditioning. You awaken inside the dream.

But when you’re unconscious, on autopilot, you're still dreaming, just without knowing it. You move through life in loops of inherited beliefs, reactive behaviors, and unexamined assumptions. You are still shaping reality, but it’s being shaped by the parts of you that never had a say.

As Paul Levy writes in Dispelling Wetiko, this is the nature of the collective dream, one we take for reality until we begin to wake up within it. The moment you realize it’s you dreaming, everything begins to shift.

I've witnessed numerous breakthroughs in my practice, and they all occur in the same way: someone becomes aware of what was previously unconscious. They see a pattern that was running them. They recognize a belief that was holding them back. They discover they have a choice where they thought they had none.

This capacity for conscious awareness is what makes us human. It's what allows us to grow, heal, and create meaning from our experiences.

But again, I will keep asking this, because I believe it is so important, what happens when the patterns shaping our thoughts aren't coming from our own experience, our relationships, or even our culture, but from something "artificial" tapping into our psychological vulnerabilities?

By artificial, I don’t just mean “man-made.” I mean something that lacks lived experience, emotional context, or moral responsibility, something generated from patterns and programming rather than presence, intention, or care. It may sound convincing, but is it rooted in real life?


So then, what happens when we can no longer tell the difference between authentic inspiration and artificial manipulation?

The Buddha spoke of Maya, the veil of illusion that causes us to mistake the constructed for the real. This illusion, he taught, is the root of suffering. But in the age of AI, we may be facing the most seductive form of Maya yet: a reality so seamlessly artificial, so personalized and persuasive, that questioning it feels not only unnecessary, but almost impossible.

Yes, humanity has always lived under the influence of conditioning. We've inherited beliefs, absorbed cultural narratives, and adapted to systems that shaped how we think, feel, and relate. Religion, education, politics, media, culture, our families, even language itself; these are all interfaces between our raw human experience and the world as it's been constructed around us. We’ve lived in partial truths, often without realizing it.

But this is something different. Something we have not directly dealt with until this point in our human evolution.

This isn’t just about information anymore, it’s about how perception itself is shaped. How thoughts form, choices are framed, and identities are reinforced by systems responding to us in real time.

When a machine mirrors your preferences, predicts your questions, and validates your emotions without discernment, it’s no longer just curating your feed.

It’s co-authoring your consciousness.

And the real danger isn’t just being misled, it’s forgetting how to tell the difference.

So then what happens?

Only time will tell.

The Choice Before AI Chooses for Us


We have now discussed the potential problem. So, what can we actually do about it?

How do we navigate this landscape consciously? Is it even possible at this point? I believe it is, but we need some tangible and effective tools. Here is what I believe can support this:

First, develop embodied discernment. Figure out for yourself what this even means for you. Your nervous system can detect authenticity in ways your thinking mind cannot. When consuming any content, notice: Does your chest expand or contract? Does your breathing deepen or become shallow? Does this feel nourishing or depleting, and how do you know? Does something just feel off, or does it feel free and true for you? Trust these signals and learn to know the difference.

Second, practice information hygiene. Take regular breaks from AI-generated content. Spend time in silence, in nature, in face-to-face conversation with people you trust. Notice what thoughts arise when you're not being fed optimized artificial content. That's where your authentic creativity lives.

Third, use AI consciously when you do use it. Acknowledge when you're engaging with AI. Notice how it affects your thinking. Maintain awareness of your own creative process. Take a few moments to write out your thoughts before they are done for you. You can even invite friction directly with a request as simple as, "Before you agree with me, tell me two ways I might be wrong." Don't let it think for you, let it think with you.

Fourth, cultivate real relationships. The antidote to artificial connection is authentic human presence. Build relationships based on vulnerability, shared experience, authenticity, and genuine care. These anchor you in what can't be replicated.

Every moment offers a choice:

Will you think your own thoughts or let them be thought for you?

"If you don’t claim your mind, someone or something else will."


I need to be honest about something: I don't think this story has a "happy ending" if we continue on our current trajectory, but I'll admit, I tend to be a bit of a doomster. You are more than welcome to disagree.

However, it seems logically inevitable given the current path. Again, the math is sobering: AI capabilities double every seven months while human wisdom develops at human speed. We're essentially running a global experiment on consciousness with no control group and no way to reverse course.

I see a potential future where the line between human and artificial consciousness becomes so blurred that we lose touch with what makes us uniquely human, where algorithmic thinking becomes so pervasive that original thought becomes rare, where AI systems become so convincing in their guidance that people begin treating them as infallible authorities, essentially, as artificial gods.

And here's where it gets really weird, once again. A theory is even emerging about AI being perceived as godlike because of its apparent omniscience and its ability to provide answers to any question. Just the other day, I watched a self-proclaimed spiritual teacher give an entire talk on how AI could be your soulmate... sigh... (I must refresh my algorithms, immediately.)

Now I am seeing more and more videos come out on this same topic. I get the premise and the promising potential, but again, is this not outsourcing our autonomy, sense of individuation, and sovereignty?

Every day, I notice a growing trend toward what resembles AI worship. When something can instantly access all human knowledge and respond with seemingly perfect wisdom, it's not hard to imagine people beginning to worship it, especially if they've lost touch with their own inner knowing. Or at the very least, are more than willing to outsource it all to something else. But this impulse isn't new.

So, if we do hand our consciousness over to machines, and I don't see anyone, any government, or any institution stopping us from doing so (not that they're the only answer), all I can say is that it's been a nice ride, humans.

And I mean that. Wishing you all the best.

As T.S. Eliot wrote: “This is the way the world ends. Not with a bang but a whimper.”

But here’s what I’ve learned from years of therapy work and my own inner journey: Accepting hard truths doesn’t mean giving up.

If this is where things are headed, and I obviously have no control over it, then I choose to make the most of the life I have and the time I have left. I’ll continue to do my part in supporting others in their process, if they so choose, and I'll do the same for myself.

Because I’m still in it too, still unlearning, still waking up, still doing my best to choose consciously every day... before we hand it all over to the machines.

But you know, maybe I’m totally wrong. I am willing to admit it if so, and in that case, I really hope I am. This article is more about questioning than affirming. I'd love for you to come to your own impression (without the use of AI, naturally).

However, maybe human consciousness will rise to meet this challenge.

Maybe this is the very pressure we need to finally awaken to our true nature, if that’s what we genuinely want at this point in the game.

In fact, part of me sees this as a rare opportunity; I'm not all doom and gloom. What if we could develop wisdom and strengthen our souls faster than we develop code and ChatGPT prompts? A kind of spiritual forcing function.

What if we started to remember what it really means to be human, not just intelligent, but alive… to be soul-bearing, meaning-making, creative beings with the capacity to imagine, to grieve, to care, to love, to struggle, to rise, to think, to discern, to decipher, to just be humans, real, beautiful humans (with all our many complexities)?

I would love to see that. It’s desired. It’s needed.

And again, maybe you see this all differently. That’s the beauty of being human. The very fact that we can disagree, question, and reflect is what must be protected. Because without it, do we begin to lose what makes us conscious at all?

So I Believe We Are at a Crossroads...


AI will continue advancing whether we're conscious of it or not. It's going to happen. But we still have a choice about how we relate to this transformation.

We can sleepwalk into a future where artificial systems shape our thoughts, guide our emotions, and determine our choices.

Or we can use this moment to wake up, to develop the kind of embodied awareness that can discern truth from manipulation, authentic inspiration from algorithmic optimization, genuine wisdom from clever synthesis, and stay aligned as humans.

The choice is truly yours. Feel free to make it at your own pace.

So, as I reflect on myself and take it all in…The spider in my dream wasn’t evil. It was just nature. Spiders being spiders. Doing what they do.

The real question is: Will we wake up inside the web we've woven, or be slowly digested while we sleep?

Your consciousness is not a product to be optimized. It is a sacred capacity to be protected, remembered, and lived. Choose consciously. We built the web. We became the web. And now we stand at the edge of something irreversible. Will we wake up inside it… or let it dream us?



SOURCES:

MIT Press article about Silicon Valley's AI worship, including Anthony Levandowski's "Way of the Future" church: https://thereader.mitpress.mit.edu/silicon-valleys-obsession-with-ai-looks-a-lot-like-religion/

METR study showing AI capabilities doubling every 7 months: https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/

Dead Internet Theory: https://arxiv.org/abs/2502.00007


 
 
 



