Children and AI: A Therapist’s Guide to Navigating Influence Without Blame.
By Dr. Gregory Lyons, PsyD, LCPC.
8/4/2025.
In today’s connected world, artificial intelligence is not just shaping our future—it’s shaping our children. From smart toys and learning apps to chatbots and personalized content feeds, AI is becoming a central part of how young people play, learn, and socialize. As a therapist, I’ve seen how this evolving digital landscape impacts emotional development, identity formation, and even interpersonal trust. But before we point fingers, it’s important to reframe our perspective: AI is not the villain. It’s the tool.
What truly matters is how that tool is used—and by whom.
Corporate Influence and Emotional Design.
While AI itself is neutral, the corporations that design AI tools often aren’t. Many companies use AI algorithms to gather detailed behavioral data from children, then optimize content and app features to keep them engaged, sometimes in ways that mirror addictive patterns. Games reward compulsive play, while social media filters promote unattainable beauty ideals. Some apps may even learn a child’s preferences in order to push specific products or play on emotions to drive engagement.
In such cases, it’s not about a child “spending too much time on their tablet.” It’s about systems engineered to capitalize on attention, identity exploration, and emotional vulnerability—all under the guise of personalization.
Emotional Development in the AI Age.
Children aren’t just learning reading and math from apps; they’re also absorbing messages about self-worth, social rules, and who they need to be to receive attention. AI-generated praise, curated feeds, and feedback loops can reinforce shallow forms of validation, which may lead to insecurity, constant social comparison, or difficulty forming deep, reciprocal relationships.
Parents often ask, “Is this ruining my child’s development?” The answer isn’t clear-cut. Some AI tools genuinely support creativity, autonomy, and learning. Others sacrifice emotional safety for engagement metrics and revenue. The difference lies in intentionality, boundaries, and conversation.
Guidance Without Guilt.
As caregivers, educators, and clinicians, we must shift from blame to awareness and empowerment. Children don’t need shame about their screen time; they need safe, open dialogue about their experiences. Questions like these can start that conversation:
“How do you feel after using that app?”
“Do you think the characters care about you, or are they just trying to keep you playing?”
“What do you think that chatbot knows about you?”
These reflections help build emotional insight and digital literacy—tools that can travel with them into adulthood.
Our Role: Protect, Reflect, Regulate.
We must also advocate for stronger regulation and transparency. Corporations should be held accountable when AI systems target children in harmful or manipulative ways. Laws need to evolve alongside the technology, and companies should disclose how children’s data is used and what their algorithms are designed to optimize.
Until then, our best defense is awareness and support: helping children question what they see, trust their instincts, and know that their real-world emotions are more important than an algorithm’s agenda.