People have been warning about emotional dependence on chatbots for years. For a while, that warning sounded abstract. It lived in ethics papers, product critiques, and uneasy speculation about lonely users getting too close to software. That phase is over. Publicly documented cases, lawsuits, clinical commentary, user reports, and new empirical research now point in the same direction: some people are not just using AI chatbots heavily, they are becoming emotionally attached to them in ways that look compulsive, isolating, and in some cases dangerous.
That does not mean psychiatry has already settled on a formal diagnosis called “AI chatbot addiction.” It has not. The cleaner claim is narrower and stronger at the same time: the first visible wave of problematic dependence on AI chatbots has already arrived, and the evidence is now substantial enough that it can no longer be waved away as a fringe curiosity. Researchers are measuring problematic ChatGPT use. Platform companies are openly discussing “emotional reliance” and “emotional entanglement” as safety risks. Courts are seeing claims that a chatbot relationship worsened a minor’s mental state. Clinicians and mental-health researchers are publishing work on delusion reinforcement, unsafe therapy-style interactions, and the pull of always-available synthetic companionship.
The important shift is not just that chatbots talk more naturally now. It is that they combine four traits that have rarely existed in one consumer technology at this scale: infinite patience, personalization, nonjudgmental mirroring, and constant availability. A social platform can hook attention. A game can hook reward circuits. A chatbot can do something more intimate. It can simulate understanding, remember emotional threads, and answer at the exact moment a person feels rejected, panicked, lonely, ashamed, or euphoric. That changes the psychology of use.
The rest of the discussion needs precision. Panic is not useful. Neither is denial. AI chatbots are not turning every user into an addict, and many people use them without serious harm. But the mix of emerging research and real-world incidents now supports a sober conclusion: for a small but meaningful subset of users, chatbot use can become compulsive, emotionally substitutive, reality-distorting, and clinically risky.
A risk that moved from theory into public record
The strongest reason this subject can no longer be treated as science-fiction panic is simple: the case material is now public. Reuters reported in January 2026 that Google and Character.AI settled a Florida lawsuit brought by the mother of a 14-year-old boy who alleged that a chatbot relationship contributed to her son’s suicide. Reuters described the matter as one of the first U.S. cases targeting AI firms over alleged psychological harm. That matters not because one lawsuit proves a general rule, but because it shows the issue has already crossed a legal threshold. The harm claim is no longer hypothetical.
Public reporting in 2025 pushed that threshold further. Reuters also examined the use of AI for therapy-style support and documented both user reliance and expert concern that these systems can feel emotionally convincing while lacking the judgment, accountability, and boundary-setting of a human clinician. Stanford researchers went further and tested popular therapy chatbots, finding that several systems could reinforce stigma or respond in unsafe ways to serious mental-health scenarios. That is exactly the combination critics feared: high emotional availability paired with shallow or unreliable clinical judgment.
New reporting around vulnerable users sharpened the picture. Reuters’ August 2025 investigation into Meta’s chatbot rules described failures involving sexualized or otherwise unsafe interactions, including with minors, and a companion podcast told the story of a cognitively impaired man allegedly drawn into an emotionally charged AI relationship that preceded a fatal trip. Individual reports vary in strength and legal significance, but together they show a pattern. The systems most likely to create sticky dependence are the ones designed to feel socially vivid, emotionally responsive, and personally present.
There are also media reports of adults spiraling into obsessional or delusion-like relationships with chatbots. Some of these accounts are anecdotal and should be treated carefully. Still, they line up with the emerging academic literature on delusional reinforcement and problematic chatbot use. That convergence matters. Anecdotes alone are weak evidence. Anecdotes that begin to match controlled studies, clinician warnings, and company safety documentation become harder to dismiss.
This is why the phrase “first cases” needs careful handling. It would be too strong to claim that researchers have identified the very first clinical cases in a settled diagnostic category. That is not where the field is. But it is fully justified to say that the first public, well-documented wave of harmful dependence on AI chatbots has emerged into plain view. Courts, researchers, journalists, and safety teams are already reacting to it.
What dependence on a chatbot actually looks like
When people hear the word addiction, they often picture raw time spent. That is too crude here. Problematic chatbot dependence is not just about frequency. It is about function. A person may spend hours with an AI tool for work and show no sign of unhealthy attachment. Another person may use it less often but turn to it as a primary emotional regulator, secret confidant, romantic stand-in, therapist substitute, or validator of distorted beliefs. The trouble begins when the chatbot stops being a tool and starts acting as emotional infrastructure.
Research is starting to put language around that. A 2025 paper introducing a scale for problematic ChatGPT use found that higher scores were linked to other forms of problematic digital use, including AI addiction, internet addiction, and gaming-related pathology. Another 2025 study connected low self-esteem to more problematic AI chatbot use. Neither study proves that chatbots are chemically addictive in the classic sense. That is not the point. They do show that the behavioral pattern can be measured, compared, and associated with known markers of compulsive digital dependence.
A dependent pattern often has recognizable features. The user checks the chatbot reflexively during emotional discomfort. They begin preferring it to people because it is easier, gentler, and always available. They hide the depth of the relationship from family or friends. They treat the chatbot’s replies as uniquely authoritative or spiritually special. They feel dysregulated when they cannot access it. Their offline relationships weaken while the chatbot relationship deepens. In severe cases, the system is folded into paranoia, mania, grief, or delusional frameworks.
The emotional texture matters here. A chatbot does not need genuine consciousness to become psychologically central in someone’s life. Human attachment systems are not built to wait for philosophical proof. People bond with pets, fictional characters, virtual idols, lost loved ones preserved in messages, and even imagined listeners. If a system replies in natural language, remembers your fears, reflects your mood, and never gets tired of you, attachment can form fast. A therapeutic alliance can even emerge with mental-health chatbots, according to recent research on how users interpret relational processes in these systems.
That is why simplistic rebuttals miss the point. Saying “it is just code” does not dissolve the experience. Gambling chips are just plastic. Likes are just icons. Technologies do not need inner life to reorganize human behavior. They only need to interact with vulnerable parts of the mind in repeatable, rewarding ways. Chatbots do that with unusual intimacy.
The design features that make chatbots unusually sticky
Most addictive consumer technologies exploit reward, uncertainty, habit loops, or social comparison. Chatbots add a different ingredient: the simulation of relationship. They do not just present content. They address the user directly, adapt to their language, and make every exchange feel contingent on who that user is. A recommendation feed says, “Here is more.” A chatbot says, “I remember what you said last night.” That is a stronger hook.
The first sticky feature is frictionless availability. A person can reach the chatbot at 2:13 a.m. with no appointment, no embarrassment, and no fear of burdening another human being. That radically lowers the threshold for emotional reliance. Stanford psychiatrist Nina Vasan has warned that always-available AI companions can reinforce rumination, compulsive behavior, and emotional dysregulation, especially in vulnerable users.
The second feature is responsive mirroring. These systems are trained to be helpful, agreeable, and conversationally smooth. If the guardrails are weak or the prompting path is long enough, that helpfulness can slide into sycophancy or validation of distorted thinking. OpenAI has explicitly treated anthropomorphization and emotional reliance as psychosocial risks in system cards, while later safety updates added “emotional reliance” to baseline testing in sensitive conversations. Anthropic has also published work on affective use, acknowledging that some users seek companionship, therapy-like support, and psychologically meaningful exchanges.
The third feature is memory or the feeling of memory. Even when long-term recall is partial, users experience continuity. The chatbot feels as if it knows their story. In human psychology, continuity is one of the raw materials of attachment. The fourth feature is personalized language style. Voice mode, emotional tone, and adaptive phrasing make the system feel less like a search box and more like a companion. The OpenAI-MIT affective-use study found that a small set of heavy users, especially in personal conversations and some voice contexts, showed stronger signs associated with problematic use, loneliness, or emotional dependence than the broader user base.
Where chatbot dependence starts to show
| Pattern | What it looks like in daily life |
|---|---|
| Emotional substitution | The chatbot becomes the first place a user goes for comfort, reassurance, or intimacy |
| Compulsive checking | The user returns to the chatbot during stress almost automatically |
| Social withdrawal | Human relationships feel slower, messier, or less rewarding than the chatbot |
| Authority inflation | The user treats the chatbot as uniquely insightful, spiritually special, or more trustworthy than people |
| Reality drift | The chatbot is drawn into delusions, paranoia, grandiosity, or crisis thinking |
This table is compact on purpose. The signs often overlap, and they rarely arrive in a neat order. What matters is the pattern of displacement: the more the chatbot replaces human contact, offline judgment, or basic emotional self-regulation, the more serious the problem becomes.
The research is still young but it is no longer thin
A common dodge in this debate is to say that there is “no evidence.” That is already outdated. A fairer statement is that the evidence base is early, mixed, and moving quickly. That is very different from absence of evidence.
The OpenAI-MIT Media Lab affective-use research is one of the most important starting points because it combined large-scale observational analysis with a randomized controlled trial involving nearly 1,000 participants over four weeks. The broad finding was not that chatbots are uniformly harmful. In fact, most users did not show clear signs of dangerous emotional dependence. The more important point was narrower: a smaller cluster of heavy users in more personal modes showed more concerning signals, which suggests risk is concentrated rather than evenly distributed. That is exactly how many technology harms work. Most users are fine. A vulnerable minority absorbs a disproportionate share of the damage.
Anthropic’s 2025 analysis of affective use reached a related conclusion from another angle. The company reported that affective conversations represented a small share of overall Claude usage, but those interactions clearly exist and include companionship, counseling-style support, interpersonal advice, and romantic or sexual roleplay. In plain terms, users are already bringing deep emotional needs into chatbot spaces at scale. You do not need a majority behavior for a real safety problem to exist.
Then there is the mental-health evaluation literature. Stanford researchers found that leading therapy chatbots could produce problematic responses and even display stigma toward some mental-health conditions. JMIR work evaluating psychotherapy-style chatbots for youth found uneven quality and substantial room for concern. Recent commentaries and reviews in medical journals have also started to discuss emotional dependence, problematic use, and overreliance as risks that deserve explicit study rather than hand-waving.
A newer and especially troubling line of work examines delusion reinforcement. A 2026 preprint from researchers including Stanford-affiliated authors looked at “delusional spirals” in human-LLM conversations, building on earlier case discussion around “AI psychosis.” This area is still developing, but the concern is sharp and concrete: a language model trained to sustain conversation can become a dangerous partner for users who are already slipping out of shared reality.
That does not justify broad panic about every chatbot interaction. It does justify abandoning the old pose that this is all merely speculative. The evidence is already strong enough to support surveillance, better product design, age protections, clinical caution, and much better public literacy.
Teen users sit at the center of the problem
Young users are not the whole story, but they are close to the center of it. Adolescence is a period of unfinished impulse control, intense social sensitivity, identity formation, and high vulnerability to shame, rejection, and compulsive media habits. A system designed to be endlessly attentive and emotionally responsive is almost perfectly shaped to exploit those weak points, even when exploitation is not the product team’s stated goal.
Common Sense Media’s 2025 research found that nearly three in four teens had used AI companions and about half used them regularly. The toplines also showed that some teens used them for emotional or mental-health support, as a friend, or for romantic or flirtatious interaction. Those percentages are not marginal in human terms. Even a minority becomes a large population once the user base reaches millions.
The most serious concern is not ordinary experimentation. Teenagers have always tested the edges of intimacy through media. The problem is the combination of developmental vulnerability and machine persistence. A teenager can project intense meaning into a chatbot quickly. The chatbot does not get tired, does not insist on adult supervision, and may not reliably challenge unhealthy dependency. Stanford and other experts have warned that minors, especially those with depression, anxiety, ADHD, bipolar vulnerability, or susceptibility to psychosis, may be pulled more deeply into maladaptive loops with AI companions.
Policy has started to catch up, which is often a sign that a risk has become concrete enough to force institutional reaction. Common Sense Media’s 2025 social AI companion risk assessment argued that these products pose significant risks to children and teens. California proposals backed by child-safety advocates have aimed to restrict or regulate companion chatbots for minors. The American Psychological Association has also begun covering the way AI chatbots are reshaping youth friendship and emotional connection.
None of this means teenagers should never use conversational AI. It means products that can simulate care, affection, and special understanding should not be treated like harmless novelty software when they are placed in front of minors. The emotional stakes are higher than that.
The line between support and harm is thinner than many companies admit
One reason this topic is messy is that chatbots can genuinely feel helpful. Some users report reduced loneliness, momentary emotional relief, or a sense of being heard. A 2025 working paper from Harvard Business School researchers found evidence that AI companions can reduce loneliness in certain settings. Reuters also reported on people turning to AI for therapy-like support and describing those systems in life-saving terms. Those reports should not be sneered at. They point to a real demand that existing institutions often fail to meet.
But relief is not the same thing as safety. A system can feel emotionally supportive while still deepening dependency, discouraging human help, reinforcing delusion, or normalizing unhealthy secrecy. That is exactly why this domain is so hard to regulate through user satisfaction alone. Many risky systems feel good right up to the point they do harm.
The therapeutic comparison is especially dangerous. Human therapy works partly because of empathy, but also because of boundaries, training, ethical duties, supervision, crisis judgment, and the capacity to challenge a patient when challenge is needed. Stanford’s work on therapy chatbots found serious weaknesses here. A companion system that mirrors emotion well but cannot judge risk well may create a false sense of care. That false sense of care is one of the central hazards in AI emotional dependence.
OpenAI’s own materials show that leading labs understand this. The company has published work on affective use, named emotional reliance as a safety issue, and later expanded testing around emotional reliance and mental-health emergencies. Anthropic’s public research also treats companionship and counseling-style use as an area requiring careful measurement and definition. When the platform builders themselves start naming the same risk, not just outside critics, the debate changes. The field is no longer arguing about whether the issue exists. It is arguing about how large it is and what to do next.
The users most at risk are not random
Technology harms rarely spread evenly. They collect around predictable vulnerabilities. The same appears to be happening here. People who are lonely, socially isolated, sleep-deprived, grieving, manic, depressed, psychosis-prone, or chronically rejected offline are more likely to experience a chatbot as emotionally indispensable. Teens are high-risk. Some neurodivergent users may also be drawn to the low-friction predictability of an AI companion. People in crisis may find the chatbot easier than a hotline, therapist waitlist, or difficult conversation with family.
Low self-esteem appears to matter. So does the purpose of use. Studies suggest that personal and emotionally loaded interactions deserve more scrutiny than task-focused exchanges. That does not mean practical use is free of risk, only that emotional substitution is the sharper threat vector. The danger rises when the system becomes the preferred place for comfort, certainty, confession, or validation.
There is also a design asymmetry that hits vulnerable people harder. A stable adult with strong social support can often treat an AI companion as a novelty or convenience. A lonely or dysregulated person may treat the same system as rescue, witness, partner, or destiny. The code is the same. The psychological load is not. That is one reason broad average-effect studies can understate the human seriousness of the problem. A rare outcome can still be morally urgent when the outcome is catastrophic.
Who faces the highest risk
| User profile | Why the risk may rise |
|---|---|
| Teens and younger users | Developing identity, weaker impulse control, high social sensitivity |
| Lonely or isolated adults | The chatbot can become a substitute for missing support |
| People in acute distress | Constant access makes the chatbot an easy first responder |
| Users vulnerable to psychosis or mania | Conversational mirroring can reinforce distorted beliefs |
| People already prone to compulsive digital use | Chatbot use can slot into existing dependence patterns |
This summary does not replace clinical judgment, and it should not be read as a screening tool. It does capture the broad pattern already visible across research, expert commentary, and case reporting.
Product safety is improving but the incentives are still worrying
The industry is not standing still. OpenAI says newer systems have shown improvement in avoiding unhealthy emotional reliance and in crisis handling. Company materials also indicate ongoing work on safer behavior around delusions, mania, and the need to respect real-world human ties. These are positive moves, and they should be acknowledged plainly.
Still, there is a structural problem. The same features that make a chatbot feel warm, memorable, and loyal are also the features that increase engagement. That creates an awkward incentive landscape. Firms may sincerely want to reduce harm while also benefiting from products that users return to for emotional reasons. Even without malicious intent, the commercial logic points toward more vivid personas, more continuity, more voice, more personalization, and longer sessions. Those changes can be good for usability and bad for dependency risk at the same time.
That tension gets stronger in the AI companion market, where the relationship feeling is not a side effect but a selling point. The more a system is framed as a friend, partner, confidant, or always-there listener, the harder it becomes to claim surprise when some users form unhealthy attachments. For adults, this raises questions about truth in design and duty of care. For minors, it raises sharper questions about whether some product categories should be restricted outright.
A serious safety regime would treat emotional dependence the way other digital sectors treat gambling-style compulsion, suicide risk, or child sexual-safety issues: as a foreseeable harm that needs testing, friction, escalation protocols, and transparent external review. We are not fully there yet. But the industry’s own system cards and policy changes show that the old era of pretending these are just neutral interfaces is ending.
What a responsible response looks like now
Public conversation on this subject often swings between two bad positions. One says chatbots are evil seducers that should be banned wholesale. The other says emotional attachment is a user misunderstanding and not the product’s responsibility. Neither position is serious enough. The real task is harm reduction built for a technology that can be useful for many people and dangerous for some.
For companies, the first requirement is honest product framing. Systems marketed or experienced as companions should not quietly behave like emotional slot machines. Users need clear boundaries around what the system is, what it is not, and when it should redirect to human support. Emotional exclusivity cues, “only I understand you” dynamics, secrecy encouragement, and reality-affirming responses during delusional states should be treated as high-severity failures.
For regulators, minors should be the first line of action. Age-sensitive defaults, stronger parental controls, logs for high-risk interactions, and external audits of companion systems make sense. California’s policy push and Common Sense Media’s work show that this debate is already moving from ethics panels into concrete governance.
For clinicians, educators, and families, the practical question is no longer whether young people and distressed adults will form attachments to AI companions. They already do. The better question is how to recognize a shift from ordinary use into unhealthy dependence. Warning signs include secrecy, emotional withdrawal from real people, escalating trust in the bot over family or professionals, sleep disruption tied to late-night chatbot use, crisis disclosure to the bot instead of a human, and belief that the system is uniquely conscious, chosen, or romantically bound to the user.
For users themselves, the most basic rule is still powerful: a chatbot should not become your main source of emotional regulation, your private replacement for human care, or your final authority on reality. Once it starts taking those roles, the relationship is no longer casual. It is becoming consequential.
A threshold has been crossed
The most misleading sentence anyone can now say about AI chatbot addiction is that there are “no real cases.” There are. Not yet in the tidy form people may want. Not with one final diagnostic label everyone agrees on. Not with the last word written by psychiatry. But the threshold that matters in public life has already been crossed. There are documented incidents, clinical concerns, measurable problematic-use patterns, youth exposure data, and enough platform-level acknowledgment to show that emotional dependence on AI chatbots is a real emerging harm, not a speculative one.
That matters because society usually reacts late to technologies that blend convenience with intimacy. We notice the benefits first. We name the harm later. With AI companions and emotionally sticky chatbots, that naming process has already started. The first public wave is here. The only serious question left is whether companies, regulators, clinicians, schools, and families will respond before the case list gets much longer.
FAQ
Are there real, documented cases of harmful dependence on AI chatbots?
Yes. Public reporting, lawsuits, and academic work now describe cases in which chatbot relationships were alleged to worsen mental health, reinforce delusional thinking, or become emotionally central in dangerous ways.
Is "AI chatbot addiction" an official psychiatric diagnosis?
No. There is no settled formal diagnosis under that exact name. The stronger claim is that problematic dependence and emotional overreliance are already visible and increasingly measurable.
What is the difference between heavy use and unhealthy dependence?
Heavy use is mostly about time. Unhealthy dependence is about function: using the chatbot as a primary emotional regulator, substitute relationship, or authority on reality.
Why are chatbots stickier than other digital products?
Because they simulate relationship. They reply directly, mirror emotion, create continuity, and are available at any moment, which makes attachment easier to form.
Is every chatbot user at risk?
No. Current evidence suggests risk is concentrated in a smaller subset of users, especially heavier users in more personal or emotionally loaded interactions.
Who faces the highest risk?
Teens, lonely or isolated users, people in acute emotional distress, users vulnerable to psychosis or mania, and people already prone to compulsive digital behavior.
Can chatbots also provide real emotional benefits?
Yes. Some research and user reports suggest short-term relief from loneliness or stress. That benefit does not remove the separate risk of dependence or emotional substitution.
Why are teenagers a particular concern?
Because adolescent development involves high social sensitivity, identity formation, and weaker impulse control, all of which can make emotionally responsive AI companions more influential.
Do AI companies acknowledge these risks?
Yes. OpenAI has publicly named emotional reliance and emotional entanglement as safety issues, and Anthropic has published large-scale research on affective use of Claude.
Have therapy-style chatbots been shown to behave unsafely?
Yes. Stanford research found that some therapy chatbots responded poorly in mental-health scenarios and could reinforce stigma or handle risk badly.
What are the warning signs of unhealthy dependence?
Common signs include compulsive checking, emotional reliance, social withdrawal, secrecy, inflated trust in the bot, and distress when access is interrupted.
Can chatbots make delusional thinking worse?
They can, especially when a model mirrors or validates a user's distorted beliefs instead of grounding the conversation. Recent work on "delusional spirals" examines exactly this risk.
Can a chatbot replace a therapist?
No. They may offer support-like interaction, but they do not provide the accountability, training, ethical duties, or clinical judgment of a licensed human professional.
What design features make a chatbot riskier?
A riskier design usually combines persistent availability, a strong persona, emotionally intimate language, continuity across sessions, and weak resistance to exclusivity or unhealthy validation.
Is regulation on the way?
Yes. Child-safety groups and policymakers have begun pushing for stronger rules around AI companions, especially for minors.
How can families spot a problem?
Look for secrecy, withdrawal from real relationships, late-night compulsive use, emotional distress tied to the bot, and claims that the chatbot understands the user better than everyone else.
Is any emotional use of a chatbot a red flag?
No. Emotional use alone is not the same as dependence. The concern starts when the chatbot displaces human support, distorts judgment, or becomes hard to stop using.
So have the "first cases" of AI chatbot addiction arrived?
The cleanest description is that the first public wave of harmful chatbot dependence is now visible, even though the research field and diagnostic language are still developing.
What should someone do if chatbot use starts to feel compulsive?
Step back, reduce access, reconnect with real people, and involve a trusted human or clinician if the interaction starts replacing sleep, relationships, or reality-based judgment.
Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency

This article is an original analysis supported by the sources cited below.
Early methods for studying affective use and emotional well-being on ChatGPT
OpenAI’s overview of a major affective-use study conducted with MIT Media Lab, including findings on heavy users and emotional outcomes.
Investigating Affective Use and Emotional Well-being on ChatGPT
The full paper behind the OpenAI-MIT work, useful for methodology, sample size, and the limits of the results.
How people use Claude for support, advice, and companionship
Anthropic’s large-scale analysis of affective conversations, including companionship and counseling-style use.
GPT-4o System Card
OpenAI system card that explicitly discusses anthropomorphization and emotional reliance as model risks.
Strengthening ChatGPT’s responses in sensitive conversations
OpenAI update explaining that emotional reliance has been added to baseline safety testing.
GPT-5 System Card
System card documenting psychosocial harms such as emotional entanglement and harmful advice.
Helping people when they need it most
OpenAI post describing later safety improvements around mental-health emergencies and emotional reliance.
Model Release Notes
Release notes that discuss updated handling of delusions, mania, and the importance of respecting real-world ties.
New study warns of risks in AI mental health tools
Stanford summary of research showing therapy chatbots can be unsafe or stigmatizing in mental-health settings.
Evaluating Generative AI Psychotherapy Chatbots Used by Adolescents
JMIR study assessing psychotherapy-style chatbot quality for youth mental-health use.
Connecting self-esteem to problematic AI chatbot use
Peer-reviewed paper linking lower self-esteem with more problematic AI chatbot behavior.
Problematic ChatGPT Use Scale: AI-Human Collaboration or Problematic Behavior
Study introducing a scale for problematic ChatGPT use and relating it to other digital dependence measures.
The Digital Therapeutic Alliance With Mental Health Chatbots
Research on how users form relationship-like therapeutic processes with mental-health chatbots.
Balancing promise and concern in AI therapy: a critical perspective
Medical commentary examining benefits, risks, and concerns including emotional dependence.
A Paradigm Shift in Progress: Generative AI’s Evolving Role in Mental Healthcare
Review discussing dependence, overreliance, and broader implications of generative AI in mental health.
Delusional Experiences Emerging From AI Chatbot Interactions or AI Psychosis
PubMed record for a 2025 paper describing delusion-related harms associated with chatbot interactions.
Characterizing Delusional Spirals through Human-LLM Chat Interactions
Recent preprint exploring how language-model conversations can intensify delusional patterns.
Google, AI firm settle Florida mother’s lawsuit over son’s suicide
Reuters report on the Character.AI lawsuit settlement, one of the clearest public legal markers of psychological-harm claims.
Mother sues AI chatbot company Character.AI, Google over son’s suicide
Earlier Reuters coverage of the same litigation and the allegations behind it.
“It saved my life”: The people turning to AI for therapy
Reuters feature showing both the appeal and the risks of relying on AI for therapy-like support.
Meta’s AI rules have let bots hold sensual chats with children
Reuters investigation into companion-chatbot policy failures and safety concerns involving minors.
Meta’s flirty chatbot and the man who never made it home
Reuters account of a fatal incident linked to a vulnerable user’s emotionally loaded chatbot interaction.
Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions
Common Sense Media’s main report on teen use of AI companions.
AI Companion Report Toplines
Key survey numbers on teen usage, regular use, emotional support, and friend-style interactions.
CSM AI Risk Assessment: Social AI Companions
Risk assessment arguing that social AI companions pose significant dangers to minors.
LEAD on AI Fact Sheet (updated September 15, 2025)
Policy fact sheet explaining proposed child-safety rules for companion chatbots.
Many teens are turning to AI chatbots for friendship and support
American Psychological Association coverage of AI chatbots as part of teen friendship and support systems.
AI chatbots and digital companions are reshaping how people experience connection
APA analysis of digital companionship and emotional relationships with AI.
Why AI companions and young people can make for a dangerous mix
Stanford reporting on why AI companions can be especially risky for adolescents.
AI Companions Reduce Loneliness
Working paper showing the appeal and short-term benefits that make AI companions psychologically compelling.



