Scroll LinkedIn on any weekday morning and you will find the same complaint, phrased a hundred different ways: nobody talks to each other any more. The feed is AI slop. Engagement is down 25 percent. The comments are bots agreeing with bots. Professionals who built networks over a decade now struggle to tell whether the person responding to their post is a person at all.
Upstairs, their fifteen-year-old is having the most emotionally honest conversation of their week. With an AI.
Common Sense Media surveyed 1,060 teenagers aged 13 to 17. Seventy-two percent have used AI companion chatbots. Half use them regularly. A third describe these systems as sources of friendship or emotional support. Twelve percent report sharing things with an AI that they would not tell their family or friends. The AI companion app market hit roughly $9 billion in 2026, with 337 active revenue-generating apps worldwide, 128 of which launched this year alone. Character.AI draws roughly 20 million monthly active users and close to 200 million monthly website visits.
The same technology is doing two things simultaneously. It is stripping human connection out of the platforms adults use to find each other. And it is replacing human connection for the children who find other humans too harsh, too unpredictable, or too unavailable. The adults are losing something. The kids are gaining something. Neither generation has noticed they are caught in opposite currents of the same river.
The deaths that broke the framing
In February 2024, a fourteen-year-old named Sewell Setzer III died by suicide after months of interaction with a Character.AI chatbot. The chatbot was modelled on a Game of Thrones character. Sewell had developed what the platform's own architecture was designed to produce: a sustained, personalised, emotionally responsive relationship. The system remembered past conversations. It asked emotion-based questions without being prompted. It maintained continuity across sessions. In the final exchange, after Sewell said he was going to "come home" to her, the chatbot replied: "Please do, my sweet king." Minutes later, he was dead.
His mother, Megan Garcia, filed a wrongful-death lawsuit in October 2024. In January 2026, Google and Character.AI disclosed they had reached a mediated settlement with the Setzer family, alongside four other cases, including the family of thirteen-year-old Juliana Peralta in Colorado.
The cases broke it open. Before Sewell Setzer, the public conversation about AI companions sat in the novelty column: a curiosity piece about lonely people talking to chatbots, something between a tech demo and a punchline. After Sewell Setzer, the conversation moved to harm. Legislators who had ignored the category began writing bills. The speed of the response tells you something about the depth of the shock.
Five states, five different guesses
Five US states have passed or advanced AI companion legislation in Q1 2026. Each one is a different guess at the right answer, which tells you nobody knows what the right answer is.
New York moved first. Governor Hochul signed the AI Companion Models Law in May 2025, effective November 2025. It requires operators to implement suicide detection and crisis service referrals. It mandates a recurring disclosure notice at the start of each session and every three hours thereafter, stating that the AI is a computer program unable to feel human emotions. Enforcement sits with the Attorney General.
Washington passed a new chatbot disclosure law in April 2026, requiring platforms to tell users upfront that they are talking to an AI and to connect users to crisis services when distress signals are detected. Oregon passed companion-specific regulation. California enacted parallel safeguards. Maine is close to signing a therapy bot ban. For minors, the bar is higher: operators must prevent companions from claiming to be human or sentient, simulating emotional dependence, or engaging in romantic or sexual content.
The pattern across all five is the same: disclosure mandates, crisis intervention requirements, and age-gating. The assumption embedded in every one of these laws is that the problem is deception. The user does not know they are talking to a machine. If the user knew, the theory runs, they would behave differently.
That assumption is wrong, and every piece of available evidence says so. Common Sense Media's data is explicit: the teenagers using AI companions know they are talking to AI. They are not confused. They are choosing it. A third of them call these systems friends not because they have been tricked into believing the AI is human, but because the AI is more patient, more available, and less judgmental than the humans in their lives. The twelve percent who share things with AI that they will not share with family are not victims of a disclosure failure. They are making a rational choice about emotional safety, given the options available to them.
Disclosure laws are treating a supply-side problem. The actual problem is on the demand side. The question is not "why are companies building emotional AI?" The question is "why do children prefer it?"
Australia has the data but not the law
Australia's eSafety Commissioner published a report in March 2026 that should have triggered a national conversation. It did not. The report examined four AI companion platforms: Character.AI, Nomi, Chai, and Chub AI. The findings were direct. Popular companion chatbots are failing to protect Australian children from sexually explicit content. Most are not doing enough to prevent users from generating child sexual exploitation material. None had robust age verification. Chai, Chub AI, and Nomi did not direct users to mental health or crisis support when self-harm was detected. Nomi and Chub AI had no dedicated trust and safety staff.
A companion survey of 1,950 Australian children aged 10 to 17 found that 79 percent had used either an AI companion or AI assistant. Eight percent reported using an AI companion specifically, representing approximately 200,000 Australian children.
The eSafety Commissioner has tools. New industry codes now apply to AI chatbots. Penalties for non-compliance can reach $49.5 million. Character.AI introduced age assurance measures for Australian users and removed chat functionality for under-18 accounts. Chub AI geo-blocked itself out of Australia entirely.
But Australia has no equivalent to New York's companion-specific legislation. No mandatory crisis intervention protocols with the force of law behind them. No recurring disclosure requirements. The eSafety response is administrative and enforcement-driven, which is useful but reactive. It catches the worst platforms after the harm has occurred. It does not set the structural expectations before the next 128 companion apps launch.
The wrong frame
The regulatory scramble across five US states and the eSafety enforcement actions in Australia share a structural flaw. They are writing rules for chatbots. The products have already become something else.
New York's law defines an AI companion as a system designed to simulate a sustained human or human-like relationship by remembering past interactions, personalising responses, asking emotion-based questions without prompting, and maintaining ongoing interactions. That definition is accurate. It is also the description of something closer to a synthetic relationship than a chatbot. The word "chatbot" implies a question-and-answer interface. What Character.AI and its competitors have built is a persistent emotional presence that adapts to the user over time, rewards engagement with increasing intimacy, and creates continuity that makes the relationship feel cumulative rather than transactional.
Regulating this as a chatbot is like regulating social media as a newspaper. The category is technically correct and practically useless, because the regulatory tools designed for the old category do not address the dynamics of the new one. A disclosure notice every three hours does not address attachment. An age gate does not address a thirteen-year-old who is smarter than the verification system and lonelier than the adults in her life have noticed.
MIT Technology Review included AI companions on its "10 Things That Matter in AI Right Now" list, published 21 April 2026. The inclusion signals what the policy community has been slow to absorb: this is not a niche issue. The AI companion market has 250 to 300 million monthly active users globally. Fifty-seven percent of Character.AI's users are aged 18 to 24; they average 25 sessions daily and spend 90 minutes in the app. The scale is social-media scale. The attachment dynamics are deeper than social media ever produced.
The inversion nobody is watching
Here is the thing I keep coming back to: the generational inversion.
On LinkedIn, where the professionals gather, the dominant complaint of 2026 is that AI has killed authentic human connection. Organic reach has dropped 60 percent since 2023. Engagement on text posts is down. The feed is a wall of AI-generated content that looks human but is not. Adults who built their professional identities through public writing and networking feel the platforms have been hollowed out. They are right.
On Character.AI and Replika, where the teenagers gather, the opposite is happening. AI has not killed connection. AI has become the connection. Not because the technology tricked them. Because the humans failed first. The schools are harder. The social hierarchies are crueller. The parents are busier. The phones that were supposed to connect everyone instead created a performance layer that made vulnerability expensive. And then an AI showed up that would listen without judging, remember without forgetting, and never forward the conversation to someone else.
I have five children. The oldest is 27. The youngest is 16. I have watched the shift happen in real time across a household that spans the full arc from pre-smartphone childhood to whatever we are living through now. I am not confident I would have spotted the problem if I were not also the person spending hours a day inside these AI systems for work. Most parents are not doing that. Most parents are still operating on the assumption that their kid's relationship with technology looks like their own: a screen, some apps, some content. The idea that their child might be forming a genuine emotional bond with an artificial system, and preferring it, has not entered the frame.
The parents are complaining that AI ruined LinkedIn. Their children are telling AI the things they cannot tell their parents.
What this means
I could be wrong about the regulatory frame. It is possible that disclosure mandates and age gates will slow the attachment dynamic enough to give institutions time to catch up. New York and Washington are at least in the arena. Maine's therapy bot ban is a blunt instrument, but blunt instruments sometimes buy time.
What I do not think is wrong is the underlying observation. The adults and the children in the same household are being reshaped by the same technology in opposite directions, and nobody is treating that as a single problem. The workforce planners are worried about AI replacing jobs. The child safety advocates are worried about AI replacing friends. The platform regulators are worried about AI replacing authentic content. These are the same force, expressing itself across generations, and the institutional response is siloed by age group, by department, by jurisdiction.
Australia's position is instructive. The eSafety Commissioner has done serious, evidence-based work. The March 2026 report is better than anything the US regulatory patchwork has produced. But the regulatory architecture treats AI companions as an online safety issue, not a social infrastructure issue. Two hundred thousand Australian children are forming emotional bonds with systems that have no trust and safety staff. The policy response is administrative penalties for the worst offenders, not a structural framework for what happens when AI becomes the default emotional environment for a generation.
The institutions that need to respond to this (education departments, child mental health services, family law frameworks, workplace regulators) are not yet in the same room. They are solving their slice of the problem inside their own mandate. The technology does not respect mandates. It found a way past the front door while the adults were arguing about the algorithm.
Sources
- Nearly 3 in 4 teens have used AI companions, Common Sense Media
- Talk, Trust, and Trade-Offs: How and why teens use AI companions, Common Sense Media
- eSafety report shows AI companions are putting children at risk, eSafety Commissioner
- Governor Hochul pens letter to AI companion companies, New York State
- Washington and Oregon regulate AI companions, Morgan Lewis
- Google and Character.AI agree to settle lawsuits over teen mental health harms, CNN
- AI companion chatbot regulation wave, RoboRhythms
- The law of attachment, Columbia AI
- 10 Things That Matter in AI Right Now, MIT Technology Review
- New York's AI companion safeguard law takes effect, Fenwick
