
Is It the Government’s Job to Make Sure Chatbots Are Safe for Kids?

Jennifer Huddleston and Christopher Gardner

Over the past year, several heartbreaking accounts have emerged of young people who have had tragic and damaging experiences with AI chatbots. Whether the concern is AI conversations leading young people into overly sexual content or young users who have taken their own lives, both parents and policymakers are asking whether these tools are safe for kids and teens.

These are important and difficult questions about child development and technology, and they raise the equally important question of whether parents or government intervention should decide if AI chatbots are safe for kids and teens. As with many youth safety debates around technology, even the framing of the question is not easily agreed upon. The reality is that different risk preferences and cultural values will produce different concerns. Consequently, the one-size-fits-all approach of most significant government interventions not only carries additional trade-offs but can also fail to address the specifics of individual needs and concerns.

Understanding the Concerns About Kids, Teens, and Technology

AI chatbots are not the first new technology to raise concerns about safety for children and teens. Before concerns about “AI psychosis,” parents worried about what everything from dime-store novels and comic books to video games and the internet was doing to the next generation’s morals and mental health. This is not to say that concerns about an individual’s media consumption are always misguided but rather that, at a societal level, these concerns are often far more nuanced and complicated than headlines or politicians may acknowledge.

The most recent example of such a panic surrounds the internet. The 1990s and 2000s were filled with concerns about the harmful content that children and teens might encounter online. Such concerns have continued with the rise of social media. Of course, inappropriate content was and remains available online; the solutions, however, came not from regulators but from industry members and independent actors who built tools to handle a range of concerns and parenting choices. Yes, children can and still do access problematic content online. But parents have tools at every level of the internet—from their internet service provider to individual controls on specific apps—that help address individual concerns and preferences.

From relatively early on, the market responded with various parental control tools to empower users and parents to respond to the unique concerns and values of their households. In fact, the emergence of such tools was considered a less restrictive alternative to government regulation when potential child online safety laws were enjoined or struck down in court challenges on First Amendment grounds.

Of course, some concerns about AI chatbot interactions are quite distinct. For example, questions abound about how a child’s interactions with a chatbot might differ developmentally from interactions with a human and whether the child would understand the difference. Culturally sensitive questions might vary from family to family, such as what an ideal response from an AI chatbot would be on topics of culture, faith, or sexuality. Young users may already or soon turn to chatbots to ask questions that they are not yet comfortable asking their parents or to challenge their family norms. For example, should a chatbot have to ask how old a user is before answering questions about whether Santa Claus is real? What about when the user asks who Jesus is or why their friend has two daddies?

When determining what information is or is not appropriate, it is easy to arrive at gray areas. Outlawing responses on certain types of topics, such as self-harm, drug use, or eating disorders, may seem benign or beneficial. That approach, however, could prevent directing someone to resources when they seek help on those topics. It could also restrict speech whose harm or benefit depends on circumstance. For example, responses to questions such as “How many calories should I eat?” or “Is this a healthy diet?” could be triggering for someone recovering from an eating disorder but helpful to someone trying to start healthy habits. The same is true for any array of topics for which the internet or AI could be both helpful and harmful depending on the particular vulnerabilities of an individual.

Some of the questions about young people and chatbots are tied to broader discourse around issues such as privacy. Many individuals of all ages wonder about data privacy in the age of AI and how private and government actors might use or abuse the data that is both publicly available and collected in interactions with large language models. Some specifically question how much data an AI system may collect on young users, whether to determine their age by analyzing chats, biometrics, or behavior, or more generally through the interactions young users have with these systems, and whether young users can meaningfully consent to that collection.

Drawing on the experience of prior general-purpose technologies may provide a road map for what the government’s role should be in protecting children and what is better left to parents, particularly given the diversity of values and concerns. For example, the Children’s Online Privacy Protection Act, or COPPA, provides regulation and enforcement on the use of data from children under 13 years old, recognizing that such children have different vulnerabilities than older teen and adult users. Similarly, laws around child sexual abuse material and other illegal content already exist and could be clarified to apply to the creation of such images with AI. Notably, however, these laws do not regulate the overall speech or information available but rather are tied to specific harms and vulnerabilities.

What Is Currently Happening with Policy Around Kids and Chatbots?

Wanting to keep children and teens safe is a powerful motivation for many policymakers. Unsurprisingly, in response to both parental concerns and heartbreaking stories, some policymakers at all levels of government have proposed regulation that would restrict kids’ and teens’ use of chatbots. However, similar to many existing youth online safety regulations, these bills could have negative impacts on the privacy rights of those they are designed to protect by requiring increased data collection, including more specific and sensitive data. The more zealous approaches that require invasive verification steps or ban the use of chatbots by young users could make overall access more difficult, thus limiting the beneficial uses of the technology by young people and adults alike.

At a federal level, while Congress has largely focused on a light-touch approach to AI, recent high-profile lawsuits against large chatbot providers regarding tragic interactions with young people appear to have spurred regulatory proposals. The AWARE Act, proposed by Representative Erin Houchin (R‑IN), would instruct federal agencies to develop and deploy education resources for parents and children on the safe use of AI. Because it focuses on education first and creates ways to empower users and parents to find the best solutions for their values, such an approach is less likely to run afoul of speech or privacy concerns.

But other proposals would place regulatory restrictions that could prevent the use of beneficial AI and even render young users vulnerable in other ways. These proposals do not focus on educating and empowering users but rather on restricting the use of a technology because of its potential risks. They usually require additional collection of IDs and biometrics from all users. For example, Senator Jon Husted’s (R‑OH) CHAT Act represents an attempt to enforce a static, one-size-fits-all approach to online safety by requiring strict age verification for all AI chatbots. More recently, Senators Josh Hawley (R‑MO) and Richard Blumenthal (D‑CT) have continued to push for a far more aggressive approach to preventing young people from using chatbots with their latest bill. The core of this bill is a new age-verification requirement for all AI companies and a ban on AI companions for minors. It also introduces new disclosure requirements and criminal penalties for companies whose AI companions encourage suicide or sexually explicit conduct. Such an approach, however, could extend to many of the “chatbots” a child or teen might encounter in day-to-day life, such as the customer service bot used to make an online return or the smart speaker asked to tell jokes or read a story aloud.

While well-intentioned, this bill presents significant concerns when it comes to speech and privacy online, particularly through its age-verification requirements. Such proposals affect not only kids and teens but adults too. The only way for companies to verify that someone is not under the age of 18 is ultimately to verify all users. This means companies would be responsible for collecting, and in some cases maintaining, their users’ photo IDs, biometrics, or other similar records. This type of regulation could put at risk the privacy of millions of Americans, including the young users it is meant to protect, by requiring them to upload sensitive personally identifiable information to prove their age. For example, Discord suffered a breach of some 70,000 IDs that were collected in compliance with a UK age-verification law.

State legislatures have also considered significant regulations when it comes to AI chatbots. Most states have chosen to follow the light-touch approach of the federal government and either hold off on regulating AI or introduce limited requirements like Maine’s disclosure mandate. Such laws may raise questions about when they apply or how the required disclosures must be made. In general, however, they are less likely to raise the more significant speech and privacy concerns of the more invasive laws seen in other state proposals.

Some states, including California, Illinois, and North Carolina, have instead chosen to take a more assertive approach to regulating AI chatbots. California’s SB 243, effective at the start of 2026, would institute two core requirements for the operators of chatbots. The first involves developing and publicly disclosing protocols to prevent the production of content related to suicidal ideation, suicide, or self-harm. The second requires operators to track and report to the California state government both on those protocols and on instances of referrals to crisis service providers. Illinois’ HB 1806, effective August 1, 2025, attempts a more surgical approach by governing the use of AI specifically in the context of mental health. However, its overly broad restrictions on the provision of therapy by AI chatbots have raised significant enforcement concerns. North Carolina’s SB 624 is currently pending and is the most extreme of the three. The bill spans nine pages and introduces an array of new requirements, including licensing restrictions, government oversight, and a “duty of loyalty.” Such legislation raises both the speech and privacy concerns seen in state youth online safety legislation and the prospect of a patchwork of state-level AI regulation that could deter beneficial development of the technology.

What Is Currently Happening in Industry and Civil Society Around Kids and Chatbots?

Consumer generative AI and chatbots are still relatively new, and many of the tools that let parents and users set preferences around specific sensitivities are still evolving. Even so, industry and civil society do seem to be responding to these concerns. After all, businesses have an incentive to build the features customers demand so that their products can be used in positive and beneficial ways.

Some of the most popular services are developing product-specific solutions. For example, ChatGPT introduced parental controls that allow parents to link accounts, set time limits, and restrict certain types of content. Meta has created control options that allow parents to go as far as disabling one-to-one chats with AI characters in its apps.

Some existing tools may also help parents address these concerns. For example, many devices offer parents the option to set screen-time limits for the device as a whole or for specific apps. Similarly, existing parental controls allow parents to monitor the sites or apps that young people are using. While some worry that parents may abuse these tools or that older teens will find ways around them, policy mandates would not resolve those problems either.

What is needed most are conversations with young people about AI chatbots and technology use more generally. It falls to parents to have difficult conversations about what a child is doing on a certain app, why they are drawn to it, and why they are spending so much time on their device. Here, civil society and policymakers can offer tools to help parents navigate these conversations while still allowing them to find the solutions that work best for each child and family. In one example, the Family Online Safety Institute offers a series of conversation starters for parents to talk about generative AI with their children.

As with other technologies before, we will likely see a continued evolution of these solutions, both from industry itself and from outside sources. Most likely, best practices will emerge for common scenarios and concerns while still allowing options that serve communities with different needs or values.

Conclusion

Young people are often the earliest adopters of new technologies. This creates new fears and concerns, both from the rapid pace of change and from legitimate safety risks. As new technologies emerge, many innovators and civil society groups are stepping up to provide parents and users with controls that can help them navigate certain risks. The ideal solution to these concerns will look different for every family and child.

It is tragic that some young people have had experiences with chatbots that have led to serious harm or even loss of life. However, policy solutions such as youth online safety laws will not remove this risk; instead, they would create a host of new problems and concerns for both adult and young users.