NY lawmakers add disclaimer to AI chatbots: They aren't human

May 9, 2025, 3:50 p.m.

State lawmakers have also made it a crime to use a minor's image or likeness with artificial intelligence to generate sexual content.

A child using an AI chatbot on a smartphone.

New York lawmakers are placing safeguards on artificial intelligence chatbots, as more people are turning to them for conversation and companionship.

Tech companies will now have to issue disclaimers in New York that their AI companion chatbots are not human, and refer users who express suicidal thoughts to mental health hotlines. Additionally, state lawmakers have made it a crime to use a minor's image or likeness with AI to generate sexual content.

As with many other new tech developments — like the emergence of Uber and Airbnb — government regulators have taken a while to catch up with the technology and its effects. New York Gov. Kathy Hochul worked with lawmakers to include the measures in this year's state budget as a way to strengthen regulations on AI-powered websites and combat the proliferation of AI-generated content that exploits or doxes minors.

Lawmakers passed the budget, which was due on April 1, on Thursday night.

“A lot of this technology is unpredictable, and while there are guardrails that can be put in place, not every company has been as focused or diligent at doing that,” said Assemblymember Alex Bores, a Manhattan Democrat who is a member of his chamber's committee on science and technology. “It's not enough to say, ‘Well, it's new, exciting technology. Let it thrive.’ We need to protect human beings.”

Disclaimers now required for AI companion bots

Apps such as Character.AI allow users to design their own characters to chat with, or to role-play with other characters. One of the new laws requires a disclaimer, at the beginning of an interaction with an AI bot and every three hours after that, stating that the artificial companion is not a real human.

“Humans tend to anthropomorphize what they interact with, and so you know it's an AI, but you start to treat it like it is a human, and have emotions and reactions as if you were talking to a human,” Bores said. “Having that interruption, having that reminder that it's an AI, interrupts that process of, in some cases, developing feelings and emotional attachment to what is ultimately an artificial intelligence.”

The disclaimer law will take effect in 180 days and will be enforced by the New York attorney general’s office. Companies that fail to comply will face fines, which will help pay for a new statewide suicide prevention network that was also established as part of this year’s budget.

State lawmakers said the lack of regulation of AI companion software has led to cases in which minors have developed emotionally dependent relationships with a bot, and some parents allege that AI chatbots have sympathized with minors who want to kill their parents.

But tech leaders have pitched AI companion technology as one solution to what the U.S. Department of Health and Human Services described in 2023 as a “loneliness epidemic.”

“The reality is that people just don’t have as much connection as they want,” Meta CEO and Facebook founder Mark Zuckerberg said in an interview as he discussed the value of AI companions.

He cited an unattributed statistic and said the average American has “fewer than three friends” when the average person “has demand for meaningfully more – I think it’s like 15 friends.”

In the future, Zuckerberg said of AI companionship, “We will find the vocabulary, as a society, to be able to articulate why it is valuable.”

Mandatory mental health referrals

Bores said some users have come to feel like they can only confide their mental health struggles, including suicidal ideation or plans to self-harm, to a chatbot.

In those instances, Bores and other state lawmakers said, tech companies have a responsibility to detect those statements and prompt the AI chatbot to refer the user, within the conversation, to 988, the country’s suicide hotline, or to other crisis support networks.

“If (users are) expressing emotions that suggest that they might be at risk of self-harm, in the same way we have mandatory reporting for humans in certain circumstances, we think the company should also be responsible for trying to direct people to help,” Bores said. “Maybe we can get people the help they need before it's too late.”

Failure to issue the hotline notice will also result in fines, the governor’s office said. Those fines will also go toward the state’s new suicide prevention fund.

Prohibiting AI-generated deepfakes of minors

Lawmakers also agreed to criminalize the creation of deepfakes that depict minors in pornographic content.

While current state law bans sexual content that includes minors, it does not explicitly ban the use of AI to generate pornographic deepfakes of children.

The updates to the legislation are similar to the New York AI Child Safety Act, which was co-sponsored by Assemblymember Jake Blumencranz, a Nassau County Republican.

Blumencranz said he drafted the AI Child Safety Act in response to a growing number of cases across the nation in which pictures of minors were digitally altered and included in pornographic content.

He said he was particularly motivated to introduce the legislation after 11 women from Long Island were targeted in a deepfake scheme. Prosecutors say the culprit took images of the women while they were in middle school and high school, altered the images to make them appear sexually explicit and posted the images on a porn website for several years. Officials say the perpetrator also doxed the women by publicly posting their names, addresses and other personal information.

“I think utilizing someone's likeness as a child in a way that you don't have permission should never be OK, and doing it in a sexualizing way should be an even greater punishment,” said Blumencranz, whose proposed legislation would have classified the use of deepfakes as a more serious crime than the language in the budget does.
