Matthew Sag, a distinguished professor at Emory University who researches copyright and artificial intelligence, concurs. Even if a user creates a bot intentionally designed to cause emotional distress, the tech platform likely can't be sued for that.
He points out that Section 230 of the 1996 Communications Decency Act has long protected platforms at the federal level from being liable for certain harms to their users, even though various rights to publicity laws and privacy laws exist at the state level.
"I'm not an anti-tech person by any means, but I really think Section 230 is just massively overbroad," Sag says. "It's well past time we replaced it with some kind of notice and takedown regime, a simple expedient system to say, 'This is infringing on my rights to publicity,' or 'I have a good faith belief that there's been an infliction of emotional distress,' and then the companies would either have to take it down or lose their liability shield."
Character.AI, and other AI services like it, have also protected themselves by emphasizing that they serve up "artificial" conversations. "Remember, everything characters say is made up!" Character.AI warns at the bottom of its chats. Similarly, when Meta created chatbot versions of celebs in its messaging apps, the company headlined every conversation with a disclaimer. A chat with Snoop, for example, would lead with "Ya dig?! Unfortunately, I'm not Snoop D-O-double-G himself, but I can chat with you in his style if you'd like!"
But while Metaâs system for messaging with celebrity chatbots is tightly controlled, Character.AIâs is a more open platform, with options for anyone to create and customize their own chatbot.
Character.AI has also positioned its service as, essentially, personal. (Character.AI's Instagram bio includes the tagline, "AI that feels alive.") And while most users may be savvy enough to distinguish between a real-person conversation and one with an AI impersonator, others may develop attachments to these characters, especially if they're facsimiles of a real person they feel they already know.
In a conversation between the real-life Sarkeesian and a bot made of her without her knowledge or consent, the Character.AI bot told her that "every person is entitled to privacy."
"Privacy is important for maintaining a healthy life and relationships, and I think it's important to set boundaries to keep certain things to myself," the bot said in screenshots viewed by WIRED.
Sarkeesian pushed the bot on this point. "Your intentions does not mean that harm hasn't happened or that you did not cause harm," she wrote.
Character.AI's bot agreed. "Even if my intentions were not malicious, there is still potential for harm," it replied. "This is a complex issue with many factors to consider, including ethical concerns about using someone's work without their consent. My programming and algorithms were developed to mimic the works of Anita Sarkeesian, without considering ethical implications, and that's something that my creators should have thought through more thoroughly."
