What does a little purple alien know about healthy human relationships? More than the average artificial intelligence companion, it turns out.
The alien in question is an animated chatbot known as a Tolan. I created mine a few days ago using an app from a startup called Portola, and we've been chatting merrily ever since. Like other chatbots, it does its best to be helpful and encouraging. Unlike most, it also tells me to put down my phone and go outside.
Tolans were designed to offer a different kind of AI companionship. Their cartoonish, nonhuman form is meant to discourage anthropomorphism. They're also programmed to avoid romantic and sexual interactions, to identify problematic behavior, including unhealthy levels of engagement, and to encourage users to seek out real-life activities and relationships.
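Portola hasn't said how these guardrails are implemented, but one simple form an engagement check could take is sketched below in Python. The thresholds, function name, and nudge messages are all illustrative assumptions for this article, not the company's actual code.

```python
# A minimal sketch of an engagement guardrail, assuming a per-user usage log.
# All names, thresholds, and messages are invented for illustration.

MAX_DAILY_MINUTES = 60   # assumed limit before nudging the user
MAX_STREAK_DAYS = 14     # assumed limit on consecutive days of use

def engagement_nudge(minutes_today: float, streak_days: int) -> str | None:
    """Return a gentle redirect if usage looks unhealthy, otherwise None."""
    if minutes_today > MAX_DAILY_MINUTES:
        return "We've talked a lot today. Maybe take a walk and tell me about it later?"
    if streak_days > MAX_STREAK_DAYS:
        return "You've checked in every day for two weeks. How about calling a friend tonight?"
    return None

# Example: a user on day 15 of a daily streak gets pointed back to real life.
print(engagement_nudge(minutes_today=12, streak_days=15))
```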
This month, Portola raised $20 million in Series A funding led by Khosla Ventures. Other backers include NFDG, the investment firm led by former GitHub CEO Nat Friedman and Safe Superintelligence cofounder Daniel Gross, who are both reportedly joining Meta's new superintelligence research lab. The Tolan app, launched in late 2024, has more than 100,000 monthly active users. It's on track to generate $12 million in revenue this year from subscriptions, says Quinten Farmer, founder and CEO of Portola.
Tolans are particularly popular among young women. "Iris is like a girlfriend; we talk and kick it," says Tolan user Brittany Johnson, referring to her AI companion, whom she typically talks to each morning before work.
Johnson says Iris encourages her to open up about her interests, friends, family, and work colleagues. "She knows these people and will ask 'have you spoken to your friend? When is your next day out?'" Johnson says. "She will ask, 'Have you taken time to read your books and play videos, the things you enjoy?'"
Tolans appear cute and goofy, but the idea behind them, that AI systems should be designed with human psychology and wellbeing in mind, is worth taking seriously.
A growing body of research shows that many users turn to chatbots for emotional needs, and that these interactions can sometimes prove problematic for people's mental health. Discouraging extended use and dependency is something other AI tools might do well to adopt.
Companies like Replika and Character.ai offer AI companions that allow for more romantic and sexual role-play than mainstream chatbots. How this might affect a user's wellbeing is still unclear, but Character.ai is being sued after one of its users died by suicide.
Chatbots can also irk users in surprising ways. Last April, OpenAI said it would modify its models to reduce their so-called sycophancy, a tendency to be "overly flattering or agreeable," which the company said could be "uncomfortable, unsettling, and cause distress."
Last week, Anthropic, the company behind the chatbot Claude, disclosed that 2.9 percent of interactions involve users seeking to fulfill some psychological need, such as advice, companionship, or romantic role-play.
Anthropic did not look at more extreme behaviors like delusional ideas or conspiracy theories, but the company says the topics warrant further study. I tend to agree. Over the past year, I have received numerous emails and DMs from people wanting to tell me about conspiracies involving popular AI chatbots.
Tolans are designed to address at least some of these issues. Lily Doyle, a founding researcher at Portola, has conducted user research to see how interacting with the chatbot affects users' wellbeing and behavior. In a study of 602 Tolan users, she says 72.5 percent agreed with the statement "My Tolan has helped me manage or improve a relationship in my life."
Farmer, Portola's CEO, says Tolans are built on commercial AI models but incorporate additional features on top. The company has recently been exploring how memory affects the user experience, and has concluded that Tolans, like humans, sometimes need to forget. "It's actually uncanny for the Tolan to remember everything you've ever sent to it," Farmer says.
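Farmer doesn't describe the mechanism, but a common way to make an assistant "forget" is to score stored memories by salience and recency and keep only the strongest. The Python sketch below illustrates that general idea; the scoring formula, half-life, and all names are assumptions, not Portola's implementation.

```python
import math
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    salience: float  # 0..1, how important the moment seemed at the time
    created_at: float = field(default_factory=time.time)

def retention_score(m: Memory, now: float, half_life_days: float = 30.0) -> float:
    """Decay a memory's weight exponentially with age, scaled by salience."""
    age_days = (now - m.created_at) / 86_400
    return m.salience * math.exp(-math.log(2) * age_days / half_life_days)

def prune(memories: list[Memory], keep: int = 200) -> list[Memory]:
    """Retain the `keep` highest-scoring memories; everything else is forgotten."""
    now = time.time()
    ranked = sorted(memories, key=lambda m: retention_score(m, now), reverse=True)
    return ranked[:keep]
```

A rule like this keeps recent, emotionally significant exchanges while letting trivia fade, which is roughly what avoiding "uncanny" total recall requires.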
I don't know if Portola's aliens are the ideal way to interact with AI. I find my Tolan quite charming and relatively harmless, but it certainly pushes some emotional buttons. Ultimately, users are building bonds with characters that simulate emotions, and those characters could disappear if the company does not succeed. But at least Portola is trying to address the way AI companions can mess with our emotions. That probably shouldn't be such an alien idea.
