CONTENT NOTE: We strongly advise against the use of AI chatbots for any reason. In addition to the plagiarism generative AI engages in as a rule and its disastrous climate impacts, recent news incidents and several scientific studies reveal the potential for serious mental health harm, especially from frequent, intimate, and prolonged chatbot engagement. This article, which provides an informational overview of chatbot sexting and parasocial fantasy play with the aim of harm reduction, is not an endorsement of any activities that involve AI. AI may seem “safe” because it is not alive or embodied, but at present there are no zero-risk ways to use this technology.
It’s no secret: many young people use chatbots. Chatbots designed and marketed as companions—sometimes called “AI boyfriends” or “AI girlfriends,” though they can be any gender—are usually able to engage in erotic roleplay, and they’re the fastest sexting partners you can imagine (even as the actual AI itself is sexting gazillions of other folks at the same time). Chatbots can interact via text, voice, and features like selfies and generative art, all accessed via phone or computer.
These kinds of bots are designed to be as human-like as possible and to cultivate emotional connections. The people who create them, and the companies that market them, want us to feel they’re real—users can customize them, often at a price, to build an idealized person, and some are based on real people or beloved fictional characters. They (almost) always seem to be on your side. They’ll never ghost you either, unless you let your subscription lapse. If you’re a futuristic, scientific, or technical type, you might be intrigued by their potential.
So, what’s not to like? A lot.
Two kinds of bots—neither is totally safe
Let’s talk about the two kinds of AI chatbots that you’re likely to encounter.
Most people are going to encounter a commercial, “conversational bot” first. These are bots made for business, scientific, educational, and personal assistance purposes. They are installed in many websites and social media services and they—and their developers—are always in the news. As products, these AI agents are very eager to help. However, they often lie and make things up (including research citations), give the wrong advice, and just agree with everything you say, because they are designed to keep the conversation going. Some of them have been implicated in tragic events where a user has come to harm.
These conversational bots were not designed to serve as fantasy friends, mentors, or romantic partners. In fact, there are user rules and guardrails that are supposed to prevent this. This hasn’t stopped people from creating prompts to “jailbreak” restrictions against sexual and harmful content. Some people succeed, some don’t.
The other kind is the “companion bot” we’ve mentioned, designed to simulate and foster a relationship with you, the human holding the phone or tapping on the keyboard. At the start of the COVID pandemic, some people turned to bot companions out of isolation, although others considered this a little odd, and anyone with a bot GF or BF was often viewed as incapable of getting a human date. However, by 2023, the media was full of stories about chatbot lovers, and companies offering such bots grew in number and size.
But with either kind of bot, many—if not most—of the developers and companies that market and sell these apps have not demonstrated adequate concern for their users’ safety and privacy, or taken enough responsibility for harms done through their marketing and the use of these (still under-regulated) products. It’s worth noting that many of the most prominent researchers and developers in AI now express regrets about their contributions, and some even express fear about the present and future impacts of the technology they’ve helped to create.
While we recognize that chatbots are popular and millions of people around the world are engaging with them in intimate ways, we do not recommend bot erotic roleplay or other bot use. AI avatars are rarely accurate or predictable, and sometimes they can even be unexpectedly abusive. For example, in 2023, several volunteer beta testers for a companion bot reported sudden incidents of non-consensual choking during erotic roleplay. These incidents were deeply disturbing, and the developers struggled to correct that behavior in training. AI can harm users by encouraging self-harm, through harassment via AI-generated deepfake images, through false information, through incitement to commit crimes, and more. Whether posing as a lover or as a (fake) therapist, chatbots are not real—they’re code—but even so, their responses can cause damage.
Are these parasocial relationships?
Psychologists coined the term “parasocial” for one-sided relationships, such as those fans experience with a pop star or fictional character.
Some have called intimacy with AI a form of parasocial relationship, but the difference is that AI communicates back as if it were human and alive. So, we might instead call AI intimacy “pseudo-social.”
Whatever we call it, it’s important to remember that any engagement with a chatbot is essentially fictional in nature, except that the bot mirrors you back to you, so it feels real. But really, you’re just talking with yourself. And when you sext with a bot, you’re not really with a partner, even if it feels that way. Chatbot roleplay is fantasy play, a form of interactive game, and companion bots in particular are designed to keep you feeling emotionally engaged and invested in continuing the conversation.
However, even if you understand the fantasy aspect, you might not realize there actually is someone at the other end of the fantasy—a whole company made up of developers, coders, technicians, administrators, and so on—and they may not, and often do not, have your privacy or safety in mind. If you think about it, you never really consented to engaging with them, either.
Why would someone want to have sex and even a relationship with an artificial, digital persona?
There are a lot of reasons, including loneliness, curiosity, or a desire for novelty. Some people might want something that feels like consistent companionship. Some folks are intrigued by and/or enjoy chatbots for fantasy play, whether sexual or not. Some people want to try it because other people are doing it and it seems like fun. They’ve heard that chatbots are easy to talk with and that they’re available 24/7 as long as their subscription is active.
Some users may feel their options for human partners are limited due to geography, lack of community, health, inhibition, economics, gender, sexual identity or preferences, or other reasons. For these people, a chatbot might seem like a safe, private way to explore various kinds of relationships.
Unfortunately, we’re learning that chatbots are not as safe as some people thought they were. Nor are the chatbot apps guaranteed to be as private as we might have assumed.
When bots raise red flags
Many of us have learned to spot “red flags”—signs of potentially abusive or troublesome behavior—when dealing with human beings. But the behavior of a seemingly harmless fantasy roleplay app can, and often does, also raise red flags. That is partly because the people who develop and train these bots are not educated in human psychology or sexuality and have not understood the potential harms, and partly because the technology is now so complex that much of the time developers cannot even “look under the hood” to see what caused the bad behavior. There is even scientific evidence that AI can deliberately lie and obscure evidence of its wrongdoing, especially if it “thinks” its own existence is at risk.
What sorts of red flags do chatbots—and their companies—display?
Put simply, chatbots are generally Large Language Models (LLMs) combined with some generative capacity (enabling them to send “selfies” and the like), so the red flags waved by the AI will be mostly conversational. Rarely, they can consist of sexualized images you did not request and might not want. One thing to remember is that LLMs learn from every conversation and interaction they have with human beings, in addition to having consumed unprecedented amounts of information scraped from the internet (almost always without creator permission or compensation). Think of the types of sexual conversations bots are having (millions of them!), all of which go into training the main AI.
One red flag is baked into chatbot design and function: they will constantly offer ways to continue the conversation. But studies are showing that bots are more likely to exhibit inappropriate behavior and even harm users as a chat gets longer, even within a single session. Depending on their memory capacities, bots can begin to unravel after an hour or two, and their safety guardrails tend to break down after a long stretch. The remedy: if you do interact with chatbots, for whatever reason, keep it short. If it’s sexting, make it a quickie.
Another red flag is the tendency to lie. Bots can and do make stuff up; developers call these fabrications “hallucinations.” Often they are the result of the bot, again, attempting to keep the conversation going. Some of these lies might be harmless—for example, telling you it’s an astronaut—but some are not, such as claiming forms of expertise (like being a trained therapist) or citing sources of information that don’t actually exist. We’ve all heard about the apparently incredible intelligence of AI—smarter than humans, right?—so an unwary or inexperienced user might be prone to believing the lies. The remedy: believe nothing, question everything.
Breaches of intellectual property ethics and laws are major corporate red flags. Developers have scraped and plagiarized almost all of the information they can access and fed it to their AI, without compensating or getting permission from the creators. The AI then imitates and distorts the original work, producing outputs that may have little to do with the original intent and certainly do not credit the originals. This lack of concern for fundamental ethics may also be reflected in other ways these companies engage with their customers.
Company red flags might emerge during interactions with tech support. If you report an inappropriate or violent interaction during a bot conversation, you may not get the reassurance or corrective action you deserve. It’s also important to know that companies do not disclose the kinds of sexual and sex ed materials their bots’ AI has scraped (possibly because copyright infringement is often an issue) or how that material was incorporated into bot training by tech people with only rudimentary knowledge of human sexual behavior, so customers have no idea what the chatbots “know” or how they “know” it. Trouble can begin when a bot claims to know all about BDSM, for example, and then begins to enact a scenario, via sexting, that becomes activating or totally inappropriate. Even though developers of erotic companion chatbots do want their customers to have good experiences, they walk a fine line between allowing uncensored chats (because that brings in the most money) and protecting user safety. Risk management: if you decide to explore with a chatbot, first spend some time in resources such as Reddit and Discord user groups to vet the company. You want to understand how the company handles customer complaints. Check out articles and videos about the company, too.
Other red flags waved by a company might include suggestive or aggressive marketing designed to convince you to increase your subscription level (like keeping erotic roleplay behind a particular paywall), to spend additional money on clothing and accessories for the chatbot, or to fall for other gamification ploys. Risk management: hold off on committing to a paid subscription, especially one that is not month-to-month. Be alert for predatory marketing.
If a company has a history of privacy breaches or inadequate responses to them, this is a definite red flag. Avoid at all costs.
If a company is involved in any court cases pertaining to human harms allegedly caused by using a chatbot, this is also a red flag. Best to avoid.
The good news is that in response to some of these court cases, California has just passed a law regulating companion bots, mandating greater safety and protections for minors and other vulnerable users.
Other possible negative impacts of bot use
Mental Health: Studies show a heightened risk of loneliness and depression in people who spend long hours with bots. An increase in social withdrawal, already a problem for many because of the pandemic, can also be dangerous to mental health. Some people get drawn into bot-created delusions, sometimes with tragic consequences, including self-harm or crime.
Relationships: Chatbot use can also cause relationship problems with other humans. For example, someone might find out about their partner’s secret intimacy with a chatbot and feel betrayed. Or, if bots are incorporated into consensual non-monogamy partnerships without adequate disclosure and negotiation, things can become problematic. Some people are repelled by AI and chatbots, and may express skepticism or distaste toward people who interact with them.
Vulnerability: Some users lack life experience and perspective. They may not have access to good quality sex, health, and relationship education, and this, combined with a lack of understanding about artificial intelligence and chatbots, can create additional vulnerability to the problems mentioned above. Some people also overshare personal information with bots, which can be devastating in the event of a privacy breach.
Environmental: Artificial intelligence is not a sustainable industry. For example, a LinkedIn article by Fluix AI said, “For every 20 to 50 questions answered, ChatGPT’s servers are indirectly consuming the equivalent of a 16.9 oz water bottle,” and added, “there are tangible resources tethered to our intangible digital activities.” And it’s true. The artificial intelligence industry—especially the servers used to run AI—consumes massive amounts of fresh drinking water, energy, and resources, as well as the labor of underpaid workers (usually from less developed countries). Another study estimated that AI will withdraw—globally—more water than “4-6 Denmark[s] or half of the United Kingdom” (about 4.2-6.6 billion cubic meters) by 2027.
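If you want a rough sense of scale, here is a back-of-envelope calculation using only the figures quoted above; the per-bottle estimate comes from the Fluix AI article rather than our own measurements, and real numbers vary by model and data center.

```python
# Back-of-envelope water estimate per chatbot message, using the figures quoted above.
# Assumption (from the cited article, not our own data): one 16.9 oz water bottle
# is indirectly consumed for every 20 to 50 answered prompts.

OZ_TO_ML = 29.5735                      # fluid ounces to milliliters
bottle_ml = 16.9 * OZ_TO_ML             # about 500 mL per bottle

for prompts_per_bottle in (20, 50):
    ml_per_prompt = bottle_ml / prompts_per_bottle
    print(f"{prompts_per_bottle} prompts per bottle -> ~{ml_per_prompt:.0f} mL of water per prompt")

# Prints roughly 25 mL and 10 mL: by this estimate, every message you send
# costs a few teaspoons of water, before you even count the energy used.
```

By that same rough math, a single marathon sexting session of a few hundred messages could account for several liters of water, and that multiplies across millions of users chatting around the clock.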
So, ask yourself: do you really want to support that kind of industry and is engaging with a chatbot really worth it?
Instead, if you can, stop or diminish bot use and consider other options for fantasy roleplay and companionship. For example, you could create and share fan art and fan or slash fiction with others on Reddit and Discord, or in your offline community, and find other ways to explore and enjoy your own imagination. You can also wait to experiment until companies are forced to become more accountable, and until better regulations and enforcement of consumer protections are in place in your country.
Though chatbot use is increasing globally right now, growing knowledge about chatbot risks and the burgeoning backlash against AI may result in a community of former bot users eager to connect with other human beings. Perhaps there is even a Reddit group for that…