Your ChatGPT Relationship Status Shouldn’t Be Complicated

The way people talk to each other is influenced by their social roles. But ChatGPT is blurring the lines of communication.

The technology behind ChatGPT has been around for several years without drawing much notice. It was the addition of a chatbot interface that made it so popular. In other words, it wasn’t a development in AI per se but a change in how the AI interacted with people that captured the world’s attention.

Very quickly, people started thinking about ChatGPT as an autonomous social entity. This is not surprising. As early as 1996, Byron Reeves and Clifford Nass looked at the personal computers of their time and found that “equating mediated and real life is neither rare nor unreasonable. It is very common, it is easy to foster, it does not depend on fancy media equipment, and thinking will not make it go away.” In other words, people’s fundamental expectation of technology is that it behaves and interacts like a human being, even when they know it is “only a computer.” Sherry Turkle, an MIT professor who has studied AI agents and robots since the 1990s, stresses the same point and claims that lifelike forms of communication, such as body language and verbal cues, “push our Darwinian buttons”—they can make us experience technology as social, even if we understand rationally that it is not.

If these scholars saw the social potential—and risk—in decades-old computer interfaces, it’s reasonable to assume that ChatGPT has a similar, and probably stronger, effect. It uses first-person language, retains context, and provides answers in a compelling, confident, and conversational style. Bing’s implementation of ChatGPT even uses emojis. This is quite a step up on the social ladder from the more technical output one would get from searching, say, Google.

Critics of ChatGPT have focused on the harms that its outputs can cause, like misinformation and hateful content. But there are also risks in the mere choice of a social conversational style and in the AI’s attempt to emulate people as closely as possible. 

The Risks of Social Interfaces

New York Times reporter Kevin Roose got caught up in a two-hour conversation with Bing’s chatbot that ended in the chatbot’s declaration of love, even though Roose repeatedly asked it to stop. That kind of exchange can be highly disturbing for the user, and this sort of emotional manipulation would be even more harmful for vulnerable groups, such as teenagers or people who have experienced harassment. Using human terminology and emotion signals, like emojis, is also a form of emotional deception. A language model like ChatGPT does not have emotions. It does not laugh or cry. It doesn’t even understand the meaning of such actions.

Emotional deception in AI agents is not only morally problematic; humanlike design can also make such agents more persuasive. Technology that acts in humanlike ways is likely to persuade people to act, even when the requests are irrational, come from a faulty AI agent, or arrive in an emergency. This persuasiveness is dangerous because companies can use it in ways that are unwanted by, or even unknown to, users, from convincing them to buy products to influencing their political views.

As a result, some have taken a step back. Robot design researchers, for example, have promoted a non-humanlike approach as a way to lower people’s expectations for social interaction. They suggest alternative designs that do not replicate people’s ways of interacting, thus setting more appropriate expectations for a piece of technology.

Defining Rules 

Some of the risks of social interactions with chatbots can be addressed by designing clear social roles and boundaries for them. Humans choose and switch roles all the time. The same person can move back and forth between their roles as parent, employee, or sibling, and with each switch, the context and the expected boundaries of the interaction change too. You wouldn’t use the same language when talking to your child as you would when chatting with a coworker.

In contrast, ChatGPT exists in a social vacuum. Although there are some red lines it tries not to cross, it doesn’t have a clear social role or expertise, nor a specific goal or predefined intent. Perhaps this was a conscious choice by OpenAI, the creators of ChatGPT, to promote a multitude of uses or a do-it-all entity. More likely, it reflects a lack of understanding of the social reach of conversational agents. Whatever the reason, this open-endedness sets the stage for extreme and risky interactions. A conversation could go in any direction, and the AI could take on any social role, from efficient email assistant to obsessive lover.

My own research has shown the importance of clear social roles and boundaries for social AI. For example, in a study with my colleague Samantha Reig, we found that AI agents that tried to fill multiple roles that were vastly different from each other (say, providing users with a beauty service and later giving them health advice) lowered people’s trust in the system and made them skeptical of its reliability. 

In contrast, by studying families with teenagers who used conversational AI, we found that an AI agent needs to clearly communicate its affiliation—who does the agent answer to, the teenager or the parents?—in order to gain users’ trust and be useful to them. When families didn’t have that information, they found it difficult to anticipate how the system would act and were less likely to give the AI agent personal information. For example, teenagers were concerned that agents would share more with their parents than they would have liked, which made them more hesitant to use the technology. Having an AI agent’s role clearly defined as affiliated with the teen, and not their parents, would make the technology more predictable and trustworthy.

Assigning a social role to an AI agent is a useful way to think about designing interactions with a chatbot, and it would help overcome some of these issues. If a child has an AI tutor, its language model should align with this role. Specific boundaries could be defined by the educator, who would adjust them to educational goals and classroom norms. For instance, the tutor may be allowed to ask guiding questions but not give answers; it could help correct improper grammar but not write entire texts for the student. The focus of conversation would be on the educational material and would avoid profanities, politics, and sexual language.
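To make this concrete, here is a minimal sketch in Python of how an educator-defined role might be written down and turned into plain-language instructions for a chatbot. The structure, field names, and wording are illustrative assumptions, not a real product feature or a prescribed standard.

```python
# A hypothetical role specification for an AI tutor, as an educator might define it.
# The field names and wording are illustrative, not part of any real product or API.
TUTOR_ROLE = {
    "role_name": "writing tutor",
    "answers_to": "the educator",  # affiliation: who the agent serves
    "allowed": [
        "ask guiding questions",
        "point out grammar problems and suggest corrections",
    ],
    "not_allowed": [
        "give final answers outright",
        "write entire texts for the student",
        "discuss profanity, politics, or sexual content",
    ],
}

def role_to_instructions(role: dict) -> str:
    """Turn a role specification into plain-language instructions
    that can be placed at the start of a chatbot conversation."""
    allowed = "; ".join(role["allowed"])
    not_allowed = "; ".join(role["not_allowed"])
    return (
        f"You are acting as a {role['role_name']} and you answer to {role['answers_to']}. "
        f"You may: {allowed}. "
        f"You must not: {not_allowed}."
    )

print(role_to_instructions(TUTOR_ROLE))
```

The value of writing the role down this way lies less in the particular format than in making the boundaries explicit, inspectable, and tied to a single role, so that educators, parents, and auditors can see exactly what the agent is and is not supposed to do.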

But if the agent were in a confidant role for this child, we might expect different guardrails. The constraints might be more broadly defined, giving more responsibility to the child, and there might be more room for playful interactions and responses. Still, some boundaries should be set around age-appropriate language and content and around protecting the child’s physical and mental health. Social contexts are also not limited to one-human/one-agent interactions.

Once we acknowledge that agents need social roles and boundaries, we need to accept that AI enters a complex social fabric in which multiple stakeholders can have diverging and even conflicting values. In the AI tutor example, the goals of the educator may differ from those of the child, their parents, or the school’s principal. The educator may want the student to get stuck in a productive way, while the parents may prioritize high performance. The principal, on the other hand, might be more concerned with average class outcomes and tutor costs. This kind of constraint-centered thinking is not just about limiting the system; it is also about guiding the user. Knowing an AI’s social role and context can shape user expectations and influence the types of questions and requests raised to the AI in the first place. Setting boundaries on expectations, then, can also help set the stage for safer and more productive interactions.

A Path Forward

How can companies begin to adopt social constraints in the design of AI agents? One small example is a feature that OpenAI introduced when launching GPT-4. The new demo has a “System” input field in its interface, giving users an option to add high-level guides and context to the conversation—or, as this article suggests, a social role and interaction boundaries. This is a good start but not enough, as OpenAI is not transparent about how that input changes the AI’s responses. The System field is also not necessarily concerned with the social aspects of the AI’s role in interactions with users.
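The same idea appears in OpenAI’s API, where a conversation can open with a “system” message. Below is a minimal sketch of how a social role and its boundaries might be passed that way, assuming the current openai Python SDK (v1.x) and an OPENAI_API_KEY environment variable; the tutor wording is an illustrative assumption, not an OpenAI-prescribed format.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The "system" message is where a social role and its boundaries can be stated up front.
# The role described here is illustrative; OpenAI does not prescribe any particular wording.
system_prompt = (
    "You are a writing tutor for a middle-school student and you answer to the educator. "
    "Ask guiding questions and point out grammar problems, but do not give final answers, "
    "write entire texts for the student, or discuss profanity, politics, or sexual content."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Can you just write my essay about the water cycle?"},
    ],
)

print(response.choices[0].message.content)
```

As noted above, OpenAI does not document in detail how this message shapes the model’s behavior, so a system prompt like this should be treated as a request for a role, not a guarantee that the role will be respected.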

A well-defined social context can help structure the social boundaries that we are interested in as a society. It can help companies provide a clear framing of what their AI is designed for, and avoid roles that we deem inappropriate or harmful for AI to fill. It can also allow researchers and auditors to keep track of how conversational technology is being used and the risks it poses, including those we may not be aware of yet. Without these constraints, and with a thoughtless attempt to create an omniscient AI that has no specific social role, the effects can quickly spiral out of control.