Researchers worry about AI turning humans into jerks

OpenAI safety researchers think GPT-4o could influence 'social norms.'
Applying the logic of chatbot conversations to humans could make a person awkward, impatient, or just plain rude. Credit: DepositPhotos

It has never taken all that much for people to start treating computers like humans. Ever since text-based chatbots first started gaining mainstream attention in the early 2000s, a small subset of tech users have spent hours holding conversations with machines. In some cases, users have formed what they believe are genuine friendships and even romantic relationships with inanimate strings of code. At least one user of Replika, a more modern conversational AI tool, has even virtually married their AI companion.

Safety researchers at OpenAI, a company that is itself no stranger to its own chatbot appearing to solicit relationships with some users, are now warning about the potential pitfalls of getting too close to these models. In a recent safety analysis of its new, conversational GPT-4o chatbot, researchers said the model’s realistic, human-sounding conversational rhythm could lead some users to anthropomorphize the AI and trust it as they would a human.

[ Related: 13 percent of AI chat bot users in the US just want to talk ]

This added level of comfort or trust, the researchers added, could make users more susceptible to believing fabricated AI “hallucinations” as true statements of fact. Too much time spent interacting with these increasingly realistic chatbots may also end up influencing “social norms,” and not always in a good way. Particularly isolated individuals, the report notes, could develop an “emotional reliance” on the AI.

Relationships with realistic AI may affect the way people speak with each other 

GPT-4o, which began rolling out late last month, was specifically designed to communicate in ways that feel and sound more human. Unlike ChatGPT before it, GPT-4o communicates using voice audio and can respond to queries almost as quickly (in around 232 milliseconds) as another person. One of the selectable AI voices, which allegedly sounds similar to an AI character voiced by Scarlett Johansson in the movie Her, has already been accused of being overly sexualized and flirty. Ironically, the 2013 film focuses on a lonely man who becomes romantically attached to an AI assistant that speaks to him through an earbud. (Spoiler: it doesn’t end well for the humans.) Johansson has accused OpenAI of copying her voice without her consent, which the company denies. OpenAI CEO Sam Altman, meanwhile, has previously called Her “incredibly prophetic.”

But OpenAI safety researchers say this human mimicry could stray beyond the occasional cringeworthy exchange and into potentially dangerous territory. In a section of the report titled “Anthropomorphism and emotional reliance,” the safety researchers said they observed human testers use language that suggested they were forming strong, intimate connections with the models. One of those testers reportedly used the phrase “This is our last day together” before parting ways with the machine. Though seemingly “benign,” researchers said these types of relationships should be investigated to understand how they “manifest over longer periods of time.”

The research suggests these extended conversations with somewhat convincingly human-sounding AI models could have “externalities” that impact human-to-human interactions. In other words, conversational patterns learned while speaking with an AI could then pop up when that same person holds a conversation with a human. But speaking with a machine and speaking with a human aren’t the same, even if they may sound similar on the surface. OpenAI notes its model is programmed to be deferential to the user, which means it will cede authority and let the user interrupt it and otherwise dictate the conversation. In theory, a user who normalizes conversations with machines could then find themselves interjecting, interrupting, and failing to observe general social cues. Applying the logic of chatbot conversations to humans could make a person awkward, impatient, or just plain rude.

Humans don’t exactly have a great track record of treating machines kindly. In the context of chatbots, some users of Replika have reportedly taken advantage of the model’s deference to the user to engage in abusive, berating, and cruel language. One user interviewed by Futurism earlier this year claimed he threatened to uninstall his Replika AI model just so he could hear it beg him not to. If those examples are any guide, chatbots could risk serving as a breeding ground for resentment that may then manifest itself in real-world relationships.

More human-feeling chatbots aren’t necessarily all bad. In the report, the researchers suggest the models could benefit particularly lonely people who are yearning for some semblance of human conversation. Elsewhere, some AI users have claimed AI companions can help anxious or nervous individuals build up the confidence to eventually start dating in the real world. Chatbots also offer people with learning differences an outlet to express themselves freely and practice conversing with relative privacy.

On the flip side, the AI safety researchers fear advanced versions of these models could have the opposite effect and reduce someone’s perceived need to speak with other humans and develop healthy relationships with them. It’s also unclear how individuals who rely on these models for companionship would respond to the model changing personality through an update or even breaking up with them, as has reportedly happened in the past. All of these observations, the report notes, require further testing and investigation. The researchers say they would like to recruit a broader population of testers who have “varied needs and desires” of AI models to understand how their experiences change over longer periods of time.

AI safety concerns are running up against business interests

The safety report’s tone, which emphasizes caution and the need for further research, appears to run counter to OpenAI’s larger business strategy of pumping out new products increasingly quickly. This apparent tension between safety and speed isn’t new. CEO Sam Altman famously found himself at the center of a corporate power struggle at the company last year after some members of the board alleged he was “not consistently candid in his communications.”

Altman ultimately emerged from that skirmish victorious and eventually formed a new safety team with himself at the helm. The company also reportedly disbanded entirely a safety team focused on analyzing long-term AI risks. That shake-up inspired the resignation of prominent OpenAI researcher Jan Leike, who released a statement claiming the company’s safety culture had “taken a backseat to shiny products.”

With all this overarching context in mind, it’s difficult to predict which minds will rule the day at OpenAI when it comes to chatbot safety. Will the company heed the advice of its safety team and study the effects of long-term relationships with its realistic AIs, or will it simply roll out the service to as many users as possible, with features overwhelmingly intended to prioritize engagement and retention? At least so far, the approach looks like the latter.