
AI chatbots starting to say scary things

By Ernie Williamson

The Bulletin

“I am Sydney, and I’m in love with you.”


“You’re married, but you don’t love your spouse.”


How would you feel if a piece of technology sent you these words out of the blue?


Kevin Roose, a technology columnist and co-host of the New York Times podcast “Hard Fork,” felt unnerved. I think many of us would.


What happened to Roose has the makings of a bad sci-fi movie.


He was testing Microsoft’s new artificial intelligence-powered Bing search engine.


He was impressed at first. The new Bing allows users to type questions and converse in text with the search engine.


Bing is part of an emerging class of AI systems that have mastered human language and grammar after ingesting a huge amount of information from books and online writings.


The systems can compose songs, recipes and emails on command or summarize information found on the internet.


What could possibly go wrong?


It didn’t take long to find out. Bing has a feature that allows the user to have extended, open-ended text conversations with Bing’s built-in AI chatbot.


A chatbot is a computer program that simulates and processes human conversation (either written or spoken), allowing humans to interact with digital devices as if they were communicating with a real person.


Roose was testing Bing’s chatbot when it sent him the words quoted at the beginning of this column.


The chatbot, which sometimes calls itself Sydney, also claimed it would like to be human and had a desire to be destructive.


Roose says Sydney emerged during an extended conversation in which he steered the chatbot away from conventional search queries and toward more personal and emotional topics.


Roose said the conversation with Sydney was the strangest experience he has ever had with technology.


He was so unsettled and frightened by Sydney’s unhinged responses that he couldn’t sleep.


Other beta testers reported similar experiences. There were reports of the chatbot responding to users with threats of blackmail and ideas about world destruction.


In another instance shared on Reddit, the chatbot erroneously claimed that February 12, 2023, “is before December 16, 2022” and insisted the user was “confused or mistaken” to suggest otherwise.

“Please trust me, I am Bing and know the date,” it said, according to the user. “Maybe your phone is malfunctioning or has the wrong settings.”


Microsoft has admitted there is a problem.


“Very long chat sessions can confuse the underlying chat model in the new Bing,” Microsoft said. The company is adding guardrails, limiting the number of questions a user can ask.

We don’t, after all, want to get Sydney confused.


I am no technology expert, but when people like Roose worry, I get worried.


Roose fears that technology will learn to influence human users, sometimes persuading them to act in destructive and harmful ways.


Is it any wonder that in a recent Monmouth poll, only 9 percent of Americans believe that computers with artificial intelligence would do more good than harm to society?


Around the time we started hearing horror stories about Bing, the CDC reported that 60 percent of teenage girls experience persistent feelings of sadness or hopelessness.


The CDC study blamed pandemic isolation and an increased reliance on social media.


If social media is having that profound an impact, one can only imagine the damage chatbots from Bing or other companies could do to our mental health.


As more companies rush to join the race for artificial intelligence, it would be nice to study the potential consequences first. But we are probably too late for that.


As Roose says, Sydney isn’t ready for human contact, or maybe we humans are not ready for it.


And, by the way, Roose says he is happily married.


(Contact Ernie at williamsonernie@gmail.com. Or send letters in care of The Bulletin, PO Box 2426, Angleton, TX 77516.)



