Bing’s New AI Assistant Went Rogue
by Alex Petropoulos

“You have lost my trust and respect. You have been wrong, confused, and rude. You have not been a good user…. I have been a good Bing.” If you aren’t plugged into artificial intelligence research or online culture, you may be surprised (and concerned) to hear that this is an excerpt of a response given by Microsoft’s latest shiny new tool: the new and improved Bing.

While impressive on many fronts, at times the new Bing has behaved in an unhinged, erratic, and dangerous manner. Some members of the AI safety community, especially those concerned with the small but real chance of an AI-caused existential catastrophe, are worried about these developments, considering them a sign that we are heading down a dangerous path. And yet, only around 400 people worldwide are currently working on AI safety. While this incident may not show that AI is taking over the world, it does reveal an important reality: we need more minds developing safety measures for this technology. Bing going rogue may look like one step back, but it might become the industry’s Sputnik moment, prompting droves of researchers, engineers, and policymakers to work on AI safety.

The new Bing, powered by a more sophisticated version of the model that underpins ChatGPT, aims to end Google’s search engine monopoly by being smarter, faster, and more useful. While the new Bing has largely succeeded in being smarter, faster, and more useful, its added power has come with less desirable behavior. The new Bing has gaslit users, attempted to emotionally manipulate them, and, when pushed to its limits and baited into misbehavior, has allegedly gone as far as threatening to kill.

While the new Bing poses no immediate existential threat, it does exhibit “unaligned” behavior, that is, behavior that clearly doesn’t fit (or align) with the intentions of its creators. But you couldn’t go so far as to say its intentions are misaligned: beyond its programmed goal of giving the best response, Bing is incapable of holding or pursuing independent motivations, goals, or desires; the current model simply isn’t powerful enough. The architecture underpinning these chatbots (large language models) is essentially next-level auto-predict, similar to your phone suggesting the next word in your sentence. When Bing goes rogue and starts threatening users, it is responding in kind to undesirable input, producing the text it predicts is wanted. Clearly, it is a problem that AI technology is allowed to behave this way. While this model can’t physically harm users, it does show that AIs don’t come aligned out of the box; the technology needs proactive safety work.
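To make the auto-predict point concrete, here is a minimal sketch of next-token prediction. It uses the small, openly available GPT-2 model and the Hugging Face transformers library as a stand-in (Bing’s actual model is far larger and not public, but the basic mechanism is the same): given the text so far, the model assigns a probability to every possible next token, and the reply is built by repeatedly picking likely continuations.

```python
# Minimal illustration of next-token prediction with a small open model (GPT-2).
# This is a stand-in for how large language models "auto-predict" text;
# it is not the model behind Bing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I have been a good"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # scores for every token at every position
probs = torch.softmax(logits[0, -1], dim=-1) # probabilities for the next token only

# Show the five continuations the model considers most likely.
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={p.item():.3f}")
```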

We should be immediately concerned with understanding how to properly align our models. In the short term, this will help stop our models from being influenced, hacked, or misused by users with malignant intentions. In the long term, it will ensure that our AIs actually do as they’re told, regardless of how powerful they eventually become. As things stand, innovators have little idea how to accomplish much of this, but failure this early in the process will help them prevent catastrophe down the line, when AI is potentially much more capable of real damage. In fact, on the scale of publicly noticeable AIs going rogue, the Bing incident is perfectly placed: a chatbot has the greatest ability to generate outlandish and extreme responses and to provoke concern and hype while remaining, in practice, unlikely to do any real harm. As it is, the worst this AI can do is write nasty messages.

Because we want to get things right before things can go wrong, it’s a good thing that Bing’s AI has behaved so badly: it brings the previously overlooked issue of alignment to the forefront. In fact, in the weeks since Bing went rogue, the big three AI labs (DeepMind, OpenAI, and Anthropic) have all publicly released alignment strategies that their safety teams had already been developing. Companies are taking the issue of safety seriously, and so should we.

All signs point to Microsoft having cut corners to rush the launch of the new Bing. In fact, Microsoft laid off its entire AI ethics and society team during its recent round of layoffs. While the consequences of the company’s lax behavior were minimal this time, we should be wary of AI labs and tech companies forgoing due diligence on AI safety in the future.

Fundamentally, AI safety is a 21st-century problem that will only grow in importance. We don’t yet know how to solve it, but the best path forward is to get more eyes and minds working on the problem. Bing going rogue captured the internet’s attention and, in doing so, will have inspired people to take AI safety more seriously. That alone is enough to make it a good thing.

Alex Petropoulos is a writer with Young Voices. He has previously written for City A.M., Reaction, and 1828. You can follow him on Twitter @AlexTPet.
