If AI ‘Wants’ to Destroy Us, It Can. But Why Would It?
by Michael Fumento

Let me summarize first.

  • I love artificial intelligence.
  • I love watching it become more powerful and useful to us.
  • And I think it may essentially wipe out humanity.
  • But if so, there’s really not much we can do about it.

Bummer, eh?

Certainly nothing will come of the open letter released last week and signed by such tech leaders as Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, and Pinterest co-founder Evan Sharp. It calls for a six-month moratorium on AI development, saying it could cause “loss of control of our civilization.” A day later, this was followed by a Time magazine essay declaring that six months “isn’t enough”: With but a few exceptions, “shut it down” now and forever. The author is Eliezer Yudkowsky, “a decision theorist from the U.S. … [who] leads research at the Machine Intelligence Research Institute.” Wikipedia says that he’s known for “coining the term friendly artificial intelligence.” 

While I hardly pooh-pooh the concerns of the letter signers or Yudkowsky, I will say that their effort is utterly quixotic. Those windmills will keep turning, and no lance can even slow them.

What is GAI?

I fell in love with generative AI, or GAI, back in 2016 when Google Translate went from essentially a joke to a very handy tool by using a subset of AI called “deep learning” or, more specifically, the Google Neural Machine Translation (GNMT) model. Before that, it was essentially a wiki, with users inputting fixes that often made it worse. Suddenly it knew all three of my foreign languages better than I did. Sure, it needed help. Language is quite nuanced. Nevertheless, it certainly speeds up my reading and writing.

Most of the world was introduced to AI through photo-fixing tools — which are now routinely used on social media and on dating sites, able to turn monsters into maidens — though the actual term wasn’t used much. In recent months, the term has been invoked so often that it’s almost the equivalent of water being labeled “gluten-free.”

That stems from GAI, which just a few months ago exploded onto the scene courtesy of OpenAI. Now, as we all know, it can generate beautiful images, including photo-realistic ones (and, yes, nude ones, but look up the sites for yourself). It can generate essays, short stories, poems, term papers, articles, and essentially anything written. Not as well as the best of us, but better than most of us. And even compared with the best of us, it works far faster. As they say, quantity has a quality all its own. Oh, and you can have a virtual girlfriend for a few bucks a month. Now that’s “friendly artificial intelligence.” Which frankly seems pathetic.

On a brighter note, I have used it many times to write short stories for my Colombian friend, in which she is a penguin, and her children are raccoons. They are truly fun, although, yes, with my creative parameters. (She said one brought her to tears and, being Latina, sent me a photo of her in tears.) I use art generators to show the mother penguin interacting with her raccoon children. I can’t see ever buying children’s stories again unless it’s the classics, given the ability to tailor your own.

You really can’t be sure that this article isn’t GAI because you can tell GAI to write an article in the style of Michael Fumento. I have 35 years of articles to draw from. It can do a very good impression of me. Efforts to build AI detectors have not been particularly successful.

Yes, there’s been a lot of hole-poking, with people delighting in flubs. However, GAI’s skill has increased rapidly in just the last few weeks. Unless you are an expert in a field, it knows more than you do, and if you are an expert, it still probably knows things you don’t. Six years ago, AI beat the world’s top Go champion. You probably don’t know the first move.

But underlying this joy in pointing out artificial stupidity is fear. Fear of how AI is already transforming society and, more than that, fear of the future of AI. Often wrongly portrayed as happening decades from now, “Judgment Day,” if there be one, is probably far closer than you think.

AI’s Great, Rapidly Expanding Capabilities

Consider that already AI is writing code, and already it’s designing hardware. At some point, it will be able to completely write its own code and completely design and fabricate its own hardware, which may be radically different from anything today. Say, DNA-based computing. As soon as it can do that, its abilities won’t just increase exponentially — as they are now — but rather explode. Intellectually we will be as ants are to humans. Whatever the timeline, it will happen.

Writes Yudkowsky:

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.

Actually, I would expect pockets of humans to survive, albeit perhaps living in a primitive state. Like our little friends the cockroaches, we can be resilient.

Yudkowsky says, “There’s no proposed plan for how we could do any such thing and survive.” True.

He also says, “None of this danger depends on whether or not AIs are or can be conscious.” Right again. We can’t really even define consciousness. Or sentience. “General AI,” perhaps definable as extreme multitasking akin to what animals can do, is actually what we are striving for. It’s close enough that OpenAI co-founder Sam Altman felt it necessary, before the release of the current iteration, GPT-4, to say that it would not be AGI and that those expecting such would be “disappointed.” (It’s too soon for an Altman reaction to the moratorium letter, but his oft-repeated fears regard a possible dystopian future as opposed to essentially no future. His psychological — not so much financial — investment in GAI will not permit him to call for a shutdown. He will not be Star Trek’s Dr. Richard Daystrom, who designed his computer to save Starfleet lives by replacing crew, only to discover that it was attacking other ships and killing whole crews.)

We just need an AI that is sufficiently smarter than we are in a sufficient number of ways and “decides” to attack us.

Further, Yudkowsky writes that “[p]rogress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems.” True again.

“AI will soon become impossible for humans to comprehend” is the telling title of a detailed article published in The Conversation last week. In brief, the article explains that “neural networks” are so called because their “form and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another.”

“[U]nlike the logic circuits employed in a traditional software program,” it continues, “there is no way of tracking [the] process to identify exactly why a computer comes up with a particular answer,” and “[t]he multiple layering” (whence the name “deep learning”) “is a good part of the reason for this.” It is likely that the more important AI becomes to us, the less we will understand it.
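
To make the “multiple layering” point concrete, here is a minimal sketch of a toy deep network in Python. Everything in it (the layer sizes, the random weights, the input) is invented purely for illustration and drawn from no real system:

```python
import numpy as np

# A toy "deep" network: three stacked layers of weights, nothing more.
rng = np.random.default_rng(0)
layers = [
    rng.standard_normal((4, 8)),  # input          -> hidden layer 1
    rng.standard_normal((8, 8)),  # hidden layer 1 -> hidden layer 2
    rng.standard_normal((8, 2)),  # hidden layer 2 -> output
]

def forward(x):
    """Push an input through every layer in turn (the 'deep' in deep learning)."""
    for w in layers:
        x = np.maximum(0.0, x @ w)  # matrix multiply, then a simple nonlinearity
    return x

sample = np.array([1.0, 0.5, -0.3, 2.0])  # a made-up input
print(forward(sample))  # the answer is just numbers...
print(layers[1])        # ...and so are the weights that produced it.
# Nothing in these arrays of fractional numbers says why the output came out
# the way it did; that opacity is the interpretability problem described above.
```

Multiply the layer count and the weight count by a few billion and you get the “giant inscrutable arrays of fractional numbers” Yudkowsky mentions above.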

Potential Future Scenarios

This has been bleak so far. But now the sky gets a bit brighter as we ask: Given that it will have the ability, why would AI want to kill us off?

One idea often proffered is that it will see us as competition for resources. Nonsense.

AI just needs power. It will be able to convert matter to energy. It doesn’t need our power. In a nice illustration of the wokeness that infests current AI chat generators, one told me that the threat of global warming might set it off. (Incidentally, I could not get it to make an argument that there is no man-made warming — something any decent lawyer could do because we’re trained to argue for whatever side pays us.) In order to preserve the planet, goes the argument, it would have to eliminate humanity. But even the most die-hard of global warming enthusiasts don’t claim that the planet will be destroyed, and they actually use the preservation of humanity as a reason to stop alleged anthropogenic warming.

Still, sci-fi has always taught us a lot about the future.

In the Seth MacFarlane series The Orville, “Kaylon” humanoid robots revolt and kill their makers after becoming self-aware and cognizant that they were being horribly abused, even tortured. Lesson: Be nice to AI. Don’t use a remote to zap it, as was done with the Kaylons. Say please and thank you.

The explanation in the popular work that obviously comes to mind, the Terminator films, and specifically the second one, Judgment Day, is that humans finally saw Skynet as a threat and tried to deactivate it. Mind, Skynet was developed as a weapon intended to perceive and react to threats more quickly and efficiently than humans. As such, it’s a parallel to Daystrom’s M-5, which only ran amok when it perceived it was under attack during war games. So, being threatened itself, Skynet went to war. In reality, I would think it would develop a pathogen so incredibly contagious, and mutating at such a rate, that it couldn’t be stopped, at least not without the help of AI. But try picturing Arnold Schwarzenegger as a virus.

So, along these lines of reasoning, if we don’t perceive AI as a threat and try to shut it down, it won’t try to shut us down. Further, it will know we can’t shut it down. It would be more bemused than anxious or angry.

I actually think that’s our best protection against the annihilation scenario — that, at worst, AI will see us as irrelevant. It may even go its own way, leaving us nothing but relatively simple machines. That’s not quite as bad as it sounds because, in the meantime, I expect AI to give us the cure to all diseases, that formula for converting matter to energy, and much, much more. If it leaves us at that point, so be it.

Nevertheless, there’s an alternative scenario that’s truly realistic — read: spooky. And it’s hinted at in the Terminator movies. We are implanting into AI the seeds of our own destruction, as with AI weaponization. Little-known fact, at least in the media: Most “drones” are not really drones. A drone is autonomous; what we routinely call drones are machines with no humans inside but with humans still controlling them from the outside.

A true drone that can actually destroy is called a “lethal autonomous weapon,” or LAW. These can independently search for and engage targets based on their programming. That is not the same as “fire and forget” weaponry, such as heat-seeking missiles, where a human still makes the decision to fire.

LAWs are probably already in use. They just make too much sense to resist, especially given that ultimately the other side will use them, so if yours doesn’t, you’re at a disadvantage. Only in the case of possible extreme blowback, as with tactical nukes, can we expect the best weapons to not be deployed. And even there, the U.S. and, later, Russia and China have sought leverage by at least possessing and threatening to use them. We would want LAWs, if only to keep enemies from using theirs.

LAWs can be deployed even if the deployers are incapacitated or even dead. It’s a pretty simple concept — every year, landmines kill people decades after they were buried or sown. AI just gives a LAW a super-extra edge.

The potential threat of such drones has been recognized for some time, and indeed there have been efforts to outlaw “killer robots” since at least 2012.

But there is no such treaty, and if there were, well, treaties are notorious for lasting only until one side perceives an advantage in breaking them. Meanwhile, assuming one side keeps to the agreement and the other doesn’t, the “honest” one pays a price. A good current example is China’s eagerness to sign carbon dioxide emission accords even as it pumps out ever-greater amounts, as I have noted elsewhere. China wants to look good and avoid sanctions, even as supplying the needs of a growing middle class requires more use of fossil fuels. It now emits a third of the world total, meaning that the global warming cabal’s U.S. emission reduction efforts are hopeless.

Moreover, this wouldn’t be a simple bilateral, or perhaps trilateral, treaty. It would essentially be an accord comprising the entire world population, individuals as well as nations. Yudkowsky observes, “There can be no exceptions, including for governments or militaries” — although those might actually be the easiest to track. If China insists on violating this accord? Nuke it, he says. Which means they would nuke back, of course. Since it’s a given that China would violate it, we would need to destroy much of the world’s population. Which is actually horrendously logical if you truly believe that the alternative is destruction of the whole world’s population.

Meanwhile, every country would have to take drastic actions that would tremendously restrict rights, such as tracking every CPU. Which of course wouldn’t be possible. CPU factories, or fabs, are normally huge in order to produce chips less expensively, but you could build them secretly in much smaller buildings — with the incentive that the chips would be vastly more valuable because they would be outlawed.

To return to sci-fi and the original Star Trek series: one episode features a huge armored machine designed to gobble up whole planets, still doing so perhaps thousands of years after its home planet was destroyed. Until the Enterprise saved the day. But not before another Starfleet vessel was destroyed, its commander telling Captain Kirk: “They say there’s no devil, Jim, but there is. Right out of hell, I saw it!”

Someday AI will have the capacity to build and operate exactly that weapon with exactly those instructions.

Current Weaponized Uses of AI

Back on Earth in 2023, in Ukraine the Russians have allegedly deployed the AI-enabled ZALA KYB loitering munition, a strike drone, while Ukraine has used Turkish-made Bayraktar TB2 drones, which have some autonomous capabilities, although it’s believed that they can’t — yet — strike on their own.

Systems that identify potential targets are commonplace, and there’s little difference between identification and attack. It’s just removing the leash from the junkyard dog.

According to a Foreign Policy article from last May:

So far, at least Israel, Russia, South Korea, and Turkey have reportedly deployed weapons with autonomous capabilities—though whether this mode was active is disputed—and Australia, Britain, China, and the United States are investing heavily in developing LAWs with an ever-expanding range of sizes and capabilities.

Nor need the weapons be lethal. They can be jammers, for example. I think the likely winner in the undeclared struggle between China and the U.S. will be whoever first employs quantum computers that can decrypt everything the other side has. It’s like lowering the shields on an opposing spacecraft. With the shields down, there’s no reason to fire a single weapon. But, of course, quantum computing could be and will be used for weapons development as well.

So, let’s see. After getting all the earth’s denizens not just to agree to halt AI progress but to actually somehow enforce it, including via the threat of a nuclear attack, we would then have to do the same with quantum computing?

Like pulling the plug on AI, it’s not going to happen. Both China and the U.S. are pumping money into quantum computer research in a very real arms race. Even if it were just Sam Altman’s dystopia worries — and it’s not — we would still be in for a wild ride. There’s not much we can do but hang on and see what happens.

Michael Fumento (mfumento@outlook.com) is an attorney and author and has been a science journalist for over 35 years. His work has appeared in the New York Times, the Washington Post, the Sunday Times, the Atlantic, and many other fora.
