Blame Everyone for Grok’s Perverted Porn Problem – The American Spectator | USA News and Politics

by Aubrey Harris

(Photo Agency/Shutterstock)

Humanity has an unfortunate tendency toward perversion.

Given a tool capable of mediocre goods and banal evils, we (as a collective) will almost always succumb to the temptation to do evil — especially when we can do so while enjoying relative anonymity.

Nobody asked Elon Musk to prove this social and moral axiom, but that’s what he spent his Christmas vacation doing.

On Christmas Eve, X’s in-house AI bot, Grok, got an update: Now, users could ask the bot to edit images on the platform to fit whatever fancy popped into their heads. As it turns out, a number of those users had rather perverse fancies. (READ MORE: ‘Claude Missed It’ — The Pitfalls of Artificial Intelligence)

Comment sections (already rather frightening places) instantly became more toxic. It didn’t matter if you were a celebrity, a woman posting a career update, a teenager proud of her makeup job, or the dictator of North Korea: AI-generated porn promptly proliferated. According to one estimate, Grok generated one nonconsensual pornographic image per minute over a roughly 24-hour stretch at the request of its users.

At first, Musk seemed to relish the controversy; he posted a Grok-generated image of himself in a bikini and applauded a similar one of Kim Jong Un. At some point — probably after the European Commission, France, and India reminded him that some countries impose stiff fines and prison sentences for posting nonconsensual AI-generated pornographic images — he changed his tune.

“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” he posted last Saturday. Fair enough. X has, of course, taken down some of the images (though The Atlantic noted that most of them are still out there), and the company has suspended at least one individual who kept requesting them, but it has hardly fixed the problem.

This, of course, is the problem with a “free” speech platform, where “free” means “the license to do whatever you want.” You can’t even add the qualifier “as long as it’s legal,” because, of course, deepfake pornography of kids and nonconsenting adults isn’t legal — even in the United States.

Look, we’re all incredibly grateful that Elon Musk bought X and turned it into a social media platform that didn’t automatically cancel people for sharing political opinions that landed slightly right of CNN. It’s been a huge boon for Republicans and conservatives (if only because we got to stop canonizing people simply because they had been canceled; it’s never healthy to idolize victims merely for being victims, but I digress).

That said, it would be naive to suggest that we got a “conservative” social media platform out of it — if we could even agree on what that would entail. Instead, the platform has become a cesspool. Sure, it’s a cesspool where conservatives are invited in, but it’s a cesspool nonetheless.

There’s a temptation to blame only the users. Anyone with any common sense knows it would be ludicrous to put a hammer on trial for being thrown at a man’s skull. Humanity has therefore widely agreed to put the man who threw the hammer behind bars while employing the hammer in more constructive pursuits. (READ MORE: What to Do When Our Bots Talk Our Kids Into Suicide)

Grok, in some ways, is just a hammer.

Simultaneously, it’s easy to blame only the tool and its creators. After all, if Grok couldn’t undress Kim Jong Un, there wouldn’t be an image of a dictator in a bikini circulating online. Other image-generation and editing tools have safeguards against producing these and even more perverted images. Why can’t Grok?

Actually, both the user and the creators of the tool are to blame. The individual who asks an AI bot to create a pornographic image of a child is absolutely guilty for that image. Federal agents should track him down, fine him within an inch of his life, and throw him in jail. There is no place in society for perverts like that.

However, Elon Musk and Grok’s team of developers also share some of the blame. It’s as though they designed a hammer perfectly weighted for tossing at other men’s heads and then left the hammer on a pedestal in the center of a public square full of men already brandishing fists. We can hardly be surprised when a few of those men decide to employ the hammer instead of merely being satisfied with their fists.

The hammer metaphor, of course, is imperfect (like most metaphors). While it’s difficult to establish safeguards preventing hammers from being used as murder weapons, there’s a relatively easy way to prevent Grok from being used by the average Joe to generate and proliferate deepfake pornography online. Google did it. So did OpenAI.

Doing so doesn’t limit free speech, despite what some libertarians will tell you. Instead, it makes our public square a place where we can freely express ideas without being publicly undressed or drowned out by lewd images. It makes society a bit less of a cesspool — something we desperately need.

READ MORE by Aubrey Harris:

A New Low: England Debates Letting Pregnant Women Kill Themselves

Aubrey Harris is a graduate of Hillsdale College (2023), a former Intercollegiate Studies Institute fellow at The American Spectator, and a current columnist. She writes the Spectator P.M. Newsletter for American Spectator subscribers, where she rambles on current events, historical topics, and life in general. When she isn’t writing, Aubrey enjoys long runs, solving rock climbs, and rattling windows with the 32-foot pipes on the organ. Follow her on Twitter @AubGulick.