Forty Years of the Computer Revolution

Let’s start with a story. Just a few years ago, when I first turned to think seriously about the changes wrought by 40 years of the computer revolution, I found myself in a meeting with some senior figures in cybertechnology: directors of computer labs, explorers of artificial intelligence, people with their fingers on the pulse, or maybe the throat, of the future.


In the chit-chat before the meeting began, I mentioned a British study claiming that “many young adults spend a third of their waking lives” on electronic devices. These young people, ages 16 to 22, check their phones an average of 85 times a day, even though they already have their devices set to beep or vibrate whenever a new message arrives.

In response, one of those senior technology sophisticates I was sitting beside, a director of a computer lab, said this study might as well have been done with his own children. His kids constantly consult their phones and tablets, he said. Like smokers who have to sneak out for a cigarette in the middle of a meal, they can’t even make it through a restaurant dinner without withdrawing from conversation to look for messages. They even get grumpy, he said, if someone tries to stop them.

You know just what he meant. Other recent studies have pointed out that people in constant electronic communication become disturbed, set off-kilter, when cut off from their computerized contact. They display small signs of irritability, anxiety, confusion, and even existential dread: a general feeling of unfocused threat and displacement. These are, I pointed out, the classic symptoms of the psychological category of disease known as addiction. Those kids are addicted to their electronic devices.

Then the computer-lab director, one of the masters of our future, said something that struck me as both profound and telling about where we are culturally at this moment. Here he was: a significant figure with weight behind his opinions, a man presumably trained by graduate school to articulate his thoughts. When I suggested that his children were displaying the classic symptoms of addiction, he answered, “So what?” In other words, so what if kids are addicted to this useful technology? What difference does it make? Why should anyone spend the least division of an hour worrying about such stuff?


There are, of course, several ways we could take this. Perhaps he was just being morally obtuse and didn’t particularly care about his children — although that seems unlikely. Or perhaps he was just looking for a way to close down the conversation, though it did, in fact, go on for a while, until the actual meeting finally started and the person running it glared us all into silence.

But let’s suppose for a moment that this senior figure in the computer world genuinely meant what he said: The sheer fact of computer addiction is not, and ought not to be, worrying or bothersome. That is an amazing thought — a proposition that reaches back toward a fascinating set of necessary prior assumptions and reaches out toward an equally fascinating set of consequences and implications.

As it happens, our computer expert is wrong, just on the facts. New studies of psychological damage from relentless connectivity are starting to line up with the anecdotal evidence we’ve all seen. The cross-generational data are not yet complete and won’t be complete until the damage has been suffered for years. But we’ve got enough to suggest that — in the aggregate — clear psychological deficits are resulting from our machine-enabled interconnectedness.

We have an infantilizing of affect, for example, as you might guess if you have seen young people forced to be around adults. And that regression in affect is matched, as it must be, by a reduction of social skills as verbal proficiency atrophies when personal conversation is truncated in online interactions. For that matter, we have dangerously indulged fantasies in pornography, online posing, and role-playing.

Meanwhile, we have a fetishizing of commercial commodities beyond even what television did to prior generations. We have a devaluation of actual life as social media consumers imagine the constantly Instagrammed lives of others to be better than their own lives. Worse psychologically is the parallel (and well-documented) increase in body hatred and dysmorphia since the World Wide Web began on August 6, 1991. And through it all, we see an overvaluation of the esteem of others as expressed through social media.

The neural mechanisms of attention, the pathways by which the brain forms habits of focus, are not well understood. The problem is that the growth of the tech giants has made that lack of understanding entirely beside the point. Economic competition remains the greatest motor, the most dynamic device, for solving a puzzle the world has never known before. In its essence, the web is an economic competition for attracting attention.

Black-box neural networks, the most sophisticated forms of AI, solve problems more or less by not solving them. They use a kind of brute-force correction of pattern recognition, growing ever more precise, that bypasses understanding and forges toward the most exact practical answer. In exactly the same way, the web acts as a brute-force, self-correcting network for arriving at the most efficient and successful ways to forge the neural pathways of attraction. We don’t know how it works. We just know that it works.
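To make that analogy concrete, here is a minimal illustrative sketch (mine, not drawn from any system discussed in this article) of brute-force correction at work: a single artificial neuron that learns a simple pattern by nudging its weights after every mistake. Nothing in the loop understands the pattern; the repeated corrections simply converge on answers that work.

```python
import random

# Training data for one simple pattern: "output 1 only when both inputs are 1."
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # start with random guesses
bias = 0.0
rate = 0.1  # how big a nudge each correction applies

# Brute-force correction: no insight into the pattern, just
# repeated adjustment whenever the answer comes out wrong.
for epoch in range(100):
    for (x1, x2), target in data:
        out = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
        error = target - out       # wrong answer? then...
        w[0] += rate * error * x1  # ...shift each weight slightly
        w[1] += rate * error * x2  # in the direction that would
        bias += rate * error       # have reduced the mistake

# After enough blind corrections, the answers come out right.
for (x1, x2), target in data:
    out = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
    print(f"inputs={(x1, x2)}  learned={out}  expected={target}")
```

Scale that blind corrective loop up to billions of parameters and billions of users’ clicks, and you have something like the web’s machinery for learning what holds our attention.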

And it’s the young who are the easiest audience, since one of the things we do understand is that the neural pathways in the brain are not well formed until late adolescence. No wonder Steve Jobs and a surprising number of other seminal figures in the computer revolution limited screen time for their children. They wanted to protect their own families from the devices they were becoming wealthy by producing.


The dangers of online life typically fall on the poor. Children of impoverished families end up using technology more than children from wealthier families in America. White children average less than nine hours a day staring at a television, computer, or cellphone screen. That’s terrifying, but black and Hispanic children average 13 hours a day. We have a digital divide between rich and poor — but the divide turns out to be caused by the fact that the rich can afford activities that take their children away from digital screens. The poor, far more than the rich, are living in a technological cage.

Even more than money, family matters — a point made strongly in Naomi Schaefer Riley’s recent book, Be the Parent, Please (2018). Digital addiction puts at greatest risk those with weak parental oversight: children of absent fathers, children with unmonitored access to the web. In The Cyber Effect (2017) — as dismal an examination of the American condition as a reader is likely to find — Mary Aiken observes that “if you spend time online, you are likely to encounter a far greater variety of human behavior than you have before — from the vulnerable to the criminal, from the gleeful and altruistic to the dark and murderous.” Even a few decades ago, to find sadomasochism required that one “dig around in the public library for a copy of the Marquis de Sade’s writings or go to an art-house cinema.”

At its most basic, the internet has made public access to the violent recesses of the human mind so easy that we have ended up normalizing what St. John Henry Newman once described as the stained imagination. And every teenage boy can spend hours watching it. Is it really much surprise, then, when the psychologically weakest and most confused are drawn into evil?

Let’s leave aside the fact that our senior computer sophisticate, our important director of a computer lab, was simply wrong on the facts: Interconnectivity does matter psychologically and socially, and the effects are not neutral. Let’s concentrate instead on what it means to say So what? — as though addiction doesn’t matter.

We used to think that addiction was bad in itself because it was a derogation of the human. Human being, our existence as embodied beings, has at its best a shape that is not reached by the alcoholic, the heroin addict, the chronic onanist, the psychotically obsessed. Addiction was considered a flaw in what ought to be a fully realized adult — a grown-up: “noble in reason, infinite in faculty … the beauty of the world, the paragon of animals.” The mainstreaming of addiction through technology is a small portion, a telling example, of the general diminishment of the human. For the sake of their souls, get your children offline.

For that matter, get yourself offline. The joy of the internet is that it allows like-minded people to find one another. And the horror of the internet comes from exactly the same source: It allows like-minded people to find one another.

When the web first emerged, baseball-card collectors, used-book buyers, and knitting enthusiasts could suddenly share news. One fascinating effect of this was financial transparency. Everyone who finds an old coin can now find out its worth. No one stumbles on an impossibly great buy in a record store because the seller now knows the average price across the nation. A national transparency of markets resulted from the ability of like-minded people to find one another.

Of course, the same process made it possible for neo-Nazis and child molesters to find one another. Perhaps even worse for society, the internet allowed things like 4chan, an anonymous imageboard notorious for its racism and sexism. Interestingly, as Dale Beran notes in his recent book, It Came From Something Awful (2019), the people involved were young men who, in the early 2000s, found themselves underemployed. Clever in a jokey way, with hours a day to spend online, they began creating for one another comic memes that twitted the culture. And since the only culture they had ever been taught was liberal, they made fun of liberalism’s sanctimonious platitudes by playing at being sexists and racists.

Not that they actually believed in sexism and racism. They were too nihilistic to believe in anything, really. They played with those evils simply because they could get a rise out of people that way. Decades earlier, they wouldn’t have mattered. In the internet age, they discovered they weren’t alone. And together they managed to push out into the world something vile.

But even the mainstream social media sites, from Facebook to Instagram, have something vile about them. Reddit, Snapchat, Twitter: The form hardly matters. All of them encourage something strange in human interaction. It’s a kind of personal impersonality, in which we find ourselves willing to say astonishingly unkind things to one another. And Facebook — to take only the most obvious of the near-monopolistic online companies — managed to commercialize the result. Think about that for a moment. Facebook provides nothing but a forum. It publishes nothing (which makes it safe from lawsuits under Section 230 of the Communications Decency Act). It manufactures nothing. It builds nothing. Facebook grew enormously powerful just … by existing.

Like many of the technological gatekeepers of our national conversation, it has also been increasingly willing to use that power to tilt the conversation toward its preferred sociopolitical positions. We’ve had near-monopolies before — all those trusts that Teddy Roosevelt thought needed breaking. But we’ve never had a trust like Facebook, exerting wide control over free speech itself.

And yet, what if the social media sites were neutral, as they ought to be? What if we broke up the technological giants, as we probably should? And somehow reinstated a national politeness that could keep online commentators from spewing bile at one another? And eliminated the techniques discovered for profiting from the private information we seem determined to give away?

Still, the neural effects of computerization are not what we thought they would be when we rushed into what is now 40 years of the Computer Age. Remember the old computer program Tetris — a game of falling blocks that you had to turn to make them fall into place? Remember how, even after you stopped playing at night and settled down to sleep, you could see the blocks still falling in your mind’s eye?

Well, that’s what everything about computerization has done to us. We cannot escape it. The beneficial effects are too great to lose, the technological advancement too rapid to legislate against, and the sociological changes too complete even to imagine going back. But let’s not pretend they came at no cost, as though computers were some unalloyed good. They changed us. And as artificial intelligence grows, as chips come to be implanted beneath the skin and the brain comes under digital manipulation, they will change us more.

This article was originally published in The American Spectator’s fall 2019 print magazine.
