The Department of Justice recently sued Google for allegedly monopolizing the market for search engines. The Department’s complaint alleges that Google took numerous actions well before 2010 that formed part of the claimed antitrust violations.
I have no comment about the merits. What I do want to call attention to, however, are the dates: a lawsuit beginning in 2020 to try to correct the market consequences of actions that began more than 10 years ago.
The courts could take five years or more to resolve the issues and, by then, everything could have changed. The U.S. sued IBM in 1969 for supposedly monopolizing the market for computers.
When the U.S. finally dropped the case 13 years later in 1982, IBM’s market share had fallen to only 37 percent.
Even if the U.S. wins its case against Google, it could be years from now before any remedy is implemented, and there could be significant challenges to creating an effective remedy at that time. Our traditional legal tools for regulating the economy can be just too darn slow to keep up with a rapidly changing economy. By the time they finally get there, the situation often has changed. While the law dawdles, the economy moves on.
The problem that traditional regulatory tools work at a snail’s pace is not limited to antitrust law. A few years ago, I wondered why law was more successful in addressing environmental problems than it was in regulating energy supplies, so I taught energy law to try to figure it out. I concluded that environmental problems generally stand still or get worse over a decade or more, whereas energy markets can change in months, and traditional regulation just can’t keep pace:
[P]olicies designed to manage energy supply, regardless of political outlook, lag as much as a decade or two behind the times. For example, Nixon’s 1971 oil price freeze lasted until 1981; Eisenhower’s 1959 oil import quotas lasted until 1973. In both instances, government policy did a lot of unnecessary harm because the energy supply situation changed much faster than government policies do.
This led me to “wonder whether government policy will inevitably be a day late and a dollar short when it tries to manage future energy sources.”
In retrospect, I was too kind to environmental law. I recently assessed the successes and failures of EPA over 50 years of trying to clean up the air, a task that the drafters of the 1970 Clean Air Act originally imagined could be accomplished in five. I concluded that EPA sometimes takes 20-30 years to accomplish relatively simple objectives such as getting the lead out of gasoline. The long delays are caused by our standard regulatory techniques such as notice-and-comment rulemaking and judicial review.
Lawyers call these long delays regulatory lag. The problem is commonplace in setting electric utility rates: by the time a rate proceeding ends, the situation has changed, and these delays in setting rates cause “utility managers to act inefficiently and increase[s] consumer costs.”
Friedrich von Hayek, who won the Nobel Prize for economics in 1974, famously wrote that markets were inherently more efficient than central planning, because they assimilate more information than any human mind or bureaucracy could ever hope to understand.
What is not generally appreciated, however, is the corollary that traditional regulation often can’t keep up with rapidly changing markets.
That is not to say that markets are perfect. There can be discrete market failures that lead to arguments that the law should step in to try to correct problems. But the slowness of legal techniques for regulating markets counsels that we should be careful not to do more harm than good by winning the last war when situations have changed.
Recently, however, proponents of the ever-expanding regulatory state have adopted new techniques that may enable government to regulate much more quickly and efficiently. These artificial intelligence techniques harness the power of computers to make government decisions in a fraction of the time required by traditional regulatory techniques. The Administrative Conference of the United States, an independent federal agency that convenes experts to recommend improvements to administrative process and procedure, is currently studying how federal regulatory agencies can “take advantage of these new tools in ways consistent with due process and other legal norms.”
But the revolution that some scholars call “regulating by robot” is already underway.
Today air pollution limits on factories are not really set by human beings, but by EPA’s standardized computer models, and new chemicals are approved based on an EPA computer program, the Toxicity Estimation Software Tool (TEST), which predicts whether chemicals will be toxic.
The SEC targets companies for investigation using computers, and other agencies process claims using algorithms rather than human judgment.
Whether computers and artificial intelligence will expand the regulatory state by giving government the ability to regulate more cheaply and quickly — and effectively — remains to be seen. One of the leaders in the field, University of Pennsylvania law school professor Cary Coglianese, argues that expanded use of artificial intelligence in the regulatory process holds promise to correct some of the failings of traditional decision-making by fallible human beings, but it also poses new challenges.
Not the least of these challenges is that judges tend to suffer from automation bias, which is the tendency of non-experts to be overly deferential to decisions made by computers.
A sad example is the case of Hayward v. Department of Labor, which involved whether the widow of a worker who died of cancer after exposure to radiation in a Department of Energy facility was entitled to compensation.
The Fifth Circuit Court of Appeals upheld the agency’s denial of compensation based on a computer program that did not consider the fact that the dead husband’s particular type of cancer was exceedingly rare and therefore was unlikely to have resulted from anything other than the exposure to radiation. The judges did not even deign to engage that argument on its merits; they merely said the issue was “on the frontiers of science,” so they were going to defer to whatever the computer had decided.
After visiting the Soviet Union in 1919, Lincoln Steffens reportedly said, “I have seen the future and it works.”
I too have seen the future, and it is scary.