Artificial intelligence (AI) is not only the ingredient technology we see in nearly every new launch; it's also an area that is stumping policymakers around the globe. When I returned from Davos, where I had the opportunity to engage with some of the key players, I couldn't help but reflect on a sobering reality that recently surfaced for me. In December, I helped represent the U.S. in Montreal at the G7 conference on AI. Academics, privacy advocates, and government policymakers from six other liberty-loving allies agreed that every product with AI must be cyber-risk-free, must protect privacy, must have transparent algorithms, must promote diversity and inclusion, and must be usable by 100 percent of the population.
These are laudable goals, but are these absolutes realistic? Or should their achievement be balanced against saving lives, protecting children, eliminating hunger and disease, reducing suffering and raising the world's standard of living?
We are at a crossroads when it comes to innovation and technology — yet no one at this G7 conference seemed to realize it. AI may be the most important advancement of the decade. It’s a foundational technology — one that will transform nearly every industry. Through the combined power of algorithms and big data, AI can bring us to new heights of efficiency, transforming education, finance and medicine. It will make us safer, healthier and wealthier — one source estimates that by 2030 it will add $3.7 trillion to U.S. GDP.
But the global competition is fierce.
China has launched an ambitious plan to lead the world in AI within the decade and is already building a $2 billion AI research park in Beijing to spearhead those efforts. Moreover, China has millions of trained engineers, mathematicians, data analysts and software experts. It also gathers vast amounts of data from its enormous population, and data is the fuel for AI.
Rather than brainstorming ways to move forward and remain competitive in the global race for AI, the academic, government and NGO speakers at the G7 seemed more preoccupied with restraining AI through new laws. The assumption seemed to be that AI was something potentially dangerous, something that had to be totally private and completely transparent — with a ready explanation of every algorithm clearly available — before it could be allowed to reach the market.
Even at this early stage, AI is being used in many ways that improve society and individual people's lives. It is helping farmers save water and reduce chemical use on their crops. It is helping medical professionals make quicker and more accurate diagnoses. It is being used to fly the planes we rely on to get from city to city.
We need to be careful of broad restrictions and mandates choking this nascent technology when over 90 percent of the use cases have no risk of bias, prejudice, or harm to our noble principles of equal treatment. We need to consider how the technology is being used and weigh likely benefits against imaginable but realistic harms. Different rules may be needed for AI use in healthcare than for AI use in precision agriculture.
Turning every tech breakthrough into a social issue is a recipe for our competitive decline.
I’m aware — and agree — that the systems we shape today will impact the security and diversity of the workforce of tomorrow. Bias is something we must take seriously and combat. And we need effective cybersecurity protections to preserve both personal privacy and economic growth. But throttling emerging technologies in their early stages is not the way to go.
In most AI use cases, such as factory automation, these issues would be irrelevant. By insisting that all AI products be perfectly equitable and completely secure, we will forfeit gains in areas where these issues are addressed differently, sidestepped, or subject to a more authoritarian approach, such as in China.
I found the experience frustrating. I kept thinking we've been down this road before; it was a day of overwhelming déjà vu.
About ten years ago, well-meaning advocates for people with disabilities tried to get a law passed that would have required every device that connected to the internet to be responsive to every kind of disability. We were just starting out with smartphones, tablets and high-performance laptops. We were the sole group to vigorously oppose this proposal, arguing that the government was trying to design smartphones and violate the laws of physics. In a congressional hearing on the subject, then-Telecommunications Subcommittee Chairman Ed Markey pitted me against a disabled veteran and asked this hero if he would trust the free market to create features for disability access. It was a stunning moment for me.
Fortunately for everyone, especially the disabled community, the original proposal was vastly modified to inject reasonableness. Had the original proposal passed, most of the hundreds of great apps and features for smartphones and tablets that empower the disabled community would never have been conceived or available today. There are scores of examples — such as Aira, a tech company that connects people with vision impairments to trained guides, or TechUrElders, a chatbot that provides caregivers helpful digital tech and education tools. Today, we work closely with the disabled community and celebrate the many innovations entrepreneurs keep creating which change lives for people with disabilities. CTA, including the CTA Foundation, engages with industry and policymakers about the needs of the disability community, helping to create more opportunities for the industry to help.
Every day I wake up, I am grateful that proposal did not pass. But I fear we will lose great future advances in AI for well-meaning reasons.
Time is short, and tech moves fast. We must give technologies such as AI an opportunity to breathe and develop before we tie them down with burdensome rules. Balance is key here, as it always has been. The reason the U.S. has led the world in innovation for the past several decades is, in part, that we've developed a light-touch regulatory policy that protects consumers and encourages innovation.
This, in turn, has advanced our democracy.
The platforms, apps and websites that make our daily lives more efficient and engaging promote the free flow of ideas and the free exchange of goods and services more efficiently than ever before. Surrendering our approach now means surrendering our prosperity, and potentially our health and longevity, in the years to come.
By working together and creating reasonable guardrails — innovators and regulators, private sector and public sector, creators and consumers — we can create a world where freedom, not fear, rules the day.
Gary Shapiro is president and CEO of the Consumer Technology Association (CTA)™, the U.S. trade association representing more than 2,200 consumer technology companies, and a New York Times best-selling author. His newest book, Ninja Future: Secrets to Success in the New World of Innovation, is available now. His views are his own.