The headline on the first page of the New York Times Sunday Opinion page in early July could not have been more stark or more menacing: “The True Threat of Artificial Intelligence: Technology Forged by Private Markets Won’t Solve the World’s Problems. It Will Only Amplify Them.”
If that statement doesn’t get your attention, it’s likely that nothing will.
It would be difficult to identify a technology that has been talked and written about more than those under the umbrella of artificial intelligence, or AI.
What is AI?
The term AI is generally used to include big data, artificial intelligence and machine learning, a trifecta of technologies that impact our lives today, and promise to have an even more profound effect in the future.
Sadly, there is vastly more heat than light when AI technologies are discussed in the media.
The article mentioned above is a prime example. It notes, for example: “More than 350 technology executives, researchers and academics signed a statement warning of the existential dangers of artificial intelligence. They said that mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
I suspect that we all agree that the phrase “risk of extinction” is pretty strong stuff.
Another article, “Robot Overlords? Maybe Not,” featured Alex Garland, director of the movie Ex Machina, who talked about artificial intelligence and quoted several tech industry leaders.
The theoretical physicist Stephen Hawking told us that “the development of full artificial intelligence could spell the end of the human race.”
Elon Musk: Dangers of AI
Elon Musk, the chief executive of Tesla, told us that A.I. was “potentially more dangerous than nukes.”
Steve Wozniak, a co-founder of Apple, told us that “computers are going to take over from humans” and that “the future is scary and very bad for people.”
More strong stuff. More recently, the advent of AI-powered technologies such as ChatGPT, Bard and Bing has sent concerns about AI into overdrive.
Is it any wonder that the general public has become skeptical – or even fearful – of AI and why many are calling for a complete halt to AI development?
We (my fellow Coronado colleague Sam J. Tangredi and I) think that these fears are overwrought.
We base this opinion on years of research reflected in many articles in professional journals, numerous papers presented at professional conferences, and two books, “AI at War: How Big Data, Artificial Intelligence and Machine Learning Are Changing Naval Warfare” and the forthcoming “Algorithms of Armageddon: The Impact of Artificial Intelligence on Future Wars” (both from the U.S. Naval Institute Press).
There is a saying attributed to H. L. Mencken: “For every complex problem there is an answer that is clear, simple and wrong.” In spite of this caution, there is a straightforward way to unpack fears about artificial intelligence: recognize that the term AI is far too broad and encompasses too much.
Narrow or weak AI
The AI systems that exist today are often referred to as “narrow AI” or sometimes “weak AI.”
These are algorithms that suggest pithy responses to an email, finish your sentences as you type, provide buying suggestions on various websites, defeat champions at chess or on game shows, and perform many other tasks in our daily lives, often without our awareness.
This narrow AI is designed to perform only one very specific function as if it were a human.
Many technology advocates – and increasingly the general public – believe that narrow AI can and will evolve into “artificial general intelligence” (AGI).
As the New York Times article points out, AGI doesn’t exist yet, but some fear it could, which would mean that technologists had built systems that are generally smarter than humans.
Hype has outstripped reality
And therein lies the rub. The hype has outstripped reality. We would have to convince ourselves that humans will have both the means (at present, highly doubtful) and the desire (again, why would they?) to create machines that will trump our human abilities, take over the world and end humanity because we humans are no longer necessary.
These dystopian scenarios are the stuff of science fiction, but for some reason many take the leap that if computers can beat professional players at Texas Hold ’Em poker, they will somehow become too powerful to control.
While the popular saying warns “never say never,” here is where our money is: We humans are a smart lot, and we won’t decide to build computers that even approach human consciousness.
Finally, it is worth recalling that, because of the marvelous benefits cutting-edge technology has bestowed on humanity, we do have a tendency to buy in to technology hype.
To provide just one small example, a decade ago, legions of tech experts, venture capitalists, government officials and many others predicted that driverless cars would be ubiquitous on our highways today. That hasn’t happened, and as deaths caused by driverless cars mount, the hype-induced vision of a driverless car future has dissipated.
We can manage AI. It will not manage us.
Capt. George Galdorisi (USN, retired) is a Coronado resident. He is a career naval aviator whose 30 years of active duty service included four command tours and five years as a carrier strike group chief of staff. He is the author of 15 books, including four New York Times best-sellers. The views presented are those of the author and do not reflect the views of the Department of the Navy or the Department of Defense.