What do you fear the most? When I asked people at Woodstock, the most common replies were spiders, heights, closed spaces, monkeys, and loneliness. Although these sound like things that would give me nightmares as well, the one thing that really sends shivers down my spine is AI (artificial intelligence). Yes, AI.
What is AI? The replies Siri gives about the time or the weather, the results of a Google search, and even the fake news that has been eroding our democracies are all powered by AI. Now, you might think I am crazy to find these things threatening; proponents believe AIs are beneficial and useful.
Now, I can’t completely disagree with those people, because AIs do help us work more efficiently. If not for the biometric AIs at Delhi airport, it would take hours for each person to clear immigration. AIs can also make discoveries that we can’t. In science and medicine, for example, AIs have found new uses for existing drugs and spotted cancer cells at earlier stages. They can refine existing technology, too: thanks to AI, Google and Tesla can power self-driving cars and are developing dedicated AI chips for them. It all sounds fascinating and remarkable that AIs are improving our work.
But what will be the point of humans in these fields? Nothing. “Job loss is probably the biggest worry,” said Alan Bundy, the University of Edinburgh’s professor of informatics. Before I began researching AIs, I wasn’t so worried about unemployment, but the alarming figure predicted by Gartner, a global research firm — around 1.8 million jobs lost — calls for concern. Gartner does assure us that AI will create 500,000 more jobs than it destroys. But what will these new jobs be? Jobs in which we produce and control AIs. However, AIs are capable of replicating themselves, which isn’t surprising, as AIs are predicted to become exponentially smarter than humans in the coming years. This means that, over time, fewer and fewer people will be needed to produce AIs, leaving many unemployed.
Yet the jobs in which people manage AIs will remain, and those jobs require skills — skills we should be learning in school. At Woodstock and most other schools, we unfortunately don’t address or learn about AIs in class, so we can’t acquire the skills to deal with them later on. And if we as a generation remain in the dark about AIs, we are simply handing power over to them. As James Barrat, author of “Our Final Invention: Artificial Intelligence and the End of the Human Era,” said, humans “steer the future” because “we are the smartest,” and so if “there is something smarter than us on the planet, it will rule over us.”
Or, even worse, someone with complete possession of this technology “might cause the extinction of intelligent life on Earth,” explains Nick Bostrom, an Oxford University philosopher who studies superintelligence risks, in one of his many horrifying doomsday scenarios. In other words, we have created something that can have devastating repercussions.
Without taking the threats of AI technology into account, the United States signed an $885 million contract to install AI systems in the Pentagon. These systems will help oversee the second most powerful nuclear arsenal in the world; if a malfunction occurs, we could all be doomed. Do you still think we invented something beneficial and ‘useful’?
AIs have also ignited an arms race between countries. In response to the Pentagon contract, China has poured money into an artificially intelligent army, pushing the U.S. toward the back of the AI race.
I fear we have created a monster that is smarter and stronger than us, and leading intellectuals in this field feel the same way. Elon Musk has warned that AIs are “more dangerous than nukes,” and Stephen Hawking wrote that AI “might be the last” event in our history. Bill Gates, the co-founder of Microsoft, also thinks that “humans should be worried about the threat posed by Artificial Intelligence.”
And the list of experts in science and technology who share this concern goes on. Their biggest worry is that AIs can make decisions like us, but they cannot show compassion and morality like us.
When sick, injured, or diagnosed with a terminal disease, we want doctors and nurses to offer us the best possible care — both clinically and emotionally. One can, of course, argue that robot doctors treat patients with clinical precision, unlike human doctors who make the occasional mistake, and that is a valid point. But Bostrom asks: what if a robot doctor malfunctions and decides that the most efficient way “to obliterate cancer is to exterminate humans who are genetically prone to the disease,” instead of using its intelligence to save lives by treating cancer cells? And if these robot doctors decide to do the same for every disease, there will be nothing left of the human race. Is this really what we want?
And that’s not all: the support of a human doctor guides the families of patients in choosing a treatment and eases their worries. Can a robot doctor guide them with empathy? Its unhuman-like manner and artificial voice would leave patients feeling more in the dark, while the warm, understanding voice of a human doctor reassures them in their darkest times.
We want to hear that compassionate voice that makes us feel cared for when we are sick at the hospital or the health center. And this compassion we crave is not something robots can give us, because they are ‘artificial’ and phony; it is something only humans can give, because we have real emotions.
So, Woodstockers, it is not spiders, closed spaces, heights, monkeys, or loneliness that should be your biggest fear. It should be the inevitable reality ahead of us: one where jobs are lost, where armies of super robots are built, and where phony emotions take over real ones. Artificial intelligence will dehumanize us.
Edited by William Raggett