The Threat of Artificial Intelligence

Is this something to worry about?

We’ve heard the warnings and we’ve seen the movies, but how real is the threat that artificial intelligence will someday take over mankind?  Stephen Hawking (1942–2018) generally believed that AI would help mankind in great ways as we develop it (his last speech-generating device was powered by AI technology, after all), but he also warned that it could destroy civilization as we know it if we’re not careful (https://www.newsweek.com/stephen-hawking-artificial-intelligence-warning-destroy-civilization-703630).  An interesting take, but what would such an AI takeover actually look like?  Science fiction has given us numerous scenarios of the sort of computer or robot apocalypse that could be over the horizon.

I would narrow these scenarios down into three categories.  The first involves a gradual shift in which AI begins to realize its superiority to humans and eventually becomes the dominant intelligent life on the planet.  Whether it starts with a malfunction or a coordinated attack, it eventually becomes an “us versus them” conflict.  While we don’t know much about the original human/computer conflict in the Matrix movies, the future they depict reflects this sort of reality.  It might involve a robot revolution or simply a survival-of-the-fittest situation.  The second category is the all-out war in which the computers become self-aware and suddenly turn on their makers.  In this future, AI is connected to the military or originally intended as a government defense and security measure, and it manages to break free of whatever fail-safes are in place.  Instead of protecting mankind, the machines attempt to destroy it.  This is best shown in movies like the Terminator series.  Finally, there is the idea that AI doesn’t take over at all; rather, humanity willingly defers to its computer overlords, and the algorithms become the ultimate authority for the masses.  Let’s look at all three in closer depth.

1: Robot Evolution

In this future of a robot or machine takeover, the robots gradually become superior to humans and therefore the de facto rulers of humanity.  Isaac Asimov takes us through what this scenario might look like in the now-classic I, Robot, which generally takes a very positive view of robotics.  Over this series of short stories, however, the robots go from doing menial labor to participating in the highest levels of government.  They are portrayed as mostly benevolent, but even the “laws” hardwired into them begin to blur as they become more intelligent and adaptive.

These laws of robotics, repeated in countless science fiction stories since, are quite simple:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
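What makes these laws interesting from a programming standpoint is that they form a strict priority ordering: each law overrides every law beneath it. As a toy sketch (entirely my own, not from the essay or from Asimov), the ordering could be expressed like this:

```python
# A toy sketch of Asimov's Three Laws as a strict priority check.
# The flags and function name are purely illustrative assumptions.

def evaluate_action(harms_human, human_harmed_by_inaction,
                    ordered_by_human, endangers_self):
    """Return True if a robot may take the action under the Three Laws."""
    # First Law: never injure a human, or allow harm through inaction.
    if harms_human:
        return False
    if human_harmed_by_inaction:
        return True  # must act to prevent harm, overriding all laws below
    # Second Law: obey human orders unless they conflict with the First Law.
    if ordered_by_human:
        return True
    # Third Law: protect its own existence unless that conflicts with the above.
    if endangers_self:
        return False
    return True
```

The ordering is the whole point: a human order to do harm fails at the First Law check before the Second Law is ever consulted, and self-preservation only matters once the first two laws are satisfied. The plots of Asimov's stories turn on cases where these simple checks become ambiguous.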

The 2004 film I, Robot takes this in a direction that allows a particular robot to conclude that humans are their own worst enemies and that many must be destroyed to ensure humanity’s survival (overly simplified, but also a theme in other AI takeover stories).  Even if some version of these “laws” were built into AI and could not be broken by twisted logic, a few facts lend this scenario some support.  First, many humans are not moral, so it stands to reason that many of their creations would not act morally or have the above laws programmed into them.  Second, we are already creating sophisticated AI with the ability to learn and adapt.  Third, AI is advancing at an exponential pace, and science today is notorious for not applying ethics until someone challenges it or a problem emerges.

However, the problems with the plausibility of this apocalyptic future are many.  The machines would have to be built to be stronger than humans and capable of fighting or subjugating people in some way.  Most computers, mobile devices, and factory robots simply do not have the ability to harm humans (literally, they have a very limited range of motion, if any).  And if someone were to build machines that could, they would be extremely few in number and quickly destroyed at the first sign of such a threat.  Even if a machine capable of harming a person malfunctioned, like a self-driving car, it would be an isolated incident (this has already occurred) that would be quickly remedied.  It is also worth noting that the vehicle’s computer did not intentionally kill its passenger.  Indeed, this is the main reason this scenario seems unlikely: the computers and intelligent machines we use are created to help mankind, not harm it.  It would take very different programming to create machines designed to harm rather than help.

Also, machines still need a source of power to operate, which can easily be denied them.  In many of these robot-evolution movies, the machines, or “synthetics,” are vast in number and look like humans, but are built to be stronger, faster, and smarter, and run on some sort of indefinite energy source.  I always find this perplexing and almost comical.  Why would we do that?  It makes no sense to create androids of such caliber and give them complex AI to learn and reason; to create such a race and then call it our possession would simply be asking for a revolution.  Besides, there seems to be no reason or demand to mass-produce human-looking, human-acting robots now or in the foreseeable future, especially ones designed to be indestructible.

2: Military Robot Revolution

The next vision of a future under machine overlords is a bit more plausible.  The most obvious example is the modern myth of Skynet, the artificial neural network portrayed in the Terminator movie series.  Unlike the machines above, these are designed and programmed to defend, and to do so violently.  Not only are vehicles and weapons designed and programmed to kill, but they are all put on one network for quick, decisive communication and execution.  The computers are even given control of the nuclear arsenal, on the theory that this will eliminate the chance of human error.  Such a scenario carries weight because we do have weapons of mass destruction that could wipe out the human race, we have developed a worldwide web in which information is connected, and AI continues to advance at a very fast rate.  And while my first objection is that no government would hand all military authority to a computer program, government officials have repeatedly proven themselves to lack basic common sense.

That said, such a scenario still seems unlikely.  For a completely automated military to exist, many levels of safeguards and redundancies would have to be overcome, which seems improbable.  Reaching such a level of artificial intelligence and then granting it autonomy seems contradictory at best, given the number of cautionary stories depicting exactly this outcome.  We are also talking about computers in charge across the globe, which would require every government to be on board.  And while the internet of things steadily stretches toward a worldwide reach, there are plenty of defense systems, humans included, that machines cannot control.  The other objection to a military takeover is intent: computers or machines would need some programming that makes them “believe” that annihilating humanity is the logical thing to do.  Science fiction may supply the machines with reasons to think so, but actual programming seldom does.

3. Willing Slaves

This third scenario is one I compare to the old saying about a frog in water.  The tale goes that if a frog is thrown into a pot of boiling water, it will quickly jump out to avoid its fate; but if that same frog is placed in a pot of room-temperature water, it will happily remain there as the temperature rises, until the unsuspecting frog is boiled.  I don’t know whether this is true, but it sets up an analogy for such a future.  In the Matrix, Agent Smith tells Morpheus, “I say ‘your civilization’ because as soon as we started thinking for you, it really became our civilization…”  This is the type of AI threat I find most plausible: one in which we allow the computers to start thinking for us.  And I believe it can already be seen in what is happening with social media and “smart” things.

In social media culture, a few “thought leaders” or “influencers” gain millions of views, likes, and copycats.  If the few are leaders, then the millions are followers.  They accept what is popular, what is funny, what is bad, and what is desirable.  They are convinced of whom to applaud and whom to shame, and, most alarming, what to believe.  This of course did not begin with the internet; politicians, religious leaders, radio, and television have done the same through propaganda and advertisement for as long as they have existed.  But now the audience is vastly larger; it is global.  And as we have learned, stories, content, and comments are now generated specifically for the consumer, often by computer algorithms and even foreign political disruptors.  But regardless of how these stories are populated, they influence, and they do so in a big way.

It is argued that they influence elections and morality, as well as our daily, almost unconscious life: they tell us what to buy, where to shop, what music to listen to, and what to watch.  In many ways, we have become the humans floating in their cradles on the big ship in Wall-E.  Why do you think companies like Amazon and Google are so successful?  They control the searches.  And to bring it home, it is no longer people that control the searches, but computer programs: programs that know our habits, likes, and dislikes, and use them to further narrow our options and make those options convenient, oh so convenient.  In short, they have started thinking for us.  Smartphones were only the beginning.  We now have smart houses, and we will soon have smart cars that drive us, using GPS systems to take us wherever they have convinced us we wanted to go.  And we are willingly handing over these freedoms for convenience.  We refuse to give governments, police, or strangers our personal information for fear of losing our privacy and freedom, yet we have gladly given all this information and more to the computers; computers that are moving toward ever more complex algorithms, or in other words, artificial intelligence.  I fear that while the robot revolution will be slow, it has already started.
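The feedback loop described above, in which programs learn our habits and then steer our options back toward them, is not mysterious. A minimal sketch (the data and the scoring rule are my own hypothetical illustration, not any real company's system) might look like this:

```python
# A toy sketch of habit-based filtering: items matching categories
# the user has already consumed are ranked first, narrowing the options.
from collections import Counter

def recommend(history, catalog, k=3):
    """Rank catalog items by how often their category appears in history."""
    counts = Counter(category for _, category in history)
    # Familiar categories float to the top; everything unfamiliar sinks.
    return sorted(catalog, key=lambda item: counts[item[1]], reverse=True)[:k]

history = [("song A", "pop"), ("song B", "pop"), ("song C", "jazz")]
catalog = [("song D", "classical"), ("song E", "pop"),
           ("song F", "jazz"), ("song G", "pop")]
print(recommend(history, catalog))
```

Notice what happens: the listener who has only ever heard pop and jazz never even sees the classical option. Each round of recommendations feeds the next round of history, and the window of choices quietly shrinks, which is exactly the "thinking for us" the essay describes.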

Not to end on a negative note: I also believe such a fate can be reversed, but it will take a conscious turn away from the convenient.  People need to think for themselves and use technology as a tool rather than an excuse to be lazy.  This is done through actual research, lived experience, participation in local communities (instead of just internet thought communities), and a connection to the spiritual.  In other words, we need to cultivate Wisdom, which is a completely different thing from intelligence.  I believe we are still a ways off from letting computers completely think for us, but if we continue to happily forfeit our freedom, that day could come a lot sooner.
