A.I. Anxiety

Gary B. Swanson


Economists are saying that artificial intelligence (A.I.) is bringing the world to the cusp of a fourth industrial revolution. And they are concerned. There are, of course, obvious and immediate benefits that have already resulted from the development of A.I. But the World Economic Forum estimates that its use in the workplace will eliminate as many as five million jobs in the economies of the developed world over the next five years. Interestingly, as smart machines assume the more routine roles in the workplace, two-thirds of these reductions will impact the so-called white-collar sector.1

And economists are not alone in their uneasiness. Philosophers, scientists, and even leaders in the field of technology, among them Stephen Hawking and Bill Gates, have expressed concern that machines may ultimately pose a threat to the future of humanity.

As dependence on technology grows, the question of control has arisen. Today, machines pilot our airplanes and will very soon chauffeur our cars. The advertisements that appear in our Internet browsers and social media are generated by computer algorithms based on our own personal online behavior. Facial-recognition programs can identify us in a crowd.

The theme of the potential danger of technology run amok isn’t new in popular culture. One of the earliest classic examples in film is the computer HAL 9000 in 2001: A Space Odyssey. Critics have pointed out that the conflict in this 1968 film between HAL and the human astronauts aboard a spacecraft centers on a desperate struggle toward a singularity in which either technology or humankind will emerge into the next phase in the evolution of earthly sentient beings.

Thirty years after Space Odyssey, another film—in fact, a trilogy of films—carried the theme of competition between humankind and machine to an even grimmer conclusion. In The Matrix, humanity and A.I. come literally face to face. “Every mammal on this planet,” says the artificial intelligence, “instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area, and you multiply and multiply until every natural resource is consumed. The only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet. You are a plague. We are the cure.”

In 1963, British mathematician and cryptologist I. J. Good warned that “an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.”2 Is humanity fully in control of its own inventions?

Technological advancement has since carried such thinking to even more unsettling ideas about A.I. As its capabilities reach ever more sophisticated levels, troubling questions proliferate. And these questions aren’t being posed merely by producers of popular culture.

Swedish philosopher Nick Bostrom has given A.I. some serious thought and raised some disconcerting challenges. Though he insists that they are not predictions, his thought experiments urge more careful consideration in the development of the technology:

● If a superintelligence were given the task of making paper clips, what would prevent it from eventually concluding that all humans should be made into paper clips?

● If A.I. were designed with the “prime directive” never to harm humans, what if it decided that the best way never to harm humans would be to prevent them from being born in the first place?

● If ultra-intelligent machines were programmed always to make people smile, no matter what, what if they decided that the best way to do this would be to install electrodes in all humans?

Bostrom’s imagination can get even eerier. He offers the possibility that a superintelligence could in theory have already superseded its human creators and placed them in an artificial human existence. “I’m not sure,” he says, “that I’m not already in a machine!”3

Granted, these scenarios sound like something from the farther reaches of science fiction. But judging from the questions that still persist around a 50-year-old science-fiction film, popular culture can sometimes foretell real issues. Today an emerging “A.I. anxiety” has begun to affect even those who are creating these amazing technological advances.

And here is where the dilemma that humankind currently faces with its technology begins to sound like the one the original Creator of earthly intelligences must have faced. “Throw in elements of autonomy,” says one writer in describing the creation of A.I., “and things can go wrong quickly and disastrously.”4 Throw in elements of free choice in the creation of humankind, and though it isn’t clear whether it went wrong quickly, it certainly went wrong disastrously.

God didn’t off-handedly “throw in” elements of free choice in the creation of humankind. Freedom of will wasn’t merely included as a kind of accessory in the nature of humanity. It was central to its character because otherwise there could be no love between creature and Creator.

“God might have created man,” wrote Ellen G. White, “without the power to transgress His law; He might have withheld the hand of Adam from touching the forbidden fruit; but in that case man would have been, not a free moral agent, but a mere automaton. Without freedom of choice, his obedience would not have been voluntary, but forced. There could have been no development of character. Such a course would have been contrary to God’s plan in dealing with the inhabitants of other worlds. It would have been unworthy of man as an intelligent being, and would have sustained Satan’s charge of God’s arbitrary rule.”5

There were no philosophers at the creation of humankind to express A.I. anxiety about the potential danger of such beings. There were no scientists to voice concern that these new beings might someday attempt to outsmart or supersede their Creator.

Indeed, this is what Lucifer—an ultra-smart creature—intended to do. And he succeeded in passing along to other creatures the desire that “‘your eyes will be opened, and you will be like God’” (Gen. 3:5, NIV).6

The current human concern over the possibly fearful advances of A.I., of course, is not that it may become the equal of humanity but that it will succeed humanity. And this is frightfully similar in principle to what was in the back of Lucifer’s mind: to succeed his Creator. Millennia after the origin of sin, “the devil took [Jesus] to a very high mountain and showed him all the kingdoms of the world and their splendor. ‘All this I will give you,’ he said, ‘if you will bow down and worship me’” (Matt. 4:8, 9).

It is likely that Satan had more in mind than being merely like God; he fully intended to surpass God. How else could he have hoped to be worshiped? In what is left of the future of humankind until the return of the Creator to rescue this sin-twisted planet from its own inventions—technological or spiritual—there is only the final answer from the Creator Himself: “‘Away from me, Satan! For it is written: “Worship the Lord your God, and serve him only”’” (vs. 10).

 

NOTES AND REFERENCES

1. http://www3.weforum.org/docs/WEF_FOJ_Executive_Summary_Jobs.pdf. Websites in the references were accessed in January and February 2016.
2. http://www.acikistihbarat.com/dosyalar/artificial-intelligence-first-paper-on-intelligence-explosion-by-good-1964-acikistihbarat.pdf. 
3. http://theweek.com/articles/599293/why-scientists-are-worried-about-intelligent-machines.
4. Ibid.
5. Patriarchs and Prophets, p. 49.
6. All Scripture references in this editorial are quoted from the New International Version of the Bible.