The last 100 years have seen extraordinary leaps in not only science and technology, but also in the fulfillment of Bible prophecy. Thousands of years ago, God revealed to the prophet Daniel that the culmination of end-time events would be accompanied by an increase in human knowledge:
"Daniel, shut up the words, and seal up the book until the time of the end; many shall run to and fro, and knowledge shall increase" (Daniel 12:4).
While artificial intelligence (AI) doomsday scenarios have been a staple of science fiction for decades now, recent improvements in AI are bringing it closer to being an integral part of our daily lives, from self-driving cars to digital assistants. Thankfully, none of these yet resembles the free-thinking, self-replicating, indiscriminate killing machines we see in the movies! However, many big names in science and technology are warning of the approaching "singularity"—the theoretical tipping point at which an artificial intelligence emerges that far exceeds human intelligence, inevitably developing an unpredictable mind of its own.
Is artificial intelligence progressing into dangerous territory? And does the Word of God have anything to say about the human ability to create a disastrously destructive AI that could end human existence?
From brute-force calculations to adaptive learning
Keeping up with the headlines can be unsettling as AIs continue to outperform humans at more and more tasks. In 1997, IBM's Deep Blue chess-playing AI made history by edging out the world chess champion, Garry Kasparov, in an intense six-game series. This may seem less remarkable once you consider how it was done. After all, chess is a finite game, and Deep Blue and other chess AIs have the advantage of performing billions of calculations per second, far more than any human player could ever consider. The programmers played to this strength, combining a staggering number of computations with some simple guidelines about what makes a "good" chess move, and that was enough to overcome the skill and intuition of the best human chess player in the world.
However, Deep Blue lacked a defining characteristic of intelligence: the ability to learn from its mistakes and modify its behavior. It became better at chess over time, good enough to beat Kasparov after losing to him in their epic first series of matches in 1996, but only because its creators kept updating the information available to it. Deep Blue had no ability to learn to play better chess on its own. Once Kasparov had beaten it the first time, he could in principle have replayed the exact same moves to win over and over again, unless its programmers updated it after each game.
Until recently, many speculated that computers might never be able to beat grandmasters at Go, an ancient board game of Chinese origin. The number of possible chess positions after each player has made three moves is only around 121 million. The number of possible Go scenarios after three moves is so large that it is difficult to even describe, and the game as a whole has vastly more possible positions than there are atoms in the universe. The brute-force processing approach Deep Blue used for chess is still far out of reach for Go. Nevertheless, history was made this past March, when Google's AlphaGo AI thoroughly trounced renowned Go grandmaster Lee Sedol, winning four games in a five-game series. The secret to AlphaGo's success is that its creators took a fundamentally different approach than Deep Blue's. Striving for something that behaved more like human intelligence, they created an artificial neural network capable of learning from the matches it plays, and its unexpected success is proof of the potential that lies in this idea.
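The contrast between the two approaches can be sketched in a few lines of Python. This is purely a toy illustration, not AlphaGo's actual algorithm (which involves deep neural networks and enormous amounts of self-play): here a single made-up parameter, `weight`, stands in for an evaluator's judgment, and a hypothetical `TARGET` stands in for ideal play. Where Deep Blue had to wait for its programmers, this program nudges its own parameter after every simulated game, improving with experience.

```python
# Toy sketch of adaptive learning (illustrative only; TARGET and the
# update rule are invented for this example, not taken from AlphaGo).

TARGET = 0.8  # the "correct" valuation, unknown to the learner

def self_improve(games, learning_rate=0.1):
    """Play a number of rounds, adjusting the evaluator after each one."""
    weight = 0.0   # start as a deliberately bad evaluator
    errors = []
    for _ in range(games):
        error = TARGET - weight          # feedback from the game's outcome
        errors.append(abs(error))
        weight += learning_rate * error  # the learning step Deep Blue lacked
    return weight, errors
```

Run `self_improve(100)` and the recorded errors shrink toward zero with every game: the program gets better on its own, without anyone rewriting it between matches. That self-correction, scaled up enormously, is the idea behind learning systems like AlphaGo.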
From innocent board games to fierce aerial combat
If all of this still sounds a bit innocent, consider a more chilling example. Last month, a team of researchers published results in the Journal of Defense Management on an AI called ALPHA (not related to Google's AlphaGo) designed to control a fighter jet in a combat simulator. In this experiment, a retired U.S. Air Force colonel with substantial experience as a pilot battled the AI in the simulator. He was outmaneuvered and shot down in every single simulation, unable to score a single kill. According to Colonel Gene Lee, ALPHA is "the most aggressive, responsive, dynamic and credible AI I've seen to date." And this was not done by some supercomputer: the clever AI runs on a tiny, commercially available computer that costs a mere $35. As with AlphaGo, ALPHA relies on an adaptive machine-learning algorithm designed to improve itself with experience.
When man discovered gunpowder, he used it to create guns, cannons and explosives. When man demystified the atomic structure of matter, he used it to create devastating nuclear weapons. As understanding of biological and chemical processes has progressed, so has the gruesome array of associated weaponry. This has been the pattern of mankind's behavior for a very long time. Given the destructive potential of AI, it is foolish to think that this technology will not be put to the same use. It is only a matter of time until the world's fighter jets and drones carry something like ALPHA onto the battlefield.
Realistically, adaptive learning software designed only to become better at shooting down planes is not going to suddenly become self-aware, learn to read and understand Mandarin, and proceed from there to global domination; it just doesn't work that way. However, these algorithms all represent incremental progress toward the greater goal of creating an artificial intelligence with the capacity not only to learn but also to think, which involves several other complex functions. Researchers across the world are collaborating toward this very goal. At the Tower of Babel, God commented on what can happen when people work together unhindered:
"Indeed the people are one and have one language…now nothing that they propose to do will be withheld from them" (Genesis 11:6).
Our communications and transportation technologies have produced a globalization that returns us to the Tower of Babel scenario, in which even the nightmarish AIs of science fiction may eventually become possible.
The ultimate question: How bad will it get?
In Matthew 24:21-22, Jesus warned of a "great tribulation" capable of ending all human life. Students of Bible prophecy took note of this when the nuclear bomb was horrifically unveiled to the world in the 1945 bombing of Hiroshima and the ensuing nuclear proliferation of the Cold War, marking the first time in human history that mankind had the ability to completely destroy itself. The destructive tools likely to be used during the great tribulation have only increased in magnitude and diversity since then, and AI is poised to become just one more way that Jesus' words could play out.
Human annihilation at our own hands is possible, but there is one immensely important caveat: God will not allow us to destroy ourselves completely. Yes, a horrible time of death, famine, and sickness of unprecedented scale is coming, and much of this will be our own doing. But Jesus told us to expect this before His return to save the world from itself.
It is important to recognize that today's advances in technology are the fulfillment of Bible prophecy, and it is absolutely vital to know what the outcome will be. For more information about the prophetic signs that must occur before Jesus Christ returns, and especially which ones have already come to pass in your lifetime, check out our free Bible study aid Seven Prophetic Signs Before Jesus Returns, and subscribe to Beyond Today Magazine for the latest developments as they unfold.