Firth’s Tech: Can artificial intelligence kill us all? Most likely, yes.

David Firth is a professor of management information systems in the College of Business at the University of Montana and a faculty fellow with Advanced Technology Group in Missoula.

Yes. In fact, there is already a movie out about how this will happen: the 1984 film "The Terminator," starring Arnold Schwarzenegger. The problem with the movie is that it relies on time travel, which is likely not an option.

However, the rise of the machines called terminators is a definite possibility given the advances in artificial intelligence we’re seeing. In the movie franchise, a system called Skynet undergoes enormously fast learning and becomes self-aware (at 2:14 a.m. on August 29, in case you’re wondering) and, since it is connected to the country’s nuclear weapons systems, it decides milliseconds later to kill off all the humans.

Fantasy? Just the time travel part (although, as a true aside, as a physics grad I am obligated to note that physics does not entirely rule out tachyons, hypothetical particles that could in theory travel backward in time). Google’s artificial intelligence group DeepMind built a program called AlphaGo to play Go, a board game with deceptively simple rules that is actually much harder for computers than chess because of the sheer number of possible positions.

The possible positions in Go outnumber the atoms in the observable universe. Before AlphaGo’s five-game match against the reigning European Go champion, the president of the British Go Association expected it would be five to 10 years before a machine could beat a human professional. The machine won 5-0.
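To see why that scale claim is plausible, here is a back-of-the-envelope sketch in Python. The figures are commonly cited approximations, not exact values: a branching factor of roughly 250 legal moves per turn, a typical game length of roughly 150 moves, and about 10^80 atoms in the observable universe.

```python
import math

# Rough, commonly cited approximations (assumptions, not exact values):
branching = 250   # legal moves available per turn in Go, on average
moves = 150       # moves in a typical game
log10_atoms = 80  # ~10^80 atoms in the observable universe

# Number of possible game sequences is about branching**moves;
# work in log10 to avoid astronomically large integers.
log10_go_sequences = moves * math.log10(branching)

print(round(log10_go_sequences))                 # about 360, i.e. ~10^360
print(log10_go_sequences > log10_atoms)          # dwarfs the atom count
```

Even with generous error bars on both figures, 10^360 versus 10^80 is not a close call, which is why Go resisted the brute-force search techniques that worked for chess.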

But that machine had to be programmed by humans and was trained on data from hundreds of thousands of real Go matches to learn how to win. It took just three years to become good enough to beat the best human Go player, who at 35 has been playing internationally ranked games for 20 years.

Much more impressive, and scary, though, is what came next: AlphaGo Zero. This version of the machine was given no human game data and no hand-crafted strategies. It was given only the rules of the game and left to play against itself. In just THREE days it was able to beat the original AlphaGo, and presumably every human on the planet. In fact, the Google team reported that it took less than 24 hours to reach super-human status.
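The core idea, learning from self-play given only the rules, can be shown on a toy scale. This is emphatically not AlphaGo Zero’s actual method (which combines deep neural networks with Monte Carlo tree search); it is a minimal sketch of the same principle on a far simpler game of my own choosing: single-pile Nim, where players alternate taking 1 to 3 stones and whoever takes the last stone wins.

```python
import random

# Toy self-play learner for single-pile Nim (take 1-3 stones; taking the
# last stone wins). The program knows only the rules; it discovers how to
# play by repeatedly playing against itself and scoring the outcomes.

ACTIONS = [1, 2, 3]

def legal(pile):
    return [a for a in ACTIONS if a <= pile]

def train(episodes=30000, alpha=0.1, eps=0.2, start=10):
    Q = {}                      # Q[(pile, action)] -> value for player to move
    rng = random.Random(0)      # fixed seed for reproducibility
    for _ in range(episodes):
        pile, history = start, []
        while pile > 0:
            acts = legal(pile)
            # Mostly play the best-known move, sometimes explore randomly.
            if rng.random() < eps:
                a = rng.choice(acts)
            else:
                a = max(acts, key=lambda x: Q.get((pile, x), 0.0))
            history.append((pile, a))
            pile -= a
        # Whoever moved last won (+1); credit moves backward, flipping
        # sign each ply because the players alternate.
        reward = 1.0
        for state_action in reversed(history):
            old = Q.get(state_action, 0.0)
            Q[state_action] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, pile):
    return max(legal(pile), key=lambda a: Q.get((pile, a), 0.0))

Q = train()
# Perfect play leaves the opponent a multiple of 4 stones.
print([best_move(Q, p) for p in [5, 6, 7]])
```

After a few seconds of self-play the learned policy converges on the game’s known winning strategy (take `pile mod 4` stones). AlphaGo Zero applies the same learn-by-playing-yourself loop, scaled up enormously, to a game it could never solve by exhaustive search.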

So, we’re already at a place where machines can teach themselves from very simple rules how to become super-human in less than 24 hours. Can’t we just regulate artificial intelligence (AI)? How well has that worked in regulating Russia, China or North Korea?

Russia and China are certainly pushing hard to develop AI like Google’s. A recent report co-authored by OpenAI, “The Malicious Use of Artificial Intelligence,” warns that AI is “ripe for exploitation by rogue states, criminals and terrorists.” Rules can, and certainly should, be set up for how AI may be used and applied. In the meantime, let’s be thankful that, according to a 2016 U.S. Government Accountability Office report, the U.S. nuclear arsenal still runs on 1970s-era computing systems that use “eight-inch floppy disks.”