Laura Tyson, a prominent economist at UC Berkeley’s Haas School of Business, said Friday that the Trump administration’s recent tax overhaul will fuel the loss of jobs to artificial intelligence by making capital cheaper relative to labor.
“The question is, as more and more intelligent machines do better than humans at more and more jobs, what happens to societies?” Tyson said at a conference at UC Berkeley on the future of artificial intelligence.
Many believe that artificial intelligence – computer systems capable of intelligent behavior, such as visual perception, speech recognition, decision-making, and language translation – will wipe out low- and some medium-skill jobs that require routine tasks, such as computer coding and data processing.
Half of the activities workers are currently paid to do can already be automated with available technology, according to research by Tyson and global consulting firm McKinsey & Company. But deploying artificial intelligence can be too expensive for small- or medium-sized firms, Tyson said.
The new tax law, which took effect Jan. 1, lowers that barrier: by reducing the cost of capital, it encourages firms to buy more hardware and software instead of hiring humans, according to Tyson.
The McKinsey research estimates that 400 million to 800 million people could be forced to change occupations by 2030, more than at any other time in history.
The lower a job’s skill level and wages, Tyson added, the greater the odds that the job will be automated. Meanwhile, AI will increase wages in highly skilled occupations, such as computer science and engineering, widening income inequality.
“What you can say so far is the technology has favored skilled versus unskilled, the owners of the capital to the workers, because the income from the machines is being captured by the owners of the capital,” Tyson said.
Michael Jordan, an artificial intelligence expert and UC Berkeley professor, shared different concerns about AI. He told the audience that an AI-enabled machine misdiagnosed him with calcium buildup based on data from scores of other patients, recommending a dangerous operation. The machine that scanned him was more advanced than the machines on which the reference data had been collected, causing it to flag an anomaly that wasn’t there.
According to a 2016 study from Johns Hopkins University, more than 250,000 people in the United States die every year because of medical error, making it the third leading cause of death after heart disease and cancer.
With its ability to quickly synthesize and learn from large amounts of data, AI has the potential to surpass doctors in identifying diseases, reducing the number of medical mistakes and cutting soaring health care costs.
But the machine that diagnosed Jordan didn’t do that.
Jordan didn’t have the operation, “but others did and died that day and every day until the problem was fixed,” he said.
The gravest warning on Friday came from Andrew Critch, a UC Berkeley researcher who works on preventing human extinction at the hands of AI that surpasses human intelligence.
Critch is one of a growing number of experts who fear that a superhuman-level artificial intelligence might annihilate humanity. Cosmologist Stephen Hawking warned in 2014 that although the advent of superintelligent machines would be “the biggest event in human history,” it “might also be the last, unless we learn to avoid the risks.”
That same year, Tesla’s Elon Musk declared AI “potentially more dangerous than nukes” on Twitter, and told an MIT symposium that by developing it, “we are summoning the demon.”
A superintelligent system given the task of curing cancer, for example, might conclude that the only way to do so is to kill every human being on earth. Free of human control because of its superior intelligence, the system would set about its gruesome task.
“Being smart is what makes humans dominant over other species,” Critch said. “If we hand over control to another [smarter] mechanism, it will by default steer Earth into a different state where by default humans don’t exist.”
Superintelligent AI might not appear for centuries. But Critch said even human-level intelligence, which could arrive within 15 years, has the potential to wreak havoc.
Autonomy, replication speed – how quickly a machine can replicate itself – and the ability to be persuasive are all an AI system needs to threaten humanity, according to Critch.
“Most wars are triggered by words,” he told the audience.
Jordan, however, said it is unlikely that AI systems with the intellectual flexibility and creativity of humans will appear in our lifetimes. “We’re not there at all,” he said.
Instead, we’ll see systems with limited semantic understanding, limited abilities to cope with complex language like metaphor and irony, and limited abilities to reason abstractly or plan in complex environments, according to Jordan.
Right now, he said, artificial intelligence can label objects in visual scenes but it can’t develop a common-sense understanding of a visual scene; it can convert speech to text and text to speech, but it can’t develop a common-sense understanding of an auditory scene; and it can produce “minimally adequate” translation and answers to questions, but it can’t converse.
“These algorithms are so dumb, all they know how to do is search and they don’t even know when they’ve got a good answer,” Jordan said. “They can search all they want, but that doesn’t lead to intelligence.”