Yes, millions of low-paying, low-skilled jobs are increasingly at risk. But there’s also much to gain from the coming AI revolution.
On Tuesday, the White House released a chilling report on AI and the economy. It began by positing that “it is to be expected that machines will continue to reach and exceed human performance on more and more tasks,” and it warned of massive job losses.
Yet to counter this threat, the government makes a recommendation that may sound absurd: we have to increase investment in AI. The risk to productivity and the US’s competitive advantage is too high to do anything but double down on it.
This approach not only makes sense, but also is the only approach that makes sense. It’s easy — and justified — to worry about the millions of individual careers that technologies like self-driving cars and trucks will upend, but we also have chasms of need that machine learning could help fill. Our medical system is deeply flawed; intelligent agents could spread affordable, high-quality healthcare to more people in more places. Our education infrastructure is not adequately preparing students for the looming economic upheaval; here, too, AI systems could chip in where teachers are spread too thin. We might gain energy independence by developing much smarter infrastructure, as Google subsidiary DeepMind did for its parent company’s power usage. The opportunities are too great to ignore.
More important, we have to think beyond narrow classes of threatened jobs, because today’s AI leaders—at Google and elsewhere—are already laying the groundwork for an even more ambitious vision, the former pipe dream that is general artificial intelligence.
To visit the front lines of the great AI takeover is to observe machine learning systems routinely drubbing humans in narrow, circumscribed domains. This year, many of the most visible contestants in AI’s face-off with humanity have emerged from Google. In March, the world’s top Go player weathered a humbling defeat against DeepMind’s AlphaGo. Researchers at DeepMind also produced a system that can lip-read videos with an accuracy that leaves humans in the dust. A few weeks ago, Google computer scientists working with medical researchers reported an algorithm that can detect diabetic retinopathy in images of the eye as well as an ophthalmologist can. It’s an early step toward a goal many companies are now chasing: to assist doctors by automating the analysis of medical scans.
Also this fall, Microsoft unveiled a system that can transcribe human speech with greater accuracy than professional stenographers. Speech recognition is the basis of systems like Cortana, Alexa, and Siri, and matching human performance in this task has been a goal for decades. As Microsoft chief speech scientist XD Huang put it, “It’s personally almost like a dream come true after 30 years.”
But AI’s 2016 victories over humans are just the beginning. Emerging research suggests we will soon move from these slim slivers of intelligence to something richer and more complex. Though a true general intelligence is at least decades away, society will still see massive change as these systems acquire an ever-widening circle of mastery. That’s why the White House (well, at least while Obama’s still in office) isn’t shrinking from it. We are in the midst of developing a powerful force that will transform everything we do.
To ignore this trend — to not plunge headlong into understanding it, shaping it, monitoring it — might well be the biggest mistake a country could make.
The tool of choice in the aforementioned examples of successful AIs is deep learning: the artificial intelligence technique that’s been rivaling habaneros in blistering hotness. Its special nature is the reason we’re on the brink of a more general intelligence.
Though we’ve been able to train AIs to solve tasks for decades, experts had to painstakingly hand-engineer many bespoke components for every application. The years of human work needed to support an AI in recognizing objects in an image, for example, were totally useless for the problem of deciphering sounds for transcription. In other words, we’ve had to pre-chew our AIs’ food, over and over and over again.
The lesson of the past four years is that this tedious pre-chewing is now, for the moment at least, largely irrelevant. Instead, there’s essentially one algorithm (with many minor variants) that can adjust its own structure to solve a problem, directly from whatever massively large data set you feed it. The result is not only better-performing systems, but also much faster experimentation. “Many, many problems that we labored on for a long time and made very, very halting progress on, now in six months we can basically plow through them,” says Google vice president and engineering fellow Fernando Pereira.
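To make that “one algorithm” claim concrete, here is a minimal sketch — not Google’s code, just a toy two-layer network trained by gradient descent in plain NumPy. The point is that nothing in the learner is task-specific: the identical function, fed two different toy data sets, teaches itself two different skills.

```python
import numpy as np

def train(X, y, hidden=16, steps=5000, lr=1.0, seed=0):
    """One generic learner: a tiny two-layer neural network fit by
    gradient descent. Nothing here is task-specific -- the same code
    adjusts its own weights to whatever (X, y) data set it is fed."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0, 0.5, (hidden, 1))
    for _ in range(steps):
        h = np.tanh(X @ W1)                  # hidden layer
        p = 1 / (1 + np.exp(-h @ W2))        # sigmoid output
        g = (p - y) / len(X)                 # cross-entropy gradient
        W1 -= lr * X.T @ ((g @ W2.T) * (1 - h**2))
        W2 -= lr * h.T @ g
    # Return a predictor that thresholds the network's output
    return lambda Z: (1 / (1 + np.exp(-np.tanh(Z @ W1) @ W2)) > 0.5).astype(int)

# Two unrelated toy "tasks", learned by the identical algorithm:
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
xor_model = train(X, np.array([[0], [1], [1], [0]]))  # task 1: XOR
and_model = train(X, np.array([[0], [0], [0], [1]]))  # task 2: AND
```

Real deep learning systems are vastly larger, but the shape is the same: swap the data set, keep the algorithm.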
Yet as impressive as human-quality speech recognition, lip reading, and image tagging are, it’s not immediately obvious that they’re the cornerstones of some great, all-powerful intelligence. It’s somewhat like having your kid come home with a report card of As in subjects that include English, knitting the heels of socks, dodgeball, and calculating a hypotenuse. You’d likely wonder if this clever kid will be able to draw connections between those areas to emerge as a critical thinker. So is deep learning really on a path to challenging true human intelligence?
“The reason we’re seeing extremely narrow systems right now is because they’re extremely useful,” says Ilya Sutskever, cofounder and research director of OpenAI. “Good translation is extremely useful. Good cancer screening is extremely useful. So that’s what people are going after.”
But he adds that although today’s systems look narrow, we “are already beginning to see the seed of generality.” The reason is that the underlying techniques are all just mild riffs on one concept. “These ideas are so combinable, it’s like clay. You mix and match them and they can all be made to work.”
By mixing and matching the narrow systems of today, we’ll land on something bigger and broader — and more recognizable as intelligent — tomorrow.
One early, tantalizing example of what higher intelligence might eventually look like comes from Google’s translation research. In September, Google announced an enormous upgrade in the performance of Google Translate, using a system it’s calling Google Neural Machine Translation (GNMT). Google’s Pereira called the jump in translation quality “something I never thought I’d see in my working life.”
“We’d been making steady progress,” he added. “This is not steady progress. This is radical.”
With the new Translate now rolling out language by language, some Googlers decided to go even further. They wondered if they could build a single translation system that could juggle many languages and potentially display transfer learning, a hallmark of human intelligence. Transfer learning is the ability to apply one skill, such as playing the piano, to speed up the acquisition of another, such as conducting an orchestra or learning another instrument.
It seems obvious to us that knowing the fundamentals of music would help a pianist pick up the ukulele, but that’s not how language translation has been done. In GNMT, one deep learning system had to absorb millions of German-to-English translations, and teach itself how to take in der rote Hund and spit out the red dog. A separate system independently learned how to translate in the other direction, from English to German. Same goes for French to English, English to French, Korean to Japanese, and so on — every pair of languages uses its own distinct system, built as if the act of translation were being invented anew each time. To support translation between 100 languages, you might end up training almost 10,000 separate systems. That’s time-consuming.
These researchers wanted to know if they could build a single model for multiple languages that could hold its own against those one-off systems. First, it might be more efficient. And maybe something interesting would emerge from having all those words and languages jangling around inside a single architecture.
They started small, with a neural network trained on Portuguese and English, and on English and Spanish. So far so good: this single multilingual system did almost as well as the state-of-the-art, dedicated GNMT models in translating between English and either Spanish or Portuguese. Then they wondered: could this algorithm also translate between Portuguese and Spanish — even though it hadn’t seen a single example of Portuguese-Spanish translation?
As they reported in November, the result they got was “reasonably good quality” — not staggering in its perfection, but not bad for a newbie. But when they then fed it a small set of Portuguese-to-Spanish sentence pairs, sort of an amuse-bouche of data, the system suddenly became just as good as a dedicated GNMT Portuguese-to-Spanish model. And it worked for other bundles of languages, too. As the Google authors write in the paper, this “is the first time to our knowledge that a form of true transfer learning has been shown to work for machine translation.”
It’s easy to miss what makes this so unusual. This neural net had taught itself a rudimentary new skill using indirect information. It had hardly studied Portuguese-to-Spanish translation, and yet here it was, acing the job. Somewhere in the system’s guts, the authors seemed to see signs of a shared essence of words, a gist of meaning.
Google’s Pereira explains it this way: “The model has a common layer that has to translate from anything to anything. That common layer represents a lot of the meaning of the text, independent of language,” he says. “It’s something we’ve never seen before.”
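A toy illustration of that idea — emphatically not Google’s architecture, and with vocabulary invented for the sketch: if every language is mapped into a shared “meaning” layer, translation between a pair of languages the system never saw together falls out for free. Here the shared concepts are just English glosses standing in for GNMT’s learned, language-independent representation.

```python
# Training data: note the system never sees a Portuguese-Spanish pair.
pt_en = {"cão": "dog", "vermelho": "red"}   # Portuguese-English examples
es_en = {"perro": "dog", "rojo": "red"}     # Spanish-English examples

encode = {}  # (language, word) -> shared concept
decode = {}  # (language, concept) -> word
for lang, pairs in (("pt", pt_en), ("es", es_en)):
    for word, concept in pairs.items():
        encode[(lang, word)] = concept      # word into the meaning layer
        decode[(lang, concept)] = word      # meaning layer back out

def translate(word, src, tgt):
    """Route every translation through the shared meaning layer."""
    return decode[(tgt, encode[(src, word)])]

translate("cão", "pt", "es")       # zero-shot: returns "perro"
translate("vermelho", "pt", "es")  # zero-shot: returns "rojo"
```

The real system learns continuous vectors rather than discrete IDs, but the zero-shot trick is the same: both languages land in one shared representation, so a path between them exists even though it was never trained directly.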
Of course, this algorithm’s reasoning power is very limited. It doesn’t know that a penguin is a bird, or that Paris is in France. But it’s a sign of what’s to come: an emerging intelligence that can make cognitive leaps based on an incomplete set of examples. If deep learning hasn’t yet defeated you at a skill you care about, just wait. It will.
Training one system to do many things is exactly what it takes to develop a general intelligence, and juicing up that process is now a core focus of AI boosters. Earlier this month OpenAI, the research consortium dreamed up by Elon Musk and Sam Altman, unveiled Universe, an environment for training systems that are not just accomplished at a single task, but that can hop around and become adept at various activities.
As cofounder Sutskever puts it, “If you try to look forward and see what it is exactly we mean by ‘intelligence,’ it definitely involves not just solving one problem, but a large number of problems. But what does it mean for a general agent to be good, to be intelligent? These are not completely obvious questions.”
So he and his team designed Universe as a way to help others measure the general problem-solving abilities of AI agents. It includes about a thousand Atari games, Flash games, and browser tasks. If you were to enter whatever AI you’re building into the training ring that is Universe, it would be equipped with the same tools a human uses to manipulate a computer: a screen on which to observe the action, and a virtual keyboard and mouse.
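In code, that interaction takes the shape of a Gym-style observe-and-act loop. The sketch below is a stub: real Universe environments run in Docker containers and are created with gym.make, but the stub mirrors the loop the text describes — the agent receives screen pixels and emits keyboard events, here an agent that simply mashes ArrowUp.

```python
class StubEnv:
    """Stand-in for a Universe environment (real ones stream a remote
    desktop). The agent sees a fake screen and sends keyboard events."""
    def __init__(self, episode_len=10):
        self.t, self.episode_len = 0, episode_len

    def reset(self):
        self.t = 0
        return {"vision": [[0] * 4] * 4}   # fake screen pixels

    def step(self, action):
        self.t += 1
        # Reward pressing ArrowUp, the way a driving game rewards throttle
        reward = 1.0 if ("KeyEvent", "ArrowUp", True) in action else 0.0
        done = self.t >= self.episode_len
        return {"vision": [[self.t] * 4] * 4}, reward, done, {}

def run_episode(env):
    """A trivial agent: observe the screen, press ArrowUp, repeat."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        action = [("KeyEvent", "ArrowUp", True)]   # one keyboard event
        obs, reward, done, info = env.step(action)
        total += reward
    return total

run_episode(StubEnv())   # accumulates 1.0 reward per step -> 10.0
```

The appeal of the design is that the same loop works for any environment in the catalog: the agent never gets a game-specific API, only pixels in and key presses out, just as a human player would.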
The intent is for an AI to learn how to navigate one Universe environment, such as Wing Commander III, then apply that experience to quickly get up to speed in the next environment, which could be another game, such as World of Goo, or something as different as Wolfram Mathematica. A successful AI agent would display some transfer learning, with a degree of agility and reasoning.
This approach is not without precedent. In 2013, DeepMind revealed a single deep learning-based algorithm that discovered, on its own, how to play the seven Atari games on which it was tested. For three of those games — Breakout, Enduro, and Pong — it outperformed a human expert player. Universe is a sort of scaled-up version of that DeepMind success story.
As Universe grows, AI trainees can start learning innumerable useful computer-related skills. After all, it is essentially a portal into the world of any contemporary desk jockey. The diversity of Universe environments might even allow AI agents to pick up some broad world knowledge that otherwise would be tough to collect.
It’s a bit of a leap from a Flash-and-Atari champion to an agent that improves the quality of healthcare, but that’s because our intelligent systems are still in kindergarten. For many years, AI hadn’t made it even this far. Now it is on the path to first grade, middle school, and eventually, advanced degrees.
Yes, the outcome is uncertain. Yes, it’s totally scary. But we have a choice now. We can try to shut down this murky future that we can neither fully control nor predict, and run the risk that the technology seeps out unbidden, potentially triggering massive displacement. Or we can actively try to guide it to the paths of greatest social gain, and encourage the future we want to see.
I’m with the White House on this one. A deep learning-powered world is coming, and we might as well rush right into it.