In 2012, artificial intelligence researchers demonstrated a dramatic improvement in computers’ ability to recognize images by training a neural network on millions of labeled images from a database called ImageNet. The result ushered in an exciting phase for computer vision, as it became clear that a model trained on ImageNet could help tackle all sorts of image-recognition problems. Six years later, that work has helped pave the way for self-driving cars to navigate city streets and for Facebook to automatically tag people in your photos.
In other arenas of AI research, like understanding language, similar models have proved elusive. But recent research from fast.ai, OpenAI, and the Allen Institute for AI suggests a potential breakthrough, with more robust language models that can help researchers tackle a range of unsolved problems. Sebastian Ruder, a researcher behind one of the new models, calls it his field’s “ImageNet moment.”
The improvements can be dramatic. The most widely tested model so far is Embeddings from Language Models, or ELMo. When the Allen Institute released it this spring, ELMo swiftly toppled the previous best results on a variety of challenging tasks, like reading comprehension, where an AI answers SAT-style questions about a passage, and sentiment analysis. In a field where progress tends to be incremental, adding ELMo improved results by as much as 25 percent. In June, the ELMo paper won best paper at NAACL, one of the field's major conferences.
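For readers curious what "adding ELMo" to a system looks like in practice, here is a minimal sketch using the ElmoEmbedder class from the Allen Institute's open-source allennlp library. The exact class and version details are assumptions for illustration; this is not the setup behind the benchmark numbers above.

```python
# A minimal sketch (assumes allennlp 0.x and its ElmoEmbedder class)
# showing how ELMo turns words into context-dependent vectors that a
# downstream task model can consume.
from allennlp.commands.elmo import ElmoEmbedder

elmo = ElmoEmbedder()  # downloads pretrained weights on first use

# The same word ("bank") gets a different vector in each sentence,
# because ELMo reads the whole sentence before representing a word:
bank_river = elmo.embed_sentence(["The", "river", "bank", "flooded"])
bank_money = elmo.embed_sentence(["The", "bank", "approved", "the", "loan"])

# Each result has shape (3 layers, number of tokens, 1024 dimensions);
# task models typically learn a weighted mix of the 3 layers.
print(bank_river.shape)  # (3, 4, 1024)
print(bank_money.shape)  # (3, 5, 1024)
```

The output shapes hint at the key idea: unlike older word vectors, ELMo's representation of a word depends on the entire sentence around it, which is part of what lets one pretrained model transfer across many language tasks.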
Dan Klein, a professor of computer science at UC Berkeley, was among the early adopters. He and a student were at work on