Geoffrey Hinton is one of the creators of Deep Learning, a 2019 winner of the Turing Award, and an engineering fellow at Google. Last week, at the company’s I/O developer conference, we discussed his early fascination with the brain, and the possibility that computers could be modeled after its neural structure—an idea long dismissed by other scholars as foolhardy. We also discussed consciousness, his future plans, and whether computers should be taught to dream. The conversation has been lightly edited for length and clarity.
Nicholas Thompson: Let’s start when you wrote some of your early, very influential papers. Everybody said, “This is a smart idea, but we’re not actually going to be able to design computers this way.” Explain why you persisted and why you were so confident that you had found something important.
Geoffrey Hinton: It seemed to me there’s no other way the brain could work. It has to work by learning the strength of connections. And if you want to make a device do something intelligent, you’ve got two options: You can program it, or it can learn. And people certainly weren’t programmed, so we had to learn. This had to be the right way to go.
NT: Explain what neural networks are. Explain the original insight.
GH: You have relatively simple processing elements that are very loosely modeled on neurons. They have connections coming in, each connection has a weight on it, and that weight can be changed through learning. And what a neuron does