Pieter Abbeel, Professor at UC Berkeley, joins us. Pieter grew up in Belgium, came to the US, and got his PhD in Robotics and Machine Learning from Stanford. He notes that he and Andrew Ng pushed the envelope at the time on how robots learn from human demonstrations as well as from their own trial and error. Pieter graduated and came to Berkeley to continue working at the junction of robotics and machine learning. He has been focused on end-to-end reinforcement learning and end-to-end imitation learning: training a neural net end-to-end, without hand-designed structure in between.
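To make "end-to-end" concrete, here is a minimal sketch of the simplest form of imitation learning, behavioral cloning: a small neural net maps raw observations directly to actions, trained on expert demonstrations, with no separate perception or control modules. Everything here (the toy "expert", the dimensions, the network) is illustrative, not from the interview.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "expert demonstrations": observations paired with the
# actions the expert took. The expert is a fixed nonlinear rule that
# the net must imitate purely from (observation, action) pairs.
obs = rng.normal(size=(256, 4))            # 256 demos, 4-dim observations
W_expert = rng.normal(size=(4, 2))
actions = np.tanh(obs @ W_expert)          # 2-dim continuous actions

# One-hidden-layer MLP mapping observations straight to actions.
W1 = rng.normal(scale=0.1, size=(4, 32))
W2 = rng.normal(scale=0.1, size=(32, 2))

def forward(x, W1, W2):
    h = np.tanh(x @ W1)
    return np.tanh(h @ W2), h

mse0 = float(np.mean((forward(obs, W1, W2)[0] - actions) ** 2))

# Full-batch gradient descent on mean-squared imitation error;
# gradients flow end-to-end from action error back to the input weights.
lr = 0.05
for _ in range(3000):
    pred, h = forward(obs, W1, W2)
    d_pre2 = (pred - actions) * (1 - pred**2)   # grad through output tanh
    grad_W2 = h.T @ d_pre2 / len(obs)
    d_h = d_pre2 @ W2.T * (1 - h**2)            # grad through hidden tanh
    grad_W1 = obs.T @ d_h / len(obs)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

mse = float(np.mean((forward(obs, W1, W2)[0] - actions) ** 2))
print(f"imitation MSE: {mse0:.3f} -> {mse:.3f}")
```

The point of the sketch is the pipeline shape: one network, one loss, gradients propagated through the whole thing, rather than separately engineered stages.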
Singularity is the notion that a system you build is smart enough to self-improve...and things accelerate out of control. How far are we away from this? Pieter notes that 10 years ago it was difficult to conceive of a solution to computer vision. Enabling factors and breakthroughs are the keys, and data is an enabling factor: neural nets are now data-driven as opposed to algorithm-designed. Will we continue to have more data, and can we do things with unlabeled and unstructured data?
Being better at unsupervised learning is a frontier that, once reached, will open up all sorts of possibilities. One question to answer in understanding where we might be in relation to the singularity is how many compute cycles were effectively used to go from where we were 5 billion years ago to where we are now. Do we think we can shortcut this? Or will we need the same amount of compute to get to the singularity? Pieter doesn't have the answers. Yet.
He notes that both the artificial intelligence research community and industry need the best possible global talent to answer those very questions.