
One of the things that sits on the verge of excitement and annoyance for me is the way that artificial intelligence work has all converged around deep learning. Deep learning is amazing and super powerful, and we’ve gotten a lot out of it, but it has both attracted a lot of people to artificial intelligence and steered nearly all the research effort away from other approaches and into deep learning.
And you could say that makes some sense, because for a long time we weren’t making a lot of progress with those other approaches. But the truth is, most of the advancement in artificial intelligence really comes from the growth in computation and our ability to wrangle a lot of computation. If you took that same approach with other algorithms, applying as much computation to them as we have to the large language models, you might well get interesting results.
And so what really excites me is finding people who are working on other approaches to artificial intelligence. My buddy, Dr. Ryad Benosman, has been working on different approaches to processing data for a long time, primarily in vision. His worldview is highly neuromorphic. It’s about trying to understand: what is the brain doing with this data? How do I get a computer to do the things the brain would do? That’s hard, because we don’t know exactly what a brain does. But one of the things we do know is how eyeballs take in the signal they receive and turn it into something the brain can put to use, and that is obviously not done the way deep learning works.
A lot of what Ryad has worked on has been time-series machine learning (those are my words, not his): processing data in real time, in the order you receive it, and piecing together something meaningful.
That’s very applicable to computer vision. I think Ryad has been responsible for spinning about four companies out of labs to develop these technologies.
Probably the most well-known is the Prophesee camera, which they developed and then sold to Sony. It is an event camera. Instead of taking frames 30 times a second, aggregating all the signal on every pixel and sticking it into a frame, an event camera watches for the signal changing in any pixel in the sensor, and that’s very important for things like sensor fusion going forward. The work he’s done on the algorithms that make that possible is super exciting.
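To make the contrast concrete: a real event sensor does this with analog circuitry at each pixel, but the core idea can be sketched in software. Below is a toy simulation (not Prophesee’s actual interface; the function name, threshold, and event format are all illustrative) that converts a stack of frames into sparse events, firing only when a pixel’s log-intensity has changed enough since its last event.

```python
import numpy as np

def events_from_frames(frames, threshold=0.15):
    """Toy event-camera model: emit (t, x, y, polarity) whenever a pixel's
    log-intensity moves more than `threshold` from the last value that fired.
    `frames` is a (T, H, W) array of intensities in (0, 1]."""
    log_frames = np.log(np.clip(frames, 1e-6, None))
    reference = log_frames[0].copy()  # per-pixel log-intensity at last event
    events = []
    for t in range(1, len(log_frames)):
        delta = log_frames[t] - reference
        ys, xs = np.nonzero(np.abs(delta) >= threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if delta[y, x] > 0 else -1  # brighter or darker
            events.append((t, int(x), int(y), polarity))
            reference[y, x] = log_frames[t, y, x]  # reset this pixel's baseline
    return events
```

Notice the output is a sparse, time-ordered stream rather than dense frames: pixels that don’t change produce nothing, which is what makes downstream processing cheap and latency low.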
Ryad was a professor in France for a long time, and most recently in Pittsburgh and at Carnegie Mellon. He’s been all over the place, including skunkworks at Meta, and now he’s doing exciting things we can’t talk about. But look: most people never get a chance to hang out with Ryad.
I’m so thrilled that I got to get him on the podcast and share him with you.
Dr. Ryad Benosman is a professor of Ophthalmology at the University of Pittsburgh School of Medicine. He is also an adjunct faculty member in the Robotics Institute of Carnegie Mellon University. Prior to this appointment, Dr. Benosman was a full professor at Université Pierre et Marie Curie, Institut de la Vision, in France.
He is currently Director of Research at Meta (Neuromorphic and Event-based Sensing and Computation).
He has worked on event-based (neuromorphic) sensing and computation, applied to developing novel brain-inspired machine learning. His lab used to be the home of the ATIS event-based neuromorphic silicon retina and several other neuromorphic AI platforms. He has also worked on brain implants, retinal prosthetics, and optogenetic stimulation.