


Many AIs are 'black box' in nature, meaning that part or all of the underlying structure is obfuscated, either intentionally to protect proprietary information, or due to the sheer complexity of the model, or both. This can be problematic in situations where people are harmed by decisions made by AI but left without recourse to challenge them.
Many researchers in search of solutions have coalesced around a concept called Explainable AI, but this too has its issues. Notably, there is no real consensus on what it is or how it should be achieved. So how do we deal with these black boxes? In this podcast, we try to find out.
Subscribe to Nature Briefing, an unmissable daily round-up of science news, opinion and analysis free in your inbox every weekday.
Hosted on Acast. See acast.com/privacy for more information.
By Springer Nature Limited · 4.5 (716 ratings)
