
Listener Gordon is worried that as AI-generated content spreads across the web there will be proportionally less and less human content for AIs to be trained on, with the result that their output will just get blander and blander.
He's right to be worried. Aleks and Kevin explore the phenomenon of 'model collapse': the inevitable breakdown of an AI's ability to give useful results when its training data is itself AI-produced. Speaking to NYU data scientist Professor Julia Kempe, the pair discover that training on AI-generated data also means hitting a brick wall in terms of improving AI performance.
There is hope, however. According to Shayne Longpre of the Data Provenance Initiative, the answer is to put humans back in the loop to curate the data for the AIs, teaching them to tell good data from bad.
Presenters: Aleks Krotoski & Kevin Fong
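The feedback loop behind model collapse can be illustrated with a toy simulation (a hypothetical sketch, not anything from the episode): a model repeatedly fits a simple distribution, then trains the next "generation" only on samples drawn from its own previous fit. With each generation the estimated spread tends to shrink, so the model's outputs drift toward blandness.

```python
import random
import statistics

random.seed(0)  # deterministic run for illustration

# Generation 0: the "real" human data distribution, a standard normal.
mu, sigma = 0.0, 1.0

for generation in range(500):
    # Each generation trains only on synthetic samples from the previous model.
    samples = [random.gauss(mu, sigma) for _ in range(5)]
    # "Training" here is just refitting mean and standard deviation.
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)

# Over many generations the fitted spread collapses toward zero:
# the model forgets the diversity of the original human data.
print(f"final sigma after 500 generations: {sigma:.6f}")
```

With only five synthetic samples per generation, the estimated standard deviation undergoes a downward-biased random walk and collapses rapidly; larger sample sizes slow the collapse but do not prevent it, which is why curating fresh human data back into the loop matters.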