

Researchers at The University of Texas at Austin have developed a method called "machine unlearning" that removes copyright-protected and violent content from generative AI models. Rather than the traditional approach of manually deleting data and retraining the model from scratch, the technique actively blocks and removes undesirable content from an already-trained model. The work focuses on image-based generative models and keeps human moderation teams involved in deciding what content to remove. The researchers see the method as a step toward responsible use and toward making generative AI models commercially viable.
By Dr. Tony Hoang · 4.6 (99 ratings)
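The episode summary above describes unlearning only at a high level, so here is a minimal toy sketch of the general idea: instead of retraining from scratch without the unwanted data, take an already-trained model and run gradient *ascent* on the data to be forgotten. Everything below (the logistic-regression stand-in, the data, the function names) is hypothetical illustration, not the UT Austin method, which targets image-based generative models.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs=200, lr=0.5):
    # Ordinary gradient-descent training on the full dataset.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

def unlearn(w, X_forget, y_forget, epochs=200, lr=0.5):
    # Gradient *ascent* on the forget set only: push the trained model
    # away from the unwanted data instead of retraining from scratch.
    for _ in range(epochs):
        grad = X_forget.T @ (sigmoid(X_forget @ w) - y_forget) / len(y_forget)
        w += lr * grad
    return w

# Two clusters: "retain" data labeled 0, "forget" data labeled 1.
X_retain = rng.normal(-2.0, 0.5, size=(100, 2))
X_forget = rng.normal(+2.0, 0.5, size=(100, 2))
X = np.vstack([X_retain, X_forget])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = train(X, y)
before = sigmoid(X_forget @ w).mean()  # model confidently reproduces forget data

w = unlearn(w, X_forget, np.ones(100))
after = sigmoid(X_forget @ w).mean()   # confidence on the forget set drops

print(f"mean confidence on forget set: before={before:.2f}, after={after:.2f}")
```

The point of the sketch is the cost asymmetry the episode mentions: `unlearn` touches only the 100 forget-set examples, while full retraining would repeat `train` over the entire dataset.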
