Data Science at Home

[RB] Replicating GPT-2, the most dangerous NLP model (with Aaron Gokaslan) (Ep. 83)

10.18.2019 - By Francesco Gadaleta


Join the discussion on our Discord server

 

In this episode, I am with Aaron Gokaslan, a computer vision researcher and AI Resident at Facebook AI Research. Aaron is a co-author of OpenGPT-2, a replication of the much-discussed GPT-2 model that OpenAI initially decided not to release because it was considered too dangerous to publish.

We discuss image-to-image translation, the dangers of the GPT-2 model, and the future of AI. Moreover, Aaron provides some very interesting links and demos that will blow your mind!

Enjoy the show! 

References

Multimodal image-to-image translation (not all of these were mentioned in the podcast, but all are recommended by Aaron)

Pix2Pix: 

https://phillipi.github.io/pix2pix/

 

CycleGAN:

https://junyanz.github.io/CycleGAN/

 

GANimorph:

Paper: https://arxiv.org/abs/1808.04325

Code: https://github.com/brownvc/ganimorph

 

UNIT: https://arxiv.org/abs/1703.00848

MUNIT: https://github.com/NVlabs/MUNIT

DRIT: https://github.com/HsinYingLee/DRIT
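A unifying idea behind CycleGAN and its multimodal successors above is the cycle-consistency loss: translating an image from domain X to Y and back should recover the original. A minimal NumPy sketch, with hypothetical invertible linear maps standing in for the real generator networks:

```python
import numpy as np

# Hypothetical stand-ins for the two generators:
# G maps domain X -> Y, F maps Y -> X.
# Real CycleGAN generators are deep convolutional networks.
def G(x):
    return 2.0 * x + 1.0

def F(y):
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x, y):
    # L_cyc = E[ ||F(G(x)) - x||_1 ] + E[ ||G(F(y)) - y||_1 ]
    forward_cycle = np.mean(np.abs(F(G(x)) - x))   # X -> Y -> X
    backward_cycle = np.mean(np.abs(G(F(y)) - y))  # Y -> X -> Y
    return forward_cycle + backward_cycle
```

Since G and F here are exact inverses, the loss is zero; during training, minimizing this term pushes the two learned generators toward being approximate inverses of each other.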

 

GPT-2 and related 

Try OpenAI's GPT-2: https://talktotransformer.com/

Blogpost: https://blog.usejournal.com/opengpt-2-we-replicated-gpt-2-because-you-can-too-45e34e6d36dc

The Original Transformer Paper: https://arxiv.org/abs/1706.03762

Grover, a fake news generator and detector: https://rowanzellers.com/grover/
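GPT-2 and Grover are both built on the transformer architecture from the paper above, whose core operation is scaled dot-product attention. A minimal single-head NumPy sketch (illustrative only; the paper uses multiple heads plus learned projection matrices):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    # Similarity of each query to each key, scaled to keep softmax well-behaved.
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a convex combination of the value rows.
    return weights @ V
```

In GPT-2 this attention is masked so each token attends only to earlier tokens, which is what lets the model generate text left to right.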
