Into AI Safety

INTERVIEW: Polysemanticity w/ Dr. Darryl Wright



Darryl and I discuss his background, how he became interested in machine learning, and a project we are currently working on, which investigates penalizing polysemanticity during the training of neural networks.
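To give a flavor of what "penalizing polysemanticity" could mean in practice: one simple proxy is to add a regularization term that grows when a neuron activates strongly for more than one input feature. The sketch below is purely illustrative and is not the method discussed in the episode; the function name, the feature-mean formulation, and the toy data are all assumptions made for this example.

```python
import numpy as np

def polysemanticity_penalty(acts, labels, n_features):
    """Hypothetical penalty: for each neuron, sum its mean absolute
    activation across feature classes, then subtract the largest term.
    A neuron that fires for only one feature contributes ~0."""
    means = np.zeros((n_features, acts.shape[1]))
    for f in range(n_features):
        # mean |activation| of every neuron on inputs of feature f
        means[f] = np.abs(acts[labels == f]).mean(axis=0)
    return float((means.sum(axis=0) - means.max(axis=0)).sum())

# Toy check: neuron 0 fires only for feature 0 (monosemantic),
# neuron 1 fires for both features (polysemantic).
acts = np.array([[1.0, 1.0],
                 [1.0, 1.0],
                 [0.0, 1.0],
                 [0.0, 1.0]])
labels = np.array([0, 0, 1, 1])
print(polysemanticity_penalty(acts, labels, 2))  # → 1.0
```

A term like this could be added to the training loss alongside the task objective, trading off task performance against neuron interpretability.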

Check out a diagram of the decoder task used for our research!

01:46 - Interview begins
02:14 - Supernovae classification
08:58 - Penalizing polysemanticity
20:58 - Our "toy model"
30:06 - Task description
32:47 - Addressing hurdles
39:20 - Lessons learned

Links to all articles and papers mentioned throughout the episode are listed below, in order of appearance.

  • Zooniverse
  • BlueDot Impact
  • AI Safety Support
  • Zoom In: An Introduction to Circuits
  • MNIST dataset on PapersWithCode
  • Clusterability in Neural Networks
  • CIFAR-10 dataset
  • Effective Altruism Global
  • CLIP (blog post)
  • Long Term Future Fund
  • Engineering Monosemanticity in Toy Models

Into AI Safety, by Jacob Haimes