What if I told you that it was ChatGPT, not I, who wrote each and every one of these scintillating episode descriptions? Well, you'd probably laugh uncontrollably at my hilarious joke. Robots can't use the word "scintillating" correctly—or can they? Whether we like it or not, linguistically conscious AI is becoming more and more prevalent.

In light of the decline in actual writing, I thought it would be prudent to interview the brilliant, funny, talented computer scientist and computational linguist Ellie Pavlick. In addition to teaching at Brown University, Professor Pavlick is a research scientist at Google AI. We talk about natural language processing, pre-trained models, the importance of training models to understand both language form (syntax) and language meaning (semantics), and all that's still unknown about the role of language in neural nets. Noam Chomsky gets a shoutout (how could he not?), as do ChatGPT, prejudice in pre-trained models, and a few philosophical thoughts on how teaching, writing, and learning will evolve in the wake of excellent natural language processing models.

Curious about Ellie Pavlick's research? Check out the links below. Questions, comments, or suggestions for the podcast? Email [email protected]