Grounding through pure language modeling objectives, the origins of probing, the nature of understanding, the future of system assessment, signs of meaningful progress in the field, and having faith in yourself.
Transcript: https://web.stanford.edu/class/cs224u/podcast/pavlick/
Links:
- Ellie's website
- The LUNAR Lab
- MIT Scientist Captures 90,000 Hours of Video of His Son's First Words, Graphs It
- Michael Frank
- Spot robots
- Dylan Ebert
- Ian Tenney
- What do you learn from context? Probing for sentence structure in contextualized word representations
- BERT Rediscovers the Classical NLP Pipeline
- JSALT: General-Purpose Sentence Representation Learning
- Sam Bowman
- Skip thought vectors
- What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties
- Hex
- Charlie Lovering
- Designing and interpreting probes with control tasks
- Jerry Fodor
- Been Kim
- Mycal Tucker
- What if this modified that? Syntactic interventions via counterfactual embeddings
- Yonatan Belinkov
- HANS: Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference
- Conceptual pacts and lexical choice in conversation
- Locating and editing factual knowledge in GPT
- Could a purely self-supervised language model achieve grounded language understanding?
- Dartmouth Summer Research Project on Artificial Intelligence (1956)
- Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain