TechcraftingAI NLP

Ep. 256 - Part 2 - June 6, 2024


arXiv NLP research for Thursday, June 6, 2024.


00:20: The syntax-semantics interface in a child's path: A study of 3- to 11-year-olds' elicited production of Mandarin recursive relative clauses

02:17: Ask LLMs Directly, "What shapes your bias?": Measuring Social Bias in Large Language Models

03:39: Explainability and Hate Speech: Structured Explanations Make Social Media Moderators Faster

04:36: Intention and Face in Dialog

05:48: Uncovering Limitations of Large Language Models in Information Seeking from Tables

07:15: Are We Done with MMLU?

08:41: Legal Judgment Reimagined: PredEx and the Rise of Intelligent AI Interpretation in Indian Courts

09:53: Do Language Models Understand Morality? Towards a Robust Detection of Moral Content

11:47: Every Answer Matters: Evaluating Commonsense with Probabilistic Measures

12:49: Towards Understanding Task-agnostic Debiasing Through the Lenses of Intrinsic Bias and Forgetfulness

14:26: Pointer-Guided Pre-Training: Infusing Large Language Models with Paragraph-Level Contextual Awareness

15:35: Confabulation: The Surprising Value of Large Language Model Hallucinations

16:42: DICE: Detecting In-distribution Contamination in LLM's Fine-tuning Phase for Math Reasoning

18:25: Legal Documents Drafting with Fine-Tuned Pre-Trained Large Language Model

19:32: ValueBench: Towards Comprehensively Evaluating Value Orientations and Understanding of Large Language Models

20:50: mCSQA: Multilingual Commonsense Reasoning Dataset with Unified Creation Strategy by Language Models and Humans

22:21: What Do Language Models Learn in Context? The Structured Task Hypothesis

23:38: Rethinking LLM and Linguistic Steganalysis: An Efficient Detection of Strongly Concealed Stego

24:58: BEADs: Bias Evaluation Across Domains

26:41: FairytaleQA Translated: Enabling Educational Question and Answer Generation in Less-Resourced Languages

28:03: Benchmark Data Contamination of Large Language Models: A Survey

29:02: Transformers need glasses! Information over-squashing in language tasks

30:26: Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models

31:58: Characterizing Similarities and Divergences in Conversational Tones in Humans and LLMs by Sampling with People

33:44: ABEX: Data Augmentation for Low-Resource NLU via Expanding Abstract Descriptions

35:19: What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages

36:41: PaCE: Parsimonious Concept Engineering for Large Language Models


TechcraftingAI NLP by Brad Edwards