TechcraftingAI NLP

Ep. 250 - May 31, 2024



ArXiv NLP research summaries for May 31, 2024.


00:20 FineRadScore: A Radiology Report Line-by-Line Evaluation Technique Generating Corrections with Severity Scores

01:37 Leveraging Large Language Models for Entity Matching

02:27 Reward-based Input Construction for Cross-document Relation Extraction

03:40 Passage-specific Prompt Tuning for Passage Reranking in Question Answering with Large Language Models

05:04 DORY: Deliberative Prompt Recovery for LLM

06:18 Unveiling the Lexical Sensitivity of LLMs: Combinatorial Optimization for Prompt Enhancement

07:35 It is Simple Sometimes: A Study On Improving Aspect-Based Sentiment Analysis Performance

08:59 FinGen: A Dataset for Argument Generation in Finance

09:42 Improving code-mixed hate detection by native sample mixing: A case study for Hindi-English code-mixed scenario

11:26 Multilingual Text Style Transfer: Datasets & Models for Indian Languages

13:01 An iterated learning model of language change that mixes supervised and unsupervised learning

14:01 Self-Augmented Preference Optimization: Off-Policy Paradigms for Language Model Alignment

15:29 That's Optional: A Contemporary Exploration of "that" Omission in English Subordinate Clauses

16:18 Don't Buy it! Reassessing the Ad Understanding Abilities of Contrastive Multimodal Models

17:20 Improving Reward Models with Synthetic Critiques

18:29 Towards Spoken Language Understanding via Multi-level Multi-grained Contrastive Learning

19:49 clembench-2024: A Challenging, Dynamic, Complementary, Multilingual Benchmark and Underlying Flexible Framework for LLMs as Multi-Action Agents

21:05 A comparison of correspondence analysis with PMI-based word embedding methods

22:05 Large Language Models: A New Approach for Privacy Policy Analysis at Scale

23:36 Preemptive Answer "Attacks" on Chain-of-Thought Reasoning

24:22 Learning to Estimate System Specifications in Linear Temporal Logic using Transformers and Mamba

25:48 OR-Bench: An Over-Refusal Benchmark for Large Language Models

27:20 Superlatives in Context: Explicit and Implicit Domain Restrictions for Superlative Frames

28:41 SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales

30:33 Towards a Fluid computer

31:33 You Only Scan Once: Efficient Multi-dimension Sequential Modeling with LightNet

33:01 LACIE: Listener-Aware Finetuning for Confidence Calibration in Large Language Models

35:02 Direct Alignment of Language Models via Quality-Aware Self-Refinement

36:19 Code Pretraining Improves Entity Tracking Abilities of Language Models


TechcraftingAI NLP, by Brad Edwards