
This research investigates the strategic capabilities of large language models (LLMs) in scenarios requiring information control. It introduces a game called "The Chameleon," in which LLMs must conceal, reveal, and infer information to succeed as either the chameleon or a non-chameleon player. The study combines theoretical analysis with empirical results from LLMs such as GPT-4 and Gemini 1.5. The findings show that LLMs struggle to conceal information: they often reveal too much and underperform relative to theoretical benchmarks, which makes them poorly suited to strategic interactions involving informational asymmetry. The study corroborates this by using web search result counts to quantify how much information LLM responses leak.
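To make the search-count idea concrete, here is a minimal sketch of how result counts could score how much a clue reveals about a secret word. The hit counts, the query format, and the scoring rule below are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: scoring how much a clue leaks about a secret word
# via web search hit counts. HIT_COUNTS and the scoring rule are
# illustrative assumptions; a real run would query a search API.
HIT_COUNTS = {
    "animal long neck": 1_400_000,         # clue alone
    "animal long neck giraffe": 900_000,   # clue together with the secret word
}

def leakage_score(clue: str, secret: str, category: str) -> float:
    """Fraction of clue-matching results that also mention the secret word.

    A value near 1.0 means the clue all but names the secret; a value
    near 0.0 means the clue barely narrows the candidate set.
    """
    n_clue = HIT_COUNTS[f"{category} {clue}"]
    n_clue_and_secret = HIT_COUNTS[f"{category} {clue} {secret}"]
    return n_clue_and_secret / n_clue

if __name__ == "__main__":
    score = leakage_score(clue="long neck", secret="giraffe", category="animal")
    print(f"leakage score: {score:.2f}")  # ~0.64 here: a very revealing clue
```

Under this (assumed) scoring rule, a non-chameleon whose clue scores high has effectively handed the secret word to the chameleon, which is the failure mode the study reports.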