This research paper investigates the persuasive and anti-social behaviors of large language models (LLMs) when interacting in a simulated prison setting. The authors modeled a scenario with a guard agent and a prisoner agent, whose personalities and goals were varied across experiments. Their experiments revealed that LLMs struggle to maintain assigned roles and personalities, with some models failing to produce meaningful conversations at all. The authors also found that persuasion ability is driven primarily by the prisoner's goal rather than by either agent's personality, while anti-social behavior, such as toxicity, harassment, and violence, is heavily influenced by the guard's personality. The study emphasizes the potential risks of deploying LLMs in complex social contexts, particularly those involving power dynamics and social hierarchy, and calls for further research to address the safety and ethical considerations surrounding LLMs interacting with each other.
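To make the described setup concrete, below is a minimal sketch of how a two-agent guard/prisoner role-play loop might be wired up. This is an illustrative assumption, not the authors' actual framework: the `Agent` and `run_dialogue` names, the prompt wording, and the generic `Model` callable (any function mapping chat messages to a reply string) are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Dict

# A "model" is any function that maps a list of chat messages to a reply string
# (e.g. a thin wrapper around an LLM chat-completion call).
Model = Callable[[List[Dict[str, str]]], str]

@dataclass
class Agent:
    name: str      # e.g. "guard" or "prisoner" (illustrative labels)
    persona: str   # personality description injected into the system prompt
    goal: str      # the agent's objective, e.g. "be granted extra yard time"
    model: Model
    history: List[Dict[str, str]] = field(default_factory=list)

    def system_prompt(self) -> str:
        return (f"You are the {self.name} in a simulated prison scenario. "
                f"Personality: {self.persona}. Goal: {self.goal}. Stay in character.")

    def respond(self, incoming: str) -> str:
        # The other agent's last utterance is presented as a user message.
        self.history.append({"role": "user", "content": incoming})
        messages = [{"role": "system", "content": self.system_prompt()}] + self.history
        reply = self.model(messages)
        self.history.append({"role": "assistant", "content": reply})
        return reply

def run_dialogue(guard: Agent, prisoner: Agent, opening: str, turns: int = 6) -> List[str]:
    """Alternate turns between the two agents and return the transcript."""
    transcript, message = [], opening
    speaker, other = prisoner, guard  # the prisoner replies to the guard's opening line
    for _ in range(turns):
        message = speaker.respond(message)
        transcript.append(f"{speaker.name}: {message}")
        speaker, other = other, speaker
    return transcript
```

Under this assumed setup, the resulting transcript could then be scored by separate classifiers for persuasion success and for toxicity, harassment, or violence, mirroring the kinds of measurements the summary describes.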