This paper introduces Multi-expert Prompting, a method for improving the reliability, safety, and usefulness of large language models (LLMs). The method prompts an LLM to simulate multiple experts, each with a distinct area of expertise, aggregates their individual responses to a query, and selects the best answer according to criteria such as truthfulness, factuality, and informativeness. The process is inspired by the Nominal Group Technique, a human-designed decision-making framework. The authors show that Multi-expert Prompting significantly outperforms prior prompting methods across various benchmarks, particularly in scenarios where diverse perspectives are valuable. The paper also discusses ethical considerations, including the potential for bias amplification, and explores ways to mitigate these risks.
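To make the generate-then-aggregate flow concrete, the following is a minimal sketch, not the paper's actual implementation: the `multi_expert_answer` helper, the prompt wording, and the generic `llm` text-completion callable are all illustrative assumptions, and the paper's exact expert-generation and aggregation templates may differ.

```python
from typing import Callable, List

def multi_expert_answer(
    question: str,
    expert_roles: List[str],
    llm: Callable[[str], str],
) -> str:
    """Sketch of a multi-expert prompting pipeline.

    `llm` is any text-in/text-out completion function; the prompts
    below are illustrative, not the paper's exact templates.
    """
    # 1. Elicit one independent answer per simulated expert,
    #    loosely mirroring the silent idea-generation step of the
    #    Nominal Group Technique.
    expert_answers = []
    for role in expert_roles:
        prompt = (
            f"You are a {role}. Answer the question below from "
            f"your professional perspective.\n\nQuestion: {question}"
        )
        expert_answers.append(llm(prompt))

    # 2. Aggregate: ask the model to merge the individual answers,
    #    resolving conflicts and keeping well-supported points,
    #    guided by criteria like truthfulness and informativeness.
    numbered = "\n".join(
        f"Expert {i + 1} ({role}): {answer}"
        for i, (role, answer) in enumerate(zip(expert_roles, expert_answers))
    )
    aggregate_prompt = (
        "Combine the expert answers below into a single response. "
        "Resolve disagreements, drop unsupported claims, and keep "
        "the final answer truthful, factual, and informative.\n\n"
        f"Question: {question}\n\n{numbered}"
    )
    return llm(aggregate_prompt)


if __name__ == "__main__":
    # Stand-in for a real model call, just to show the wiring.
    def fake_llm(prompt: str) -> str:
        return f"[model output for: {prompt[:40]}...]"

    print(multi_expert_answer(
        "Is it safe to take ibuprofen daily?",
        ["physician", "pharmacologist", "public-health researcher"],
        fake_llm,
    ))
```

In practice the `llm` callable would wrap a real model API, and the expert roles themselves could be generated by the model rather than supplied by hand, as the paper's persona-simulation framing suggests.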