This research examines how effectively different methods automatically optimize prompts for large language models (LLMs). The paper compares two main approaches: instruction optimization (IO), which refines the natural-language instructions in a prompt, and exemplar selection (ES), which chooses in-context examples to guide model behavior. The authors find that ES methods often outperform IO methods, that optimizing the exemplars alone can surpass optimizing the instructions, and that combining the two strategies yields the best results. The paper therefore argues that ES deserves consideration as a standalone technique with real potential to improve prompt engineering, rather than as an afterthought to instruction refinement.
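To make the ES idea concrete, here is a minimal sketch of one common exemplar-selection strategy: retrieving the training examples most similar to the query and prepending them to the prompt. This is an illustration of the general technique, not the paper's specific method; the `embed` function below is a toy bag-of-words stand-in for a real sentence-embedding model, and all function names are hypothetical.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (illustrative assumption);
    a real ES pipeline would use a sentence-embedding model."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def select_exemplars(pool, query, k=2):
    """Exemplar selection: pick the k (input, output) pairs whose
    inputs are most similar to the query."""
    q = embed(query)
    ranked = sorted(pool, key=lambda ex: cosine(embed(ex[0]), q),
                    reverse=True)
    return ranked[:k]


def build_prompt(instruction, exemplars, query):
    """Assemble the final prompt: an instruction (the IO side) plus
    the selected exemplars (the ES side) plus the new query."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in exemplars)
    return f"{instruction}\n\n{shots}\n\nInput: {query}\nOutput:"


if __name__ == "__main__":
    pool = [
        ("The movie was fantastic", "positive"),
        ("Terrible service, never again", "negative"),
        ("The plot of the film dragged on", "negative"),
        ("Great acting and a fantastic score", "positive"),
    ]
    query = "A fantastic film with a gripping plot"
    exemplars = select_exemplars(pool, query, k=2)
    print(build_prompt("Classify the sentiment of the input.",
                       exemplars, query))
```

In this framing, the paper's combined strategy corresponds to jointly tuning the `instruction` string and the exemplar pool fed to `select_exemplars`, rather than optimizing either one in isolation.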