AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

Prompt Engineering Best Practices: Hack and Track [AI Today Podcast]

04.26.2024 - By AI & Data Today


Experimenting with, testing, and refining your prompts is essential. The journey to crafting the perfect prompt often involves trying various strategies to discover what works best for your specific needs. A best practice is to constantly experiment, practice, and try new things using an approach called "hack and track", where you use a spreadsheet or other method to record which prompts work well as you experiment. In this episode of AI Today, hosts Kathleen Walch and Ron Schmelzer discuss hack and track in detail.

Keeping track of prompts

It's rare to get the desired response on your first attempt. An iterative process of testing different prompts, analyzing the responses, and then tweaking your approach allows you to gradually hone your technique. Another challenge is that LLMs are constantly evolving. LLM performance is very much domain- and task-dependent, and it changes over time. A current prompting best practice is to use a spreadsheet or other method to track which prompts work well as you experiment.

How to set up your Hack and Track Spreadsheet

Keeping track of which prompts work best for you in which situations, including which LLMs are giving you the best results at the time, can be incredibly helpful for your colleagues as well. There are many LLMs, and at any particular time one LLM may perform better than another in a given situation. Without a record of the prompts you've written and tested, it's hard for others to try these prompts themselves.

When creating a spreadsheet to keep track of prompts, the details matter. Every spreadsheet may be set up a little differently, but you'll want to include some essentials. Criteria you can use when setting up your hack and track sheet include: name of the task or query, prompt pattern(s) used, LLM used, date the prompt was last used, prompt chaining approach used (if any), and optionally the person or group that created the prompt.
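If you prefer to maintain your hack and track sheet as a plain CSV file rather than a spreadsheet application, the columns above can be captured in a small script. This is a minimal sketch, not a prescribed format from the episode; the field names, file name, and example entry are illustrative assumptions.

```python
import csv
import os
from datetime import date

# Illustrative column names based on the criteria discussed above.
FIELDS = [
    "task_name",        # name of the task or query
    "prompt_pattern",   # prompt pattern(s) used
    "llm_used",         # which LLM produced the result
    "date_last_used",   # when the prompt was last used
    "prompt_chaining",  # chaining approach, if any
    "created_by",       # person or group that created the prompt
]

def log_prompt(path, entry):
    """Append one prompt-tracking entry, writing a header row on first use."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(entry)

# Hypothetical example entry.
log_prompt("prompt_log.csv", {
    "task_name": "Summarize meeting notes",
    "prompt_pattern": "Persona + output template",
    "llm_used": "GPT-4",
    "date_last_used": date.today().isoformat(),
    "prompt_chaining": "None",
    "created_by": "Marketing team",
})
```

Because the log is a plain CSV, it can be shared with colleagues, filtered by task or LLM, or imported back into any spreadsheet tool.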

Kathleen and Ron discuss their own experiences with hack and track in this episode and why learning from others is so critical. Seeing how others write prompts helps you get creative and think of ways to use LLMs you may never have considered. It also lets you see how others at your organization are writing prompts and the results they are getting.

Show Notes:

Free Intro to CPMAI course

CPMAI Certification

Subscribe to Cognilytica newsletter on LinkedIn

Properly Scoping AI Projects [AI Today Podcast]

Prompt Engineering Best Practices: What is Prompt Chaining? [AI Today Podcast]

Prompt Engineering Best Practices: Using Custom Instructions [AI Today Podcast]

More episodes from AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion