Our work centers on empirical research with LLMs. If you are conducting similar research, these tips and tools may help streamline your workflow and increase your experiment velocity. We are also releasing two repositories to promote sharing more tooling within the AI safety community.
John Hughes is an independent alignment researcher working with Ethan Perez and was a MATS mentee in the Summer of 2023. In Ethan's previous writeup on research tips, he explains the criteria that strong collaborators often have, putting 70% of the weight on "getting ideas to work quickly." Part of being able to do this is knowing what tools are at your disposal.
This post, written primarily by John, shares the tools and principles we both use to increase our experimental velocity. Many readers will already know much of this, but we wanted to be comprehensive, so it serves as a good resource for new researchers.
---
Outline:
Quick Summary
Part 1: Workflow Tips
  Terminal
  Integrated Development Environment (IDE)
  Git, GitHub and Pre-Commit Hooks
Part 2: Useful Tools
  Software/Subscriptions
  LLM Tools
  LLM Providers
  Command Line and Python Packages
Part 3: Experiment Tips
  De-risk and extended project mode
  Tips for both modes
  Tips for extended project mode
Part 4: Shared AI Safety Tooling Repositories
  Repo 1: safety-tooling
  Repo 2: safety-examples
Acknowledgements
---