Welcome to the Pieces AI productivity podcast, where we dig into how experts use AI to be more productive, as well as geeking out in general on different AI topics.
In this episode, Jim geeks out with Jason Arbon, a tester who has worked on Bing search and Google Chrome. Jason is now the CEO and founder of Testers.ai and Checkie.ai, two services that use AI Agents to test your products.
Jim and Jason dive into how AI can make testers more productive: how AI can create user personas and use them to review websites with more empathy than a human tester, how giving the AI less context can sometimes reduce hallucinations, and how AI is likely to take over all software development in the next two years, so we should all make as much money as possible before the inevitable singularity.
🌐 Links:
Connect with Jason on LinkedIn
Follow Jason on X
Testers.ai
Checkie.ai
👉 Try Pieces for free: https://pieces.app
💡 Learn more about Pieces features: https://pieces.app/features
Connect with Pieces:
X: https://x.com/getpieces
Bluesky: https://bsky.app/profile/getpieces.bsky.social
LinkedIn: https://www.linkedin.com/company/getpieces/
Instagram: https://www.instagram.com/getpieces/
Discord: https://pieces.app/discord
In this episode we cover:
0:00:00 - Intro
0:01:00 - Capturing context from everything you do, and how do you use it well?
0:03:40 - Human expectations of AI have gone to infinity
0:10:00 - Selling an ATM to a teller: how can you sell AI to folks whose jobs will be replaced by AI?
0:11:01 - Is AI better than humans at user empathy? Yes!
0:14:20 - Creating user personas with AI and testing your site with them
0:21:57 - LLMs can work better if your prompts are less detailed
0:26:30 - “What didn’t I think of?” as a follow-on prompt
0:28:15 - How do we deal with our ego around AI being smarter than us?
0:30:30 - How will humans learn to review AI-generated code if we never learn to code because AI does it all?
0:34:50 - AI-generated code, regulatory compliance, and ensuring code quality
0:39:55 - Hacks via bad LLM suggestions and validating the responses for security and correctness with multiple LLMs
0:47:55 - Human-in-the-loop for quality validation
0:49:40 - AI can continuously test, giving faster feedback before, during, and after a sprint
0:52:20 - The biggest blocker in testing is waiting for requirements specs, but AI can generate these
0:57:36 - Where do humans fit into an AI future?