

Bill and Kurtis dig into the real differences between chatbots and large language models, why prompts matter, and how memory changes the AI experience. They unpack the risks of “vibe coding” after a dating app leak exposed thousands of IDs, explore hidden biases baked into models, and debate how much personal data is safe to share with AI. Along the way, they balance the excitement of new agentic tools with the uncomfortable realities of bias, security, and jobs in a world where 40% of Microsoft’s code is already AI-generated.
By Bill Fowler, Kurtis Cicalo