Ben & Ryan Show Episode 23
In this episode, your hosts Ben Nadel and Ryan Brown are joined by Mark Takata, Senior Technical Evangelist for Adobe ColdFusion, to dive deep into the evolving world of "vibe coding" and how AI is transforming development workflows. The trio explores the tools, mental friction, and cultural shifts developers face when working alongside AI, offering a mix of real-world use cases, developer gripes, and future-gazing insights into how AI might shape team structures and job roles.
Key Points
• Vibe coding is reshaping how developers approach prototyping, testing, and feature iteration using tools like Claude, ChatGPT, and Cursor
• Developers are grappling with trust issues, debugging quirks, and the loss of "good friction" in the coding process
• Cursor, a VS Code-based AI IDE, is emerging as a powerful tool for AI-assisted development and code generation at scale
• AI could reshape team roles, potentially phasing out junior positions and introducing roles like AI coaches or prompt engineers
• The group previews their upcoming live session at the Adobe ColdFusion Summit, where these AI coding dynamics will take center stage
Vibe coding is introduced as a developer trend where AI tools generate code based on high-level input
• Ben shares that his team's lead has spent months learning how to "steer" Claude and manage scope, memory, and output reliability
• Ryan recounts building a Bingo app with ChatGPT, running into scope creep and frustrating regressions with each iteration
• The team discusses the challenge of context loss and how to handle patching or manually adjusting code when working with AI
Mark breaks down his real-world usage of Cursor and how it differs from using ChatGPT
• Developers can open existing projects, give directions, and have Cursor fill in the gaps based on previous patterns
• He highlights how he uses it to rapidly prototype ColdFusion demos, noting an increase in productivity and output
• Cursor allows mixing Claude, GPT, and Check55 models for different task types
• Pro-level versions of Cursor offer metrics, visibility into prompt volume, and sophisticated model orchestration
Ben reflects on his preference for copy-paste workflows over AI generation
• He points out the "non-deterministic" nature of AI and the potential for it to introduce unseen changes across entire files
• The team debates whether AI is removing helpful friction that leads to better abstractions and more thoughtful code
• The trio agrees that AI needs better guardrails and consistency for it to be fully trusted in production codebases
The group explores how AI tools could evolve to meet professional coding standards
• Cursor could serve as a CFQuery-like abstraction for AI tooling, handling the translation between models
• AI-generated code may need standardization through community-driven templates or frameworks like PromptOS
• The role of code review could shift, with AI models doing first-pass reviews and flagging issues for humans
• Review at scale is a concern as AI-generated code grows in complexity and volume
Ryan and Ben explore the idea of AI as a team member
• Mark tells a story of a developer with a briefcase of AIs acting as a full software team
• They discuss how AI may appeal more to managers than individual contributors (ICs) due to the loss of individual agency
• Ben shares concerns about how PR (pull request) reviews become unmanageable when AI floods teams with code
• Mark mentions teams are already using multiple AIs to cross-review code before surfacing issues to humans
The conversation shifts to how companies should approach AI adoption
• There's concern about erasing junior roles and losing the traditional developer learning pipeline
• Ryan suggests managers be transparent about whether the goal is augmentation or headcount reduction
• Mark emphasizes the need for AI coaches—specialists who manage prompt libraries, usage patterns, and workflows