CEO & President Kyle and Graphic Designer & Brand Strategist Kelsey explore how prompting has evolved from using AI like a “smarter Google” to structured strategies that deliver sharper, less generic results.
They break down the CRIT framework (Context, Role, Interview, Task), share why detailed context reduces hallucinations, and explain how prompt libraries and model memory speed up repeatable work. The conversation also dives into context engineering with tools like Microsoft 365 Copilot and Google Workspace Gemini to make AI outputs more relevant and secure.
Plus: common prompting mistakes, model comparisons, multimodal inputs, and how to onboard teams without losing brand consistency.
Listen now to level up how you work with AI.
00:00 Prompting Then vs Now: From “Smarter Google” to Strategic Skill
00:39 Why AI Sounds Vanilla: Averages, Models & AI Slop
01:33 Prompt Engineering & the CRIT Framework
02:35 Interview-Style Prompts: Fewer Hallucinations, Better Results
04:10 Garbage In, Garbage Out: Treat AI Like a New Hire
05:04 Let AI Help Write Prompts + Tools & Libraries
07:08 Why One-Liners Fall Flat (Contractor Analogy)
07:55 From Prompts to Systems: Templates & Model Memory
11:21 Context Engineering: Files, Memory & Workplace Data (Copilot/Gemini)
13:27 Over-Prompting: Context Limits & When to Reset
16:26 Set Outcomes, Don’t Micromanage
18:22 Smarter Models: Gemini & Claude Need Less Steering
19:06 Claude Opus vs ChatGPT: Speed vs Detail
20:27 Multi-Model Workflow: Use Each for Its Strength
21:20 Why New Models Feel Smarter
22:11 Ask AI to Improve Your Prompts
24:42 Planning Mode: Structured Builds & AI Interviews
26:13 Training Teams: Frameworks, SOPs & Safe Experimentation
31:47 Multimodal & Voice Prompting (Gemini’s Edge)
33:15 Wrap-Up & What’s Next
By Computer Integration Technologies (CIT)