
If your organization ran an "AI 101" lunch-and-learn… and nothing changed after, this episode is for you. Host Susan Diaz explains why one-off workshops create false confidence, how AI literacy is more like learning a language than learning software buttons, and shares a practical roadmap to build sustainable AI capability.
Episode summary
This episode is for two groups:
teams who did a single AI training and still feel behind, and
leaders realizing one workshop won't build organizational capability.
The core idea is simple: AI adoption isn't a "feature learning" problem. It's a behaviour change problem. Behaviour only sticks when there's a container: cadence, guardrails, and a community of practice that turns curiosity into repeatable habits.
Susan breaks down why one-off training fails, what good training looks like (a floor, not a ceiling), and gives a step-by-step plan you can use to design an internal program, even if your rollout already happened and it was messy.
Key takeaways
One-off AI training creates false confidence. People leave either overconfident (shipping low-quality output) or intimidated (deciding "AI isn't for me"). Neither leads to real adoption.
AI literacy is a language, not a feature. Traditional software training teaches buttons and steps. AI requires reps, practice, play, and continuous learning because the tech and use cases evolve constantly.
Access is not enablement. Buying licences and calling everyone "AI-enabled" skips the hard part: safe use, permissions, and real workflow practice. Handing out tools with no written guardrails is a risk, not a training plan.
Cadence beats intensity. Without rituals and follow-up, people drift back to business as usual. AI adoption backslides unless you design ongoing reinforcement.
Good training builds a floor, not a ceiling. A floor means everyone can participate safely, speak shared language, and contribute use cases—without AI becoming a hero-only skill.
The four layers of training that sticks:
Safety + policy (permission, guardrails, what data is allowed)
Shared language (vocabulary, mental models)
Workflow practice (AI on real work, not toy demos)
Reinforcement loop (office hours, champions, consistent rituals)
The 5-step "training that works" roadmap
Step 1: Define a 60-day outcome. "In 60 days, AI will help our team ____." Choose one: reduce cycle time, improve quality, reduce risk, improve customer response, improve decision-making. Then: "We'll know it worked when ____."
Step 2: Set guardrails and permissions. List:
data never allowed
data allowed with caution
data safe by default
Step 3: Pick 3 high-repetition workflows. Weekly tasks like proposals, client summaries, internal comms, research briefs. Circle one that's frequent + annoying + low risk. That becomes your practice lane.
Step 4: Build the loop (reps > theory). Bring one real task. Prompt once for an ugly first draft. Critique like an editor. Re-prompt to improve. Share a before/after with the team.
Step 5: Create a community of practice. Office hours. An internal channel for AI wins + FAQs. Two champions per team (curious catalysts, not "experts"). Only rule: bring a real use case and a real question.
What "bad training" looks like
one workshop with no follow-up
generic prompt packs bought off the internet
tools handed out with no written guardrails
hype-based demos instead of workflow practice
no time allocated for learning (so it becomes 10pm homework)
00:00 — Why this episode: "We did AI training… and nothing changed."
01:20 — One-off training creates two bad outcomes: overconfident or intimidated
03:05 — AI literacy is a language, not a software feature
05:10 — Access isn't enablement: licences without guardrails = risk
07:00 — Cadence beats intensity: why adoption backslides
08:40 — Training should build a floor, not a ceiling
10:05 — The 4 layers: policy, shared language, workflow practice, reinforcement
12:10 — The 5-step roadmap: define a 60-day outcome
13:40 — Guardrails and permissions (what data is never allowed)
15:10 — Pick 3 workflows and choose a low-risk practice lane
16:30 — The loop: prompt → critique → re-prompt → share
18:10 — Communities of practice: office hours + champions
20:05 — What to do this week: pick one workflow and run one loop
If your organization did an AI 101 and nothing changed, don't panic.
Pick one workflow this week. Run the prompt → critique → re-prompt → share loop once. Then schedule an office hour to do it again.
That's how you move from "we did a training" to "we're building capability".
Connect with Susan Diaz on LinkedIn to start a conversation.
Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.
By Northlight AI