Mr. Fred's Tech Talks

Large Language Models: Teaching the Parrot to Talk | AI Series Pt. 2



In Episode 7 of Mr. Fred’s Tech Talks, I dive deeper into Large Language Models (LLMs) and explore how they’re trained. Using the fun analogy of a parrot that never stops practicing, I walk through the 9-step training pipeline: from collecting massive datasets and tokenizing text to neurons, weights, backpropagation, GPUs, fine-tuning, and safety alignment...all in a LOW TECH JARGON way.
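Curious what “tokenizing text” actually looks like? Here’s a toy Python sketch. Real LLMs use subword tokenizers (like BPE), and the sentence here is just a made-up example, but even this simple version shows the key idea: the model never reads letters, only numbers.

    import re

    # Toy tokenizer: split text into lowercase word-like chunks.
    # (Real LLMs use subword tokenizers, but the idea is the same.)
    def tokenize(text):
        return re.findall(r"[a-z']+", text.lower())

    sentence = "Polly wants a cracker, Polly wants a snack!"
    tokens = tokenize(sentence)

    # Map each unique token to a number, because models only eat numbers.
    vocab = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}

    print(tokens)                      # ['polly', 'wants', 'a', 'cracker', ...]
    print([vocab[t] for t in tokens])  # the numbers the model actually sees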


I’ll also talk about probability math and why LLMs don’t really “understand” but instead predict the most likely next word, like rolling loaded dice. Along the way, enjoy some nostalgic sound bites from movies and TV that connect the dots between memory, patterns, and AI.
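Want to roll those loaded dice yourself? Here’s a minimal Python sketch. The words and probabilities are made up for illustration; a real LLM scores tens of thousands of possible next tokens at every step.

    import random

    # A toy "loaded dice" next-word predictor. These odds are invented,
    # just to show how weighted sampling picks the next word.
    next_word_odds = {
        "cracker": 0.6,   # the parrot has heard this one a lot
        "snack":   0.3,
        "banana":  0.1,
    }

    words = list(next_word_odds.keys())
    weights = list(next_word_odds.values())

    # random.choices rolls the loaded dice: likelier words come up more often.
    print("Polly wants a", random.choices(words, weights=weights)[0])

Run it a few times: “cracker” comes up most often, but not always, which is part of why the same prompt can give different answers.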


🎧 Highlights:

  • The parrot analogy for LLMs
  • What AI “neurons” are (tiny math functions, not brain cells; see the sketch below)
  • Why data quality and fine-tuning matter
  • Probability explained with dice and jokes
  • Tech Tip: Ask AI how it got its answer
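
About those “neurons”: each one really is just a tiny math function. Here’s a minimal sketch with made-up numbers; real models stack millions of these together.

    import math

    # One AI "neuron": multiply inputs by weights, add a bias, squash the result.
    def neuron(inputs, weights, bias):
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-total))  # sigmoid squashes the output to 0..1

    # Made-up numbers, just to show the moving parts.
    print(neuron(inputs=[0.5, 0.2], weights=[0.8, -0.4], bias=0.1))

Training (that backpropagation step from the pipeline) is just nudging those weights, over and over, until the outputs get better.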


Whether you’re a parent, teacher, student, or just curious about AI, this episode will give you a fun and clear view of how language models actually learn.

CONNECT

Website: https://www.getmecoding.com

Courses: https://courses.getmecoding.com


FOLLOW

YouTube: https://www.youtube.com/@GetMeCoding

Instagram: https://www.instagram.com/getmecoding

Facebook: https://www.facebook.com/GetMeCoding

LinkedIn: https://www.linkedin.com/in/mrfred77/

Follow, rate ★★★★★, and share!

Hosted on Acast. See acast.com/privacy for more information.


Mr. Fred's Tech Talks, by Fred Aebli