
👉 Grab a free project to learn about RAG in a practical way. 👈
Last year, I quit an AI startup where I helped build a multimodal RAG system and the agent layer of a production web app. Before leaving, I designed the interview process to hire my replacement — and the results were surprising.
In this video I break down what companies actually expect from an AI engineer, and why the role is usually less about training models and more about integrating LLMs into real products.
Most “AI engineer” jobs are really full-stack developers building AI-powered systems — agents, workflows, RAG pipelines, vector databases, evaluation frameworks, and observability.
I walk through what I expected candidates to know in the interview, including:
• Structured outputs
• Evaluating when AI systems are wrong (evals)
• Observability and tracing prompts/token costs
• RAG architecture and document chunking
• Metadata and data freshness
• Why re-ranking dramatically improves retrieval quality
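To make one of these topics concrete, here is a minimal sketch of document chunking for a RAG pipeline — fixed-size chunks with overlap so that context straddling a boundary stays retrievable. The chunk size and overlap values are illustrative defaults, not recommendations from the video.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character-based chunks.

    Overlap keeps content that straddles a chunk boundary
    retrievable from at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # how far the window advances each step
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Real systems usually chunk on token or sentence boundaries rather than raw characters, but the overlap idea is the same.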
I also share what happened when we caught candidates cheating during the interview — and the biggest gaps I saw between AI hype and real AI engineering.
This experience ultimately led me to build the Parsity AI Accelerator, focused on teaching developers how to build production AI systems rather than just prompting chatbots.
If you’re trying to become an AI engineer or preparing for AI interviews, this video will show you what actually matters.
By Brian Jenney
