Tim McAleer is a producer at Ken Burns’s Florentine Films, where he is responsible for the technology and processes that power the studio’s documentary production. Rather than using AI to generate creative content, Tim has built custom AI-powered tools that automate the most tedious parts of documentary filmmaking: organizing and extracting metadata from tens of thousands of archival images, videos, and audio files. In this episode, Tim demonstrates how he has transformed post-production workflows with AI to make vast archives of historical material genuinely usable and searchable.
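As a rough illustration of the kind of image-description step discussed in the episode (not Tim’s actual code), here is a minimal Python sketch using the OpenAI Vision API linked under “Tools referenced” below. The model name, prompt wording, and file path are placeholder assumptions.

```python
# Hypothetical sketch: send an archival scan to a vision model and get back
# a caption plus any visible metadata (dates, names, locations).
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def describe_archival_image(path: str) -> str:
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this archival photograph and list any "
                         "visible dates, names, or locations as metadata."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


# Placeholder path; in practice this would run across an entire archive.
print(describe_archival_image("scans/box12_print_0042.jpg"))
```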
—
Brought to you by:
Brex—The intelligent finance platform built for founders
—
Where to find Tim McAleer:
Website: https://timmcaleer.com/
LinkedIn: https://www.linkedin.com/in/timmcaleer/
—
Where to find Claire Vo:
ChatPRD: https://www.chatprd.ai/
Website: https://clairevo.com/
LinkedIn: https://www.linkedin.com/in/clairevo/
X: https://x.com/clairevo
—
In this episode, we cover:
(00:00) Introduction to Tim McAleer
(02:23) The scale of media management in documentary filmmaking
(04:16) Building a database system for archival assets
(06:02) Early experiments with AI image description
(08:59) Adding metadata extraction to improve accuracy
(12:54) Scaling from single scripts to a complete REST API
(15:16) Processing video with frame sampling and audio transcription
(19:10) Implementing vector embeddings for semantic search
(21:22) How AI frees up researchers to focus on content discovery
(24:21) Demo of “Flip Flop” iOS app for field research
(29:33) How structured file naming improves workflow efficiency
(32:20) “OCR Party” app for processing historical documents
(34:56) The versatility of different app form factors for specific workflows
(40:34) Learning approach and parallels with creative software
(42:00) Perspectives on AI in the film industry
(44:05) Prompting techniques and troubleshooting AI workflows
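For the vector-embedding segment at (19:10), here is a minimal, hypothetical sketch of semantic search over archive images with CLIP (linked under “Tools referenced” below): embed the images once, embed a natural-language query into the same space, and rank by cosine similarity. The file paths and query text are placeholders, not details from the episode.

```python
# Hypothetical sketch of CLIP-based semantic search over an image archive.
import clip          # pip install git+https://github.com/openai/CLIP.git
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder paths; a real archive would hold tens of thousands of assets.
image_paths = ["archive/frame_0001.jpg", "archive/frame_0002.jpg"]

with torch.no_grad():
    # Embed each image once; in practice these vectors would be stored in
    # the asset database alongside the file's other metadata.
    image_batch = torch.stack(
        [preprocess(Image.open(p)) for p in image_paths]
    ).to(device)
    image_vecs = model.encode_image(image_batch)
    image_vecs /= image_vecs.norm(dim=-1, keepdim=True)

    # Embed the search query into the same vector space.
    query = clip.tokenize(["soldiers crossing a river in winter"]).to(device)
    query_vec = model.encode_text(query)
    query_vec /= query_vec.norm(dim=-1, keepdim=True)

    # Cosine similarity ranks the archive against the query.
    scores = (image_vecs @ query_vec.T).squeeze(1)

for path, score in sorted(zip(image_paths, scores.tolist()),
                          key=lambda x: -x[1]):
    print(f"{score:.3f}  {path}")
```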
—
Tools referenced:
• Claude: https://claude.ai/
• ChatGPT: https://chat.openai.com/
• OpenAI Vision API: https://platform.openai.com/docs/guides/vision
• Whisper: https://github.com/openai/whisper
• Cursor: https://cursor.sh/
• Superwhisper: https://superwhisper.com/
• CLIP: https://github.com/openai/CLIP
• Gemini: https://deepmind.google/technologies/gemini/
—
Other references:
• Florentine Films: https://www.florentinefilms.com/
• Ken Burns: https://www.pbs.org/kenburns/
• Muhammad Ali documentary: https://www.pbs.org/kenburns/muhammad-ali/
• The American Revolution series: https://www.pbs.org/kenburns/the-american-revolution/
• Archival Producers Alliance: https://www.archivalproducersalliance.com/genai-guidelines
• Exif metadata standard: https://en.wikipedia.org/wiki/Exif
• Library of Congress: https://www.loc.gov/
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].