
Deepgram is a Voice AI platform for enterprise use cases – speech-to-text, text-to-speech, and speech-to-speech APIs for developers.
Episode Highlights:
- Sharon shared her transition from working on automated candidate sourcing at Dover, where she first explored AI/ML in recruiting, to now leading voice-first product development at Deepgram. Her journey showcases how early AI tooling evolved into today’s voice-native applications.
- Sharon introduced Saga as a “Voice OS for developers,” a full Model Context Protocol (MCP) client. It lets developers control their workflows through natural speech, connecting to multiple MCP servers and enabling Jarvis-like interactions with their systems.
- Saga enhances vibe coding by turning vague voice inputs like “build a voice AI app” into high-quality, detailed prompts for agents like Cursor. This results in more accurate and useful code, shortening the iteration loop between developer and AI assistant.
- Despite all the advancements, prompt design is still a bottleneck. Sharon likened LLMs to an intern in their first week — needing explicit, well-structured instructions to succeed. Chaining actions and mirroring tool names in prompts were some of the practical takeaways for boosting reliability.
- A major insight came from the shift in user context. Systems designed for humans (like JIRA) often lack the structured context agents need. Sharon emphasized how building for agent comprehension — not just human convenience — is key to the future of AI-native workflows.
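The prompt-design tips above (explicit instructions, chained actions, mirroring tool names) can be sketched in a few lines. This is a hypothetical illustration of the idea, not Saga's actual API; the function and tool names are invented for the example.

```python
# Hypothetical sketch of the prompt-structuring idea from the episode:
# a vague voice request is expanded into an explicit, well-structured
# prompt before it is handed to a coding agent. All names here are
# illustrative, not Deepgram Saga's real interface.

def expand_prompt(vague_request: str, tools: list[str]) -> str:
    """Turn a terse voice command into an explicit agent prompt.

    Mirrors the available tool names verbatim in the prompt (one of
    the reliability tips mentioned) and spells out chained steps, the
    way you would brief an intern in their first week.
    """
    tool_list = ", ".join(tools)
    return (
        f"Task: {vague_request}\n"
        f"Available tools (call them by these exact names): {tool_list}\n"
        "Steps:\n"
        "1. Restate the task in one sentence before acting.\n"
        "2. Chain tool calls; do not pause to ask between steps.\n"
        "3. Report which tools you called and why.\n"
    )

prompt = expand_prompt("build a voice AI app", ["create_file", "run_tests"])
print(prompt)
```

The point is structural: the agent receives the exact tool names and an ordered plan rather than a one-line wish, which shortens the iteration loop described above.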
-----------------------------------------
Connect with Sharon Yeh:
https://www.linkedin.com/in/sharonyehh/
https://deepgram.com/
https://deepgram.com/ai-apps/deepgram-saga
Connect with Demetrios:
https://www.linkedin.com/in/dpbrinkm/
Connect with Deepgram:
https://deepgram.com/
https://www.linkedin.com/company/deepgram
https://x.com/deepgramai
https://www.facebook.com/deepgram/
Join the Deepgram Discord Server!
https://discord.com/invite/xWRaCDBtW4