
Dr. Michael Zargham provides a systems engineering perspective on AI agents, emphasizing accountability structures and the relationship between principals who deploy agents and the agents themselves. In this episode, he brings clarity to the often misunderstood concept of agents in AI by grounding them in established engineering principles rather than treating them as mysterious or elusive entities.
Show highlights
• Agents should be understood through the lens of the principal-agent relationship, with clear lines of accountability
• True validation of AI systems means ensuring outcomes match intentions, not just optimizing loss functions
• LLMs by themselves are "high-dimensional word calculators," not agents; agents are more complex systems with LLMs as components
• Guardrails provide deterministic constraints ("musts" or "shalls") versus constitutional AI's softer guidance ("shoulds"); see the first sketch after this list
• Systems engineering approaches from civil engineering and materials science offer valuable frameworks for AI development
• Authority and accountability must align; people shouldn't be held responsible for systems they don't have authority to control
• The transition from static input-output mappings to closed-loop dynamical systems marks the shift toward truly agentic behavior; see the second sketch after this list
• Robust agent systems require both exploration (lab work) and exploitation (hardened deployment) phases with different standards
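For readers who want a concrete picture of the guardrails-versus-constitution distinction above, here is a minimal, hypothetical Python sketch. The names (`passes_guardrails`, `constitutional_score`, `BANNED_TOPICS`) and the scoring logic are illustrative assumptions, not anything specified in the episode; the point is only the shape of the distinction: a "must" is a hard predicate that can block an output, while a "should" is a soft score that merely steers it.

```python
# Hypothetical sketch: a deterministic guardrail ("must"/"shall") is a hard
# predicate the output either passes or fails, while a constitutional
# principle ("should") is soft guidance that is scored, not enforced.

BANNED_TOPICS = {"medical_dosage", "weapons"}  # illustrative hard constraints


def passes_guardrails(output_topics: set[str]) -> bool:
    """A 'must': deterministic and binary, checked before anything ships."""
    return output_topics.isdisjoint(BANNED_TOPICS)


def constitutional_score(output_text: str) -> float:
    """A 'should': soft guidance, e.g. a judge model's 0-1 rating; it can
    influence ranking or regeneration but cannot block an output by itself."""
    return min(1.0, len(output_text) / 500)  # stand-in for a learned scorer


def release(output_text: str, topics: set[str]) -> str | None:
    if not passes_guardrails(topics):        # hard stop: a "shall not"
        return None
    if constitutional_score(output_text) < 0.5:
        pass  # soft preference only: might log or regenerate, never forbid
    return output_text
```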
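And to illustrate the static versus closed-loop distinction, a second toy sketch: a static model is a pure input-to-output function, while an agentic system feeds its actions back through an environment and decides again based on what it observes. The `Environment` class and the feedback policy here are invented for illustration.

```python
# Hypothetical sketch of the open-loop vs. closed-loop distinction.

def static_model(x: float) -> float:
    """Static input-output: same input, same output, no feedback."""
    return 2.0 * x


class Environment:
    """A toy world whose state is changed by the agent's actions."""

    def __init__(self, state: float = 1.0):
        self.state = state

    def step(self, action: float) -> float:
        self.state += action   # the action perturbs the world...
        return self.state      # ...and the agent observes the result


# Closed loop: act, observe the consequence, act again.
env = Environment()
observation = env.state
for _ in range(5):
    action = -0.5 * observation        # simple feedback policy (illustrative)
    observation = env.step(action)     # the loop is what makes it dynamical
```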
Explore Dr. Zargham's work
What did you think? Let us know.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics: