This week’s episode dives into one of the most critical questions shaping our digital future — what happens when AI stops assisting and starts acting?
As AI agents become capable of executing real actions, from shopping and scheduling to browsing the web and managing accounts, we find ourselves at a crossroads between freedom and control. Who truly holds the power when an AI acts on your behalf: you, the assistant, or the platform where it acts?
The recent debate between Perplexity and Amazon captured this growing tension. Perplexity argues that users should be free to let their AI assistants act on their behalf, using their own data and accounts. Amazon, on the other hand, worries that such autonomy undermines the trust, governance, and security it has spent years building within its ecosystem.
Neither side is wrong; each is describing a different facet of the same evolution. This isn't just about two companies; it's about the world's shift toward agentic systems that blur the line between automation and authority.
In this episode, I explore why innovation must respect the infrastructure, governance, and trust models that platforms have built, while still challenging the limits of what's possible. History has shown how conflict can become collaboration: when web automation raised similar fears years ago, the industry answered with OAuth, an open standard that lets users delegate limited access to third-party apps without handing over their passwords, balancing safety with innovation. AI now needs its own version of that balance.
Because real progress won’t come from breaking rules; it will come from redefining them responsibly.
Innovation should challenge limits, not disregard them. The future of AI will depend on platforms and pioneers evolving together, with trust at the core.
Tune in to When AI Acts for You: The Thin Line Between Freedom and Control — and join me as we unpack how to keep innovation moving forward without losing the balance that makes it meaningful.