UK & Ireland Director of Intelligence Enterprise at GlobalLogic, Tim Hatton, explores how principles of control theory, exemplified by SpaceX's Starship, apply to the design of effective enterprise agentic AI systems.
Reaching for the stars has always been the pinnacle of human ingenuity. The relentless desire to push beyond known boundaries is what drives innovation and advancement all around the globe. The recent example of SpaceX's latest Starship spacecraft soaring into the skies and returning with precision isn't just a milestone in aerospace engineering - it's a vivid illustration of what's possible when our boundless creativity fuels cutting-edge technologies.
SpaceX's success demonstrates that autonomous software can effectively control a sophisticated system and steer it toward defined goals. This seamless blend of autonomy, awareness, intelligent adaptability, and results-driven decision-making offers a compelling analogy for enterprises. It's a beacon for a future where agentic AI systems revolutionise workflows, drive innovation, and transform industries.
Control theory: A proven framework
Control theory underpins self-regulating systems that balance performance and adaptability. It dates from the 19th century when Scottish physicist and mathematician James Clerk Maxwell first described the operation of centrifugal 'governors'. Its core principles - feedback loops, stability, controllability, and predictability - brought humanity into the industrial age. Starting with stabilising windmill velocity, up to today's spaceflights, nuclear stations and nation-spanning electricity grids.
We see control theory in action when landing a rocket, for example. The manoeuvre relies on sensors to measure actual parameters, controllers to adjust based on feedback, and the system to execute corrections. Comparing real-time data to desired outcomes minimises errors, ensuring precision and safety.
It's a framework that extends to enterprise workflows. Employees function as systems, supervisors as controllers, and tasks as objectives. A seasoned worker might self-correct without managerial input, paralleling autonomous systems' ability to adapt dynamically.
Challenges in agentic AI
Agentic AI systems combine traditional control frameworks' precision with advanced AI models' generative power. However, while rockets rely on the time-tested principles of control theory, AI-driven systems are powered by large language models (LLMs). This introduces new layers of complexity that make designing resilient AI agents that deliver precision, adaptability, and trustworthiness uniquely challenging.
Computational irreducibility: LLMs like GPT-4 defy simplified modelling. They are so complex and their internal workings so intricate that we cannot predict their exact outputs without actually running them. Predicting outputs requires executing each computational step, complicating reliability and optimisation. A single prompt tweak can disrupt workflows, making iterative testing essential, yet time-consuming.
Nonlinearity and high dimensionality: Operating in high-dimensional vector spaces, with millions of input elements, LLMs process data in nonlinear ways. This means outputs are sensitive to minor changes. Testing and optimising the performance of single components of complex workflows, like text-to-SQL queries, under these parameters, becomes a monumental task.
Blurring code and data: Traditional systems separate code and data. In contrast, LLMs embed instructions within prompts, mixing the two. This variability introduces a host of testing, reliability, and security issues. This blurring of ever-growing data sets with the prompts introduces variability that is difficult to model and predict, which also compounds the dimensionality problem described above.
Stochastic behaviour: LLMs may produce different outputs for the same input due to factors like sampling methods during generation. This means they introduce randomness - an asset for creati...