In this episode of Tech Threads, Nandan Nayampally, CCO of Baya Systems, sits down with Ian Ferguson, Vice President of Vertical Markets and Business Development at SiFive, to unpack one of the most important shifts happening in modern computing: AI is no longer just about scaling compute; it's about orchestrating complexity.
As architectures fragment across accelerators, chiplets, and custom silicon, the real challenge is no longer building faster chips; it's turning all of these elements into a cohesive, high-performance system.
This conversation explores why the industry is moving beyond the traditional "CPU vs. GPU" narrative toward a system-level approach, where performance is defined by how effectively compute, memory, interconnect, and software work together.
From the growing momentum behind RISC-V to the rise of heterogeneous compute environments, the discussion highlights a clear trend: the future won't be defined by a single dominant architecture, but by optimized combinations of technologies tailored to specific workloads. That shift, however, introduces a new layer of complexity.
Key themes explored in this episode include:
- Why data movement is emerging as the primary constraint in AI systems
- How efficiency metrics like “tokens per dollar” are reshaping design priorities
- The shift toward purpose-built architectures across data center, automotive, and edge applications
- The role of open ecosystems and interoperability in accelerating innovation
- Why competitive advantage is shifting from individual components to full system design
If you're interested in where AI is headed, this is a must-watch conversation on the forces shaping the future of compute, and on what it takes to stay ahead.