Running Node.js in serverless environments should be simple: deploy a function, let AWS scale it, and forget about infrastructure. But once you introduce multi-concurrency, with shared worker threads, mutable global state, and CPU-bound workloads, it’s not that simple.
In this episode of The Node (& More) Banter, Luca Maraschi and Matteo Collina break down one of the biggest announcements from AWS re:Invent: the new Node.js runtime for Lambda Managed Instances. AWS is officially validating what Platformatic has been saying for months — Node.js is entering a multi-concurrency era, and most applications are not ready for it.
We’re not just deep-diving into what this means for AWS; we’re also exploring how these changes affect modern enterprise web workloads, going beyond the headlines to explain why AWS had to move in this direction and what it means for building, scaling, and operating Node.js applications in 2025.
We'll cover:
✅ What AWS’s new model changes — worker threads per vCPU, async/await concurrency, and 64 parallel requests per environment.
✅ How multi-concurrency exposes Node.js weaknesses — shared global state, unsafe DB clients, event-loop contention, and filesystem conflicts.
✅ Why these problems show up everywhere — not just in Lambda, but also in Kubernetes, EC2, Fargate, and on-prem deployments.
✅ How Platformatic anticipated this shift — and why Watt’s architecture (multi-worker isolation, kernel-level load balancing, no shared state) aligns with where AWS is steering the ecosystem.
✅ The performance implications — how concurrency amplifies latency spikes and failure cascades, and why architecture matters more than raw CPU.
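To make the shared-global-state hazard concrete, here’s a minimal sketch (the handler and delays are hypothetical illustrations, not AWS’s API) of a pattern that was safe when each environment handled one request at a time, but breaks once requests interleave on the same event loop:

```javascript
// Module-level mutable state, shared by every in-flight request.
// Fine under "one request, one event loop"; a bug under concurrency.
let currentUser = null;

async function handler(user, delayMs) {
  currentUser = user; // check-then-act across an await boundary
  // Simulate an async DB/network call that yields the event loop.
  await new Promise((resolve) => setTimeout(resolve, delayMs));
  // By the time we resume, another request may have overwritten the global.
  return currentUser;
}

async function main() {
  // "alice" awaits longer, so "bob" overwrites currentUser before she resumes.
  const results = await Promise.all([
    handler("alice", 20),
    handler("bob", 5),
  ]);
  console.log(JSON.stringify(results)); // ["bob","bob"] — alice's value is clobbered
}

main();
```

The fix is to keep per-request data in function scope (or `AsyncLocalStorage`) rather than at module level — exactly the kind of refactor multi-concurrency forces on existing codebases.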
AWS’s announcement isn’t just a runtime update — it’s a public acknowledgement that the old “one request, one event loop” model of Node.js is gone. If you’re running Node.js today, whether serverless or self-hosted, this episode explains what’s changing under the hood, why it matters for performance, and how to stay ahead of it.