

Inferact is a new AI infrastructure company founded by the creators and core maintainers of vLLM. Its mission is to build a universal, open-source inference layer that makes large AI models faster, cheaper, and more reliable to run across any hardware, model architecture, or deployment environment. In this episode, vLLM creators Simon Mo and Woosuk Kwon join a16z's Matt Bornstein to break down how modern AI models are actually run in production, why "inference" has quietly become one of the hardest problems in AI infrastructure, and how the open-source project vLLM emerged to solve it. They also discuss why the vLLM team started Inferact and their vision for a universal inference layer that can run any model, on any chip, efficiently.
Follow Matt Bornstein on X: https://twitter.com/BornsteinMatt
Follow Simon Mo on X: https://twitter.com/simon_mo_
Follow Woosuk Kwon on X: https://twitter.com/woosuk_k
Follow vLLM on X: https://twitter.com/vllm_project
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
By Andreessen Horowitz · 4.3 (1,012 ratings)
