The Cloudcast

Sizing AI Workloads


John Yue (CEO & Co-Founder @ inference.ai) discusses AI workload sizing, matching GPUs to workloads, availability of GPUs vs. costs, and more.

SHOW: 815

CLOUD NEWS OF THE WEEK -
http://bit.ly/cloudcast-cnotw

NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST -
"CLOUDCAST BASICS"

SHOW NOTES:

  • Inference.ai (homepage)
  • TechCrunch post
  • SiliconAngle post on ChatGPU

Topic 1 - Our topic for today is sizing and IaaS hosting for AI/ML. We’ve covered a lot of the basics lately; today we’re going to dig deeper. There is a surprising amount of depth to AI sizing, and it isn’t just the speeds and feeds of GPUs. We’d like to welcome John Yue (CEO & Co-Founder @ inference.ai) for this discussion. John, welcome to the show!

Topic 2 - Let’s start with sizing. I’ve talked to a lot of customers recently in my day job, and it is amazing how deep AI/ML sizing can go. First, you have to size for training/fine-tuning differently than you would for the inference stage. Second, some just assume you pick the biggest GPUs you can afford and go. How should your customers approach this? (GPUs, software dependencies, etc.)
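
For a feel for why the two stages size so differently, here is a rough back-of-the-envelope sketch (an illustration for the show notes, not something from the episode). It assumes the common rules of thumb of roughly 2 bytes per parameter for fp16 inference weights and roughly 16 bytes per parameter for full fine-tuning with Adam in mixed precision, and it ignores activations and KV cache.

GB = 1024 ** 3

def inference_memory_gb(params_billion, bytes_per_param=2.0):
    # Approximate weight memory to serve a model in fp16 (excludes KV cache).
    return params_billion * 1e9 * bytes_per_param / GB

def training_memory_gb(params_billion, bytes_per_param=16.0):
    # Approximate memory for full fine-tuning with Adam: fp16 weights and
    # gradients plus fp32 master weights and optimizer states
    # (excludes activation memory).
    return params_billion * 1e9 * bytes_per_param / GB

for size in (7, 13, 70):  # common open-model sizes, in billions of parameters
    print(f"{size}B params: ~{inference_memory_gb(size):.0f} GB to serve, "
          f"~{training_memory_gb(size):.0f} GB to fine-tune")

On those assumptions, a 70B-parameter model needs roughly 130 GB just to serve but on the order of 1 TB to fully fine-tune, which is why the right GPU for inference is often the wrong one for training.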

Topic 2a - Follow-up question: on the business side, what parameters need to be considered? (budget, cost efficiency, latency/response time, timeline, etc.)

Topic 3 - The whole process can be overwhelming, and as we mentioned, some organizations may not think of everything. You recently announced a chatbot to help with this exact process, ChatGPU. Tell everyone a bit about that and how it came to be.

Topic 4 - This is almost like a match-making service, correct? Everyone wants an H100, but not everyone needs or can afford an H100.

Topic 5 - How does GPU availability play into all of this? NVIDIA is sold out for something like 2 years at this point; how is that sustainable? Does everything need to run on a “Ferrari class” NVIDIA GPU?

Topic 6 - What’s next in the IaaS for AI/ML space? What does a next-generation data center for AI/ML look like? Will the industry move away from GPUs to reduce dependence on NVIDIA?

FEEDBACK?

  • Email: show at the cloudcast dot net
  • Twitter: @cloudcastpod
  • Instagram: @cloudcastpod
  • TikTok: @cloudcastpod