
Is pre-training a thing of the past? In Episode 34 of Mixture of Experts, host Tim Hwang is joined by Abraham Daniels, Vagner Santana and Volkmar Uhlig to debrief this week in AI. First, OpenAI cofounder Ilya Sutskever said that “peak data” has been reached; does this mean model pre-training is no longer needed? Next, IBM released Granite 3.1 with a slew of new features, and we cover them all. Then, there is a new way to steal AI models; how do we protect against model exfiltration? Finally, can NVIDIA Jetson for AI developers really increase hardware accessibility? Tune in for more!
The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
00:01 — Intro
00:49 — Is pre-training over?
10:25 — Granite 3.1
22:23 — AI model stealing
33:38 — NVIDIA Jetson
