
Is pre-training a thing of the past? In Episode 34 of Mixture of Experts, host Tim Hwang is joined by Abraham Daniels, Vagner Santana and Volkmar Uhlig to debrief this week in AI. First, OpenAI cofounder Ilya Sutskever said that “peak data” has been reached; does this mean there is no longer a need for model pre-training? Next, IBM released Granite 3.1 with a slew of new features, and we cover them all. Then, there is a new way to steal AI models: how do we protect against model exfiltration? Finally, can NVIDIA Jetson really increase hardware accessibility for AI developers? Tune in for more!
The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
00:01 — Intro
00:49 — Is pre-training over?
10:25 — Granite 3.1
22:23 — AI model stealing
33:38 — NVIDIA Jetson