
Enterprises are wrestling with delivering the data needed to fuel their AI efforts, hitting roadblocks around data security and privacy concerns while sifting through use cases and models to put it to work. Too many are making high-stakes gambles, feeding vast quantities of data into massive models. Jesse Robbins, one of the founders of Chef, a progenitor of the DevOps movement, a builder of early Internet infrastructure and now a partner at Heavybit, joins host Eric Hanselman to look at alternatives to the path that many are taking in pursuit of successful AI projects. In much the same way that DevOps patterns shift application development toward smaller, incremental changes with a pipeline that drives continuous improvement, AI projects can work with smaller models and localized datasets to manage risk and iterate faster. It’s a pattern that avoids the concerns of pushing sensitive data to cloud-based offerings by working locally. Using smaller models also reduces infrastructure costs and the need for vast quantities of GPUs.
Larger model sizes and datasets create two problems: they require more computational power and supporting infrastructure, and more data complicates data provenance, security and ownership issues. Starting smaller and expecting to iterate on the results locally can have multiple benefits. If the data being used never leaves the local confines, security concerns are constrained to local environments. Tools like the open source project Ollama can deliver a choice of models to fit a variety of use cases and infrastructure capacities. Just like DevOps patterns, starting small and iterating quickly can get further, faster and with lower risk.
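As a rough illustration of the local-first pattern discussed in the episode (this sketch is not from the episode itself), the snippet below uses Ollama's Python client, assuming the ollama package is installed and a small model such as llama3.2 has already been pulled to the local machine. The key point is that prompts and data are served by a locally running model rather than a cloud API:

```python
# Minimal sketch of local inference with Ollama.
# Assumes: `pip install ollama`, an Ollama server running locally,
# and a small model pulled in advance (e.g., `ollama pull llama3.2`).
import ollama

# The prompt (and any sensitive data it contains) stays on the local
# machine; nothing is sent to a cloud-hosted model endpoint.
response = ollama.chat(
    model="llama3.2",  # a small local model; swap in any pulled model
    messages=[
        {"role": "user", "content": "Summarize this internal report: ..."}
    ],
)

print(response["message"]["content"])
```

Swapping the model name is all it takes to iterate across model sizes, which is what makes the small-and-local starting point cheap to experiment with.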