
Enterprises are wrestling with delivering data to fuel their AI efforts, hitting roadblocks around data security and privacy concerns while sifting through use cases and models to put it to work. Too many are making high-stakes gambles, feeding vast quantities of data into massive models. Jesse Robbins, one of the founders of Chef, a progenitor of the DevOps movement, a builder of early Internet infrastructure and now a partner at Heavybit, joins host Eric Hanselman to look at alternatives to the path that many are taking in pursuit of successful AI projects. In much the same way that DevOps patterns shift application development toward smaller, incremental changes with a pipeline that drives continuous improvement, AI projects can work with smaller models and localized datasets to manage risk and iterate faster. It's a pattern that avoids the concerns of pushing sensitive data to cloud-based offerings by working locally. Using smaller models reduces infrastructure costs and the need for vast quantities of GPUs.
Larger model sizes and data sets create two problems: they require more computational power and supporting infrastructure, and more data complicates data provenance, security and ownership issues. Starting smaller and expecting to iterate on the results locally can have multiple benefits. If the data being used never leaves the local confines, security concerns are constrained to the local environment. Tools like the open source project Ollama can deliver a choice of models to fit a variety of use cases and infrastructure capacities, as the sketch below illustrates. Just like DevOps patterns, starting small and iterating quickly can get further faster and with lower risk.
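To make the local-first pattern concrete, here is a minimal sketch of querying a small model through Ollama's local HTTP API. It assumes Ollama is installed and serving on its default port (11434), and that a small model has already been pulled (for example, with "ollama pull llama3"); the model name and prompt are illustrative assumptions, not from the episode.

# Minimal sketch: query a small model served entirely on local
# infrastructure via Ollama's HTTP API. Assumes Ollama is running on
# its default port (11434) and a small model such as "llama3" has
# already been pulled. Model name and prompt are illustrative only.

import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally hosted model; no data leaves the machine."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Sensitive data stays within the local environment, constraining
    # security concerns as described above.
    print(ask_local_model("Summarize our internal incident report process."))

Because the request never leaves the machine, the security benefit described above applies directly: sensitive prompts and data stay within the local environment, and swapping the model name lets a team iterate across different small models without changing infrastructure.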