
When computers became commodity hardware and storage became incredibly cheap, we entered the era of so-called "big" data. Most definitions of big data include something about the data being too large to process on a single machine; distributed computing is required for such large datasets.
Getting an algorithm to run on data spread across many different machines introduces new challenges for designing large-scale systems. First, there are concerns about the best strategy for spreading that data over the machines in an orderly fashion (one common approach is sketched below). Resolving ambiguity or disagreements across sources is sometimes also required.
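As a rough illustration of the data-spreading concern, here is a minimal sketch of hash-based partitioning in Python. The cluster size and record keys are hypothetical, and real systems typically use consistent hashing so that machines can be added or removed without reshuffling most keys:

```python
import hashlib

NUM_MACHINES = 4  # hypothetical cluster size

def assign_machine(key: str) -> int:
    """Map a record key to a machine index by hashing it.

    A good hash spreads keys roughly uniformly, so no single
    machine ends up with a disproportionate share of the data.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_MACHINES

# Example: route a few hypothetical records to machines.
for record_key in ["user:42", "user:43", "order:7", "order:8"]:
    print(record_key, "-> machine", assign_machine(record_key))
```

Note the trade-off this simple scheme makes: changing NUM_MACHINES remaps nearly every key, which is exactly the problem consistent hashing is designed to avoid.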
This episode discusses how such algorithms relate to the complexity class NC.
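For reference, NC ("Nick's Class") captures problems that parallelize well: those solvable in polylogarithmic time using polynomially many processors. One common formulation (in terms of parallel time, equivalent to the usual circuit-based definition) is:

$$\mathrm{NC} \;=\; \bigcup_{k \ge 1} \mathrm{NC}^k, \qquad \mathrm{NC}^k \;=\; \bigl\{\, L \;:\; L \text{ is decidable in } O(\log^k n) \text{ time using } n^{O(1)} \text{ parallel processors} \,\bigr\}$$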
By Kyle Polich · 4.4 (475 ratings)