


Ethical concerns about the use of AI have to start with training data. Too often, the primary concern is simply generating sufficient data, rather than understanding its nature. Emily Jasper and Abby Simmons are back to continue the conversation started in episode 198 with host Eric Hanselman. With generative AI, the data is the application in its most formative sense. Unlike traditional application development, where the expectation is that functionality will be expanded in later releases, GenAI applications require careful design of training data before training takes place. The perspectives contained in data age rapidly, and model training doesn't differentiate between outdated and current information. Old data can effectively poison model outputs. Businesses risk alienating customers with models trained on data that doesn't properly represent them. This is particularly true for marginalized communities, where language and context can change over shorter time frames.
While there is research work on model retraining, work in AI today has to focus on effective data quality management. The emergence of DeepSeek is prompting a significant rethinking of training approaches. Human data cleansing can be effective, but can't scale to AI demands. Data workbench tools and synthetic data approaches can help, but better automation is needed to ensure that data sets are truly representative. Data collection and data sourcing need much greater attention to ensure that model results can engage the target audience and not become a liability. It's a fundamental question of accountability that requires thinking in ways that differ from legacy development processes.
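The data quality checks described above can be illustrated with a minimal sketch. This is not from the episode itself; it is a hypothetical example of two automated checks the discussion implies: dropping records older than a freshness window (since training doesn't distinguish outdated from current language) and reporting how well each community is represented before training begins. All record fields, function names, and thresholds are illustrative assumptions.

```python
from datetime import datetime, timedelta
from collections import Counter

# Hypothetical training records: text plus collection date and a community tag.
records = [
    {"text": "example utterance A", "collected": "2018-03-01", "group": "A"},
    {"text": "example utterance B", "collected": "2024-06-15", "group": "B"},
    {"text": "example utterance C", "collected": "2024-07-01", "group": "A"},
]

def filter_stale(records, max_age_days=365 * 3, today=datetime(2025, 1, 1)):
    """Drop records older than the freshness window; model training will not
    separate outdated from current language on its own."""
    cutoff = today - timedelta(days=max_age_days)
    return [r for r in records
            if datetime.strptime(r["collected"], "%Y-%m-%d") >= cutoff]

def representation_report(records):
    """Count records per group so under-represented communities surface
    before training, not after deployment."""
    return Counter(r["group"] for r in records)

fresh = filter_stale(records)
print(len(fresh))                  # the 2018 record falls outside the window
print(representation_report(fresh))
```

Checks like these are the kind of automation the episode argues is needed, since manual cleansing cannot scale to the volumes GenAI training requires.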
Mentioned in this episode:
More S&P Global Content:
Credits:
By S&P Global Market Intelligence · 4.9 (2,828 ratings)
