
The use of machine learning (ML) in high-stakes societal decisions has encouraged the consideration of fairness throughout the ML lifecycle. Although data integration is one of the primary steps in generating high-quality training data, most of the fairness literature ignores this stage. In this interview, Sainyam discusses why he focuses on fairness in the integration component of data management, aiming to identify features that improve prediction without adding any bias to the dataset. Working under the causal fairness paradigm, and without requiring the underlying structural causal model a priori, he has developed an approach that identifies a sub-collection of features that ensures fairness of the dataset by performing conditional independence tests between different subsets of features.
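The interview does not spell out implementation details, but the general idea of screening features with conditional independence tests can be sketched roughly as follows. This is an illustrative toy, not Sainyam's actual algorithm: it uses a simple partial-correlation test, and the names `partial_corr`, `select_fair_features`, `admissible_idx`, and the `threshold` parameter are all assumptions for the sake of the example.

```python
# Illustrative sketch: keep only candidate features that appear
# conditionally independent of a sensitive attribute, given a set of
# "admissible" features, using a simple partial-correlation test.
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation of x and y given the conditioning columns z:
    regress z out of both variables, then correlate the residuals."""
    zc = np.column_stack([np.ones(len(x)), z])  # add an intercept column
    rx = x - zc @ np.linalg.lstsq(zc, x, rcond=None)[0]
    ry = y - zc @ np.linalg.lstsq(zc, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

def select_fair_features(X, sensitive, admissible_idx, threshold=0.1):
    """Return the column indices of X to keep: admissible columns plus any
    column whose partial correlation with the sensitive attribute, given
    the admissible columns, falls below `threshold`."""
    z = X[:, admissible_idx]
    keep = []
    for j in range(X.shape[1]):
        if j in admissible_idx:
            keep.append(j)  # admissible features are kept by definition
        elif abs(partial_corr(X[:, j], sensitive, z)) < threshold:
            keep.append(j)  # passes the conditional independence screen
    return keep
```

A feature that is (noisily) a copy of the sensitive attribute fails the screen and is dropped, while unrelated features survive. A real system would use a proper statistical CI test with p-values rather than a fixed correlation threshold.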
0:35: Can you introduce your work and describe the problem you're aiming to solve?
2:39: Can you elaborate on what fairness means?
3:51: Let's dig into your solution. How does the causal approach work?
4:41: How does your approach compare to other approaches in your evaluations?
6:17: How can data scientists apply your findings to the real world?
7:54: What was the most unexpected challenge you faced while working on algorithmic fairness?
8:29: What is next for your research?
9:17: Tell us about your other publications at SIGMOD.
10:57: How can researchers get involved in algorithmic fairness?
Hosted on Acast. See acast.com/privacy for more information.