Spoken (by a human) version of this article.
When we're checking for fairness in our algorithmic systems (including processes, models, and rules), we often ask:
What are the personal characteristics or attributes that, if used, could lead to discrimination?
This article provides a basic framework for identifying and categorising these attributes.
To subscribe to the weekly articles: https://riskinsights.com.au/blog#subscribe
About this podcast
A podcast for Financial Services leaders, where we discuss fairness and accuracy in the use of data, algorithms, and AI.
Hosted by Yusuf Moolla.
Produced by Risk Insights (riskinsights.com.au).
By Risk Insights: Yusuf Moolla