
Lia Raquel Neves, founder of EITIC Consulting, offers a thought-provoking exploration of the ethical dimensions of artificial intelligence and its profound implications for accessibility and inclusion. Drawing from her background in philosophy and bioethics, Lia challenges the common assumption that technology is neutral, instead arguing that our creations inherently reflect our values, biases, and blind spots.
The conversation delves into crucial gaps between AI regulations and accessibility requirements. Lia points out that the European AI Act doesn't explicitly define disability as a risk factor, meaning systems that significantly impact disabled users might not be classified as high-risk. "This is not just a legal oversight," she explains, "it's an ethical failure." Without structural requirements prioritizing accessibility, technologies from virtual assistants to facial recognition systems continue to exclude people with disabilities.
When discussing data ethics, Lia confronts the uncomfortable reality of historical bias. Training AI on decades-old data inevitably reproduces historical patterns of discrimination and inequality. While diversity in datasets helps, Lia emphasizes it's insufficient alone: "We must actively detect offensive or discriminatory language and prevent models from amplifying harmful content." She advocates for continuous human oversight, transparency, and creating mechanisms for people to challenge automated outcomes.
Perhaps most powerful is Lia's reflection on representation: "Digital accessibility is still seen as a technical requirement when it is, in fact, a matter of social justice." She notes how the invisibility of people with disabilities in media, business, and technology perpetuates exclusion, creating a cycle where decision-makers don't prioritize what they rarely encounter. True inclusion means asking who's missing from the data, who's excluded by design, and who's absent when systems are being developed.
Ready to dive deeper into creating ethical, inclusive technology? Connect with Lia on LinkedIn and join the conversation about building technology that truly serves everyone.
Support the show
Follow axschat on social media.
Bluesky:
Antonio https://bsky.app/profile/akwyz.com
Debra https://bsky.app/profile/debraruh.bsky.social
Neil https://bsky.app/profile/neilmilliken.bsky.social
axschat https://bsky.app/profile/axschat.bsky.social
LinkedIn
https://www.linkedin.com/in/antoniovieirasantos/
https://www.linkedin.com/company/axschat/
Vimeo
https://vimeo.com/akwyz
Twitter
https://twitter.com/axschat
https://twitter.com/AkwyZ
https://twitter.com/neilmilliken
https://twitter.com/debraruh