Ken Scott Baron Podcast

The Future of Artificial Intelligence



Some experts predict that AI will lead to significant job displacement, as machines will be able to perform many tasks that are currently done by humans. However, other experts argue that AI will create new jobs and opportunities, particularly in the fields of data analysis and machine learning.

You might not know it, but an artificial intelligence algorithm used to screen applicants has decided that you are too risky. Maybe it inferred you wouldn’t fit the company culture or you’re likely to behave in some way later on that might cause friction (such as joining a union or starting a family). Its reasoning is impossible to see and even harder to challenge.

It doesn’t matter that you practice safe digital privacy: keeping most personal details to yourself, avoiding sharing opinions online and prohibiting apps and websites from tracking you. A.I. predicts how you’ll behave at work based on patterns it has learned from countless other people like you and me.

Banks can use A.I. algorithms to decide who gets a loan, learning from past borrowers to predict who will default. Some police departments have fed years of criminal activity and arrest records into “predictive policing” algorithms that have sometimes sent officers back to patrol the same neighborhoods.

Social media platforms use our collective clicks to decide what news — or misinformation — each of us will see. In each case, we might hope that keeping our own data private could protect each of us from unwanted outcomes. A.I. only needs to know what people like you have done before.

As we adapt to living with A.I. as a larger part of our lives, we need to exert collective control over all of our data, to determine whether it’s used to benefit or harm us.

Differential privacy, a set of techniques that lets analysts learn aggregate patterns from a data set without exposing any individual’s records, meant people might be more willing to share their data with third parties, and these algorithms are now quite common. Apple iPhones are built with these algorithms to collect information about user behavior and trends without ever revealing what data came from whose phone. The 2020 U.S. census used differential privacy in its reporting on the American population to protect individuals’ personal information.
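A minimal sketch of the idea, not any vendor’s actual implementation: a common differentially private mechanism answers a counting query by adding calibrated Laplace noise, so the published number is useful in aggregate but no single person’s record can be inferred from it. The data and the `epsilon` privacy budget below are illustrative assumptions.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential samples
    # follows a Laplace(0, scale) distribution.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_count(records, epsilon: float = 1.0) -> float:
    # A counting query has sensitivity 1: adding or removing one
    # person's record changes the true count by at most 1, so
    # Laplace noise with scale 1/epsilon suffices.
    return len(records) + laplace_noise(1.0 / epsilon)

# Toy example: report roughly how many users enabled a feature
# without revealing any single user's setting.
enabled = [u for u in range(1000) if u % 3 == 0]  # hypothetical data
noisy_total = private_count(enabled, epsilon=0.5)
```

Smaller `epsilon` values add more noise and give stronger privacy; the published `noisy_total` stays close to the true count of 334 while masking each individual contribution.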

Palantir is building an A.I. system to identify and track people for deportation by combining and analyzing many data sources together, getting around the obstacle posed by differential privacy.

Even without knowing who any one person is, the algorithm can likely predict the neighborhoods, workplaces and schools where undocumented immigrants are most likely to be found. A.I. algorithms called Lavender and Where’s Daddy? have been reportedly used in a similar way to help the Israeli military determine and locate targets for bombardment in Gaza.

In climate change, one person’s emissions don’t alter the atmosphere, but everyone’s emissions will destroy the planet. Your emissions matter for everyone else. Similarly, sharing one person’s data seems trivial, but sharing everyone’s data — and tasking A.I. to make decisions using it — transforms society.

Everyone sharing his or her data to train A.I. is great if we agree with the goals that were given to the A.I. It’s not so great if we don’t agree with those goals, or if the algorithm’s decisions might cost us our jobs, happiness, liberty or even lives.

We need to build institutions and pass laws that give people affected by A.I. algorithms a voice over how those algorithms are designed, and what they aim to achieve. The first step is transparency.

Organizations that use A.I. should be required to disclose their objectives and what their algorithms are trying to maximize — whether that’s ad clicks on social media, hiring workers who won’t join unions or total deportation counts.

The second step is participation. The people whose data are used to train the algorithms — and whose lives are shaped by them — should help decide their goals. Like a jury of peers who hear a civil or criminal case and render a verdict together, we might create citizens’ assemblies where a randomly chosen, representative set of people deliberates and decides on appropriate goals for algorithms.

That could mean workers at a firm deliberating about the use of A.I. at their workplace, or a civic assembly that reviews the objectives of predictive policing tools before government agencies deploy them. These are the kinds of democratic checks that could align A.I. with the public good, not just private power.

The future of A.I. will not be decided by smarter algorithms or faster chips. It will depend on who controls the data — and whose values and interests guide the machines. If we want A.I. that serves the public, the public must decide what it serves.


