
In this episode of the Voices from DARPA podcast, Bruce Draper, a program manager since 2019 in the agency's Information Innovation Office, explains how his fascination with the ways people reason, think, and come to believe what they believe steered him into a lifelong embrace of computer science and artificial intelligence (AI) research. At DARPA, Draper—who says he welcomes working at a place where an academic scientist like himself can influence the direction of entire fields of research—oversees a portfolio of programs that collectively aim to make artificial intelligence learn faster, commit fewer mistakes and flawed inferences, and resist misuse and deception. One of his programs aims to imbue computers with nonverbal communication abilities so that AIs collaborating with people can integrate a human being's facial and gestural cues with written and oral ones. Another program seeks to make machine-learning algorithms quicker studies that require smaller, simpler data sets to learn how to identify objects, actions, and other categories of phenomena. Two of Draper's programs fall into the category of "adversarial AI," in which, for example, those with ill intent might try to deceive an AI with "poisoned data" that could lead to inappropriate inferences and actions. Yet another program, a new one, aims to develop AIs that can serve as competent guides for people in the midst of tasks, say, fixing the brakes on a military aircraft or preparing tiramisu for a dinner party. "It's sort of the do-it-yourself revolution on steroids," says Draper. AI holds exciting possibilities, he adds, but it will take close attention to privacy concerns, built-in biases, and other hidden perils for AI to become the technology we want it to be for us all.