InTechnology

WTM: Artificial Intelligence


In our first-ever episode of What That Means — the CliffsNotes companion to the Cybersecurity Inside podcast — Camille tasks Rita Wouhaybi, principal engineer for industrial solutions in the IoT group at Intel, with defining artificial intelligence in under three minutes.

(Spoiler: she nails it.)

Plus, Camille and Rita cover:

- The Turing Test + how we measure intelligence in a computer or machine
- Explainable AI/Biases in learning
- The questions we should be asking as consumers and/or implementers of AI
- Deciding what AI techniques to use and what to use them for
- The confidence levels of AI
- The one thing to keep in mind about AI
- Why AI is not going to solve all our problems
- What AI competition is doing for the industry

Check it out!

Here are some key takeaways:

  • If you feed AI bias, it’s going to spit out bias. 
  • AI is not definitive. Every answer that AI gives you is going to have a confidence level.
  • AI is not going to solve all your problems. So pick the problem that makes the most sense.


Some interesting quotes from today’s episode:

“It’s based on some cognitive ideas, where you see information, or actually you see more like data, raw data, and you distill information out of it. And as humans, as well as animals, we do that all the time. So it’s the idea of creating a computer program that is capable of doing it.”

“I would even argue that to a large extent, when you have a child growing in a biased environment, that child will be biased as a child. And it’s going to take them to go out of that environment and expand their horizon — either through reading or experiencing other individuals — to widen that scope and get rid of that bias and reexamine it. And I think that could happen in AI, too.” 
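
Rita’s point about biased environments maps directly onto training data. As a minimal sketch (the groups, labels, and counts below are invented for illustration, not anything from the episode), even the simplest possible learner, one that just memorizes the majority outcome per group, will reproduce a skewed history until it is retrained on wider data:

```python
# A toy model that memorizes the majority label per group will reproduce
# whatever skew exists in its training data. (Illustrative data only.)

from collections import Counter, defaultdict

def train(examples):
    """Learn the majority label for each group by simple counting."""
    counts = defaultdict(Counter)
    for group, label in examples:
        counts[group][label] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

# Hypothetical biased history: group B was mostly rejected.
biased_history = ([("A", "hire")] * 8 + [("A", "reject")] * 2 +
                  [("B", "hire")] * 2 + [("B", "reject")] * 8)
print(train(biased_history))   # {'A': 'hire', 'B': 'reject'}

# "Widening the scope": retrain on more representative data and the
# learned rule changes, just as Rita suggests it can for AI.
wider_history = ([("A", "hire")] * 6 + [("A", "reject")] * 4 +
                 [("B", "hire")] * 6 + [("B", "reject")] * 4)
print(train(wider_history))    # {'A': 'hire', 'B': 'hire'}
```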

 

“AI is never 100% sure. The trick is, where is your tolerance? Do you want AI to make sure that if it sees something bad, to tell you about it, with the assumption that some of those might actually be good? Or the opposite? Which one matters more? So, if you are a medical doctor, would you rather have an AI that says, ‘Oh, I think this one has lung cancer’ higher and ask for further testing, or miss a few lung cancer diagnoses? Where do you want that error to wiggle? Do you want it to wiggle on crying wolf? Or do you want it to be very conservative and miss some diagnoses? Those are very important questions.”
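
That “where do you want the error to wiggle” question is, in machine-learning terms, a decision-threshold choice: the model emits a confidence score, and where you draw the line trades false alarms against misses. A minimal sketch (the scores and labels are invented for illustration):

```python
# A classifier emits a confidence score per case; the threshold you pick
# decides whether errors lean toward false alarms or missed cases.
# (Scores and labels below are invented for illustration.)

# (confidence that the scan shows cancer, whether cancer is really present)
predictions = [
    (0.95, True), (0.80, False), (0.65, True),
    (0.40, False), (0.30, True), (0.10, False),
]

def errors_at(threshold):
    """Count false alarms (crying wolf) and misses at a given threshold."""
    false_alarms = sum(1 for score, sick in predictions
                       if score >= threshold and not sick)
    misses = sum(1 for score, sick in predictions
                 if score < threshold and sick)
    return false_alarms, misses

# A low threshold flags aggressively: no missed cases, more false alarms.
print(errors_at(0.25))  # (2, 0)
# A high threshold is conservative: fewer false alarms, more missed cases.
print(errors_at(0.75))  # (1, 2)
```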
