On a late hospital shift, an exhausted nurse looked up and locked eyes with someone she never expected to see—a former patient with a history of aggression. He wasn’t supposed to be there. A restraining order was in place. But security had no idea.
This isn’t about surveillance—it’s about prevention. AI for good means identifying real threats before harm happens. "When you think about security, AI, biometrics, and behavior detection, a lot of red flags may go up," said Ben Thomas, host of Pro AV Today. A longtime technology analyst, Ben has seen how misinformation distorts public perception. "And a lot of times, those red flags go up because there's a lot of miseducation out there."
In our latest episode of Innovation Obsessed, Ben Thomas and Shawn Fontenot, VP of Global Marketing at Oosto, tackle the biggest misconceptions about AI security. They explore how misinformation stalls progress and why AI, when done right, is a force for good—protecting people before it’s too late.
Responsible AI Considerations for Security
Responsible AI narrows the focus to specific known threats.
"We’re not looking for everyone," said Shawn. "In fact, when we start with a customer, we start with an empty database. We don’t scrape data from the internet. We don’t track people’s movements. We don’t even know who you are unless you’re on a security watchlist—like a known bad actor, a banned individual, or someone with a restraining order."
With Oosto, customers populate and control watchlists and create rules to uncover safety and security threats. The system completely ignores individuals not on those watchlists.
Real-World Example: AI in Highly Regulated Industries
Casinos, some of the most heavily regulated security environments in the world, rely on facial recognition for targeted threat detection and for identifying self-excluders: people who want help in breaking their addiction to gambling.
"Casinos aren’t just sending out a squad of people when an alert pops up," explained Shawn. "They go through multiple layers of verification before taking action. It’s about identifying repeat offenders, fraudsters, and those banned from the premises—not tracking guests."
This model is used in airports, hospitals, and critical infrastructure where security matters most. Instead of relying on human guards to remember hundreds or thousands of faces—a process prone to error—AI helps ensure only pre-identified threats are flagged.
Addressing Bias and Accuracy
AI is only as good as the data that built it, and bias arises when algorithms are trained on limited or unrepresentative datasets.
"Some applications of facial recognition have given the industry a bad name," said Shawn. "But the truth is, most people we’re looking for aren’t standing still, looking at a camera. They’re in motion, in crowds, in bad lighting. Our AI is trained for those conditions—not just perfect, static images."
In this episode, Ben and Shawn discuss what makes ethical AI security different:
"Our models are constantly being refined and tested in the real world, which is why we continue to meet and exceed regulatory standards," Shawn explained.
Privacy Considerations
"I always tell folks, you watch CSI and they keep saying 'enhance, enhance' until they zoom in all the way to a credit card number. That’s just not how it works," Ben said with a laugh.
A major misconception about AI security is that it collects, stores, and tracks personal data. Ethical AI systems do not operate that way.
"We don’t even store personal data. When a face is scanned, it’s converted into a mathematical vector—there’s nothing to steal," explained Shawn.
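The vector matching Shawn describes can be sketched roughly as follows. This is a simplified illustration, not Oosto's actual pipeline: the embedding values, the similarity threshold, and the use of cosine similarity are all assumptions made for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def check_against_watchlist(face_vector, watchlist, threshold=0.8):
    """Compare a detected face's embedding against watchlist embeddings only.

    Faces that match no watchlist entry are discarded; no raw image or
    identity is retained for them.
    """
    for name, enrolled_vector in watchlist.items():
        if cosine_similarity(face_vector, enrolled_vector) >= threshold:
            return name  # alert security about this known individual
    return None  # not on the watchlist: ignore and drop the data

# Hypothetical watchlist: it starts empty and is populated only with
# individuals the customer has flagged (e.g. a restraining-order subject).
watchlist = {"restrained_individual": [0.9, 0.1, 0.3]}
check_against_watchlist([0.88, 0.12, 0.29], watchlist)  # close match: alert
check_against_watchlist([-0.5, 0.9, 0.1], watchlist)    # no match: ignored
```

The point of the design is visible in the code: the only stored artifact is a list of numbers per enrolled individual, and anyone not enrolled produces no record at all.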
They discuss how responsible AI security follows strict privacy-first principles.
One of the most practical applications of AI security today is in hospitals, where staff—including nurses and doctors—face growing threats of violence.
"If you’re a nurse, and there’s someone you have a restraining order against, you shouldn’t have to worry about whether security will recognize them at the door," said Shawn.
Instead of relying on human memory and manual monitoring, AI can alert security the moment a flagged individual enters the facility—giving staff time to take preventative action before an incident occurs.
AI Security: A Force for Good
The conversation around AI security shouldn’t be framed as a battle between privacy and protection—it should be about how we use technology to create safer environments while upholding ethical standards.
As Shawn focuses on in the episode, Oosto’s mission is to harness Vision AI for Good—a commitment to using artificial intelligence responsibly to protect people, businesses, and communities without compromising privacy.
Ethical AI security is about proactive protection, not surveillance. It’s about ensuring healthcare workers, retail employees, university students, corporate staff, and the public are safeguarded from known threats—without indiscriminate monitoring or data misuse.
By prioritizing transparency, responsible deployment, and privacy-first design, AI security becomes an essential tool for preventing harm before it happens.
With the right approach, security and privacy don’t have to be at odds. We can, and must, have both.
By Oosto