Lock and Code
By Malwarebytes
The podcast currently has 121 episodes available.
This month, a consumer rights group out of the UK posed a question to the public that they’d likely never considered: Were their air fryers spying on them?
By analyzing the associated Android apps for three separate air fryer models from three different companies, a group of researchers learned that these kitchen devices didn’t just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries—they also wanted a lot of user data, from precise location to voice recordings from a user’s phone.
“In the air fryer category, as well as knowing customers’ precise location, all three products wanted permission to record audio on the user’s phone, for no specified reason,” the group wrote in its findings.
While it may be easy to discount the data collection requests of an air fryer app, it is getting harder to buy any type of product today that doesn’t connect to the internet, request your data, or share that data with unknown companies and contractors across the world.
Today, on the Lock and Code podcast, host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. This includes kitchen appliances that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook.
These stories aren’t about mass government surveillance, and they’re not about spying, or the targeting of political dissidents. Their intrigue is elsewhere, in how common it is for what we say, where we go, and how we feel, to be collected and analyzed in ways we never anticipated.
Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
The US presidential election is upon the American public, and with it come fears of “election interference.”
But “election interference” is a broad term. It can mean the now-regular and expected foreign disinformation campaigns that are launched to sow political discord or to erode trust in American democracy. It can include domestic campaigns to disenfranchise voters in battleground states. And it can include the upsetting and increasing threats made to election officials and volunteers across the country.
But there’s an even broader category of election interference that is of particular importance to this podcast, and that’s cybersecurity.
Elections in the United States rely on a dizzying number of technologies. There are the voting machines themselves, there are electronic pollbooks that check voters in, and there are optical scanners that tabulate the votes that Americans actually cast when filling in an oval bubble with a pen or connecting an arrow with a solid line. And none of that is to mention the infrastructure that campaigns rely on every day to get information out—across websites, through emails, in text messages, and more.
That interlocking complexity is only multiplied when you remember that each individual state has its own way of complying with the federal government’s rules and standards for running an election. As Cait Conley, Senior Advisor to the Director of the US Cybersecurity and Infrastructure Security Agency (CISA), explains in today’s episode:
“There’s a common saying in the election space: If you’ve seen one state’s election, you’ve seen one state’s election.”
How, then, are elections secured in the United States, and what threats does CISA defend against?
Today, on the Lock and Code podcast with host David Ruiz, we speak with Conley about how CISA prepares and trains election officials and volunteers before the big day, whether or not an American’s vote can be “hacked,” and what the country is facing in the final days before an election, particularly from foreign adversaries that want to destabilize American trust.
“There’s a pretty good chance that you’re going to see Russia, Iran, or China try to claim that a distributed denial of service attack or a ransomware attack against a county is somehow going to impact the security or integrity of your vote. And it’s not true.”
Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
On the internet, you can be shown an online ad because of your age, your address, your purchase history, your politics, your religion, and even your likelihood of having cancer.
This is because of the largely unchecked “data broker” industry.
Data brokers are analytics and marketing companies that collect every conceivable data point that exists about you, packaging it all into profiles that other companies use when deciding who should see their advertisements.
Have a new mortgage? There are data brokers that collect that information and then sell it to advertisers who believe new homeowners are the perfect demographic to purchase, say, furniture, dining sets, or other home goods. Bought a new car? There are data brokers that collect all sorts of driving information directly from car manufacturers—including the direction you’re driving, your car’s gas tank status, its speed, and its location—because some unknown data model said somewhere that, perhaps, car drivers in certain states who are prone to speeding might be more likely to buy one type of product compared to another.
This is just a glimpse of what is happening to essentially every single adult who uses the Internet today.
So much of the information that people would never divulge to a stranger—like their addresses, phone numbers, criminal records, and mortgage payments—is collected away from view by thousands of data brokers. And while these companies know so much about people, the public at large knows very little about them in return.
Today, on the Lock and Code podcast with host David Ruiz, we speak with Cody Venzke, senior policy counsel with the ACLU, about how data brokers collect their information, what data points are off-limits (if any), and how people can protect their sensitive information, along with the harms that come from unchecked data broker activity—beyond just targeted advertising.
“We’re seeing data that’s been purchased from data brokers used to make decisions about who gets a house, who gets an employment opportunity, who is offered credit, who is considered for admission into a university.”
Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Online scammers were seen this August stooping to a new low—abusing local funerals to steal from bereaved family and friends.
Cybercrime has never been a job of morals (calling it a “job” is already lending it too much credit), but, for many years, scams wavered between clever and brusque. Take the “Nigerian prince” email scam, which has plagued victims for close to two decades. In it, would-be victims receive a mysterious, unwanted message from alleged royalty, and, in exchange for a little help in moving funds across international borders, are promised a handsome reward.
The scam was preposterous but effective—in fact, in 2019, CNBC reported that this very same “Nigerian prince” scam campaign resulted in $700,000 in losses for victims in the United States.
Since then, scams have evolved dramatically.
Cybercriminals today will send deceptive emails claiming to come from Netflix, or Google, or Uber, tricking victims into “resetting” their passwords. Cybercriminals will leverage global crises, like the COVID-19 pandemic, and send fraudulent requests for donations to nonprofits and hospital funds. And, time and again, cybercriminals will find a way to play on our emotions—be they fear, or urgency, or even affection—to lure us into unsafe places online.
This summer, Malwarebytes social media manager Zach Hinkle encountered one such scam, and it happened while attending a funeral for a friend. In a campaign that Malwarebytes Labs is calling the “Facebook funeral live stream scam,” attendees at real funerals are being tricked into potentially signing up for a “live stream” service of the funerals they just attended.
Today on the Lock and Code podcast with host David Ruiz, we speak with Hinkle and Malwarebytes security researcher Pieter Arntz about the Facebook funeral live stream scam, what potential victims have to watch out for, and how cybercriminals are targeting actual, grieving family members with such foul deceit. Hinkle also describes what he felt in the moment of trying to not only take the scam down, but to protect his friends from falling for it.
“You’re grieving… and you go through a service and you’re feeling all these emotions, and then the emotion you feel is anger because someone is trying to take advantage of friends and loved ones of somebody who has just died. That’s so appalling.”
Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
On August 15, the city of San Francisco launched an entirely new fight against the world of deepfake porn—it sued the websites that make the abusive material so easy to create.
“Deepfakes,” as they’re often called, are fake images and videos that utilize artificial intelligence to swap the face of one person onto the body of another. The technology went viral in the late 2010s, as independent film editors would swap the actors of one film for another—replacing, say, Michael J. Fox in Back to the Future with Tom Holland.
But soon after the technology’s debut, it began being used to create pornographic images of actresses, celebrities, and, more recently, everyday high schoolers and college students. Similar to the threat of “revenge porn,” in which abusive exes extort their past partners with the potential release of sexually explicit photos and videos, “deepfake porn” is sometimes used to tarnish someone’s reputation or to embarrass them amongst friends and family.
But deepfake porn is slightly different from the traditional understanding of “revenge porn” in that it can be created without any real relationship to the victim. Entire groups of strangers can take the image of one person and put it onto the body of a sex worker, or an adult film star, or another person who was filmed having sex or posing nude.
The technology to create deepfake porn is more accessible than ever, and it’s led to a global crisis for teenage girls.
In October of 2023, reportedly more than 30 girls at a high school in New Jersey had their likenesses used by classmates to make sexually explicit and pornographic deepfakes. In March of this year, two teenage boys were arrested in Miami, Florida, for allegedly creating deepfake nudes of male and female classmates who were between the ages of 12 and 13. And at the start of September, this month, the BBC reported that police in South Korea were investigating deepfake pornography rings at two major universities.
While individual schools and local police departments in the United States are tackling deepfake porn harassment as it arises—with suspensions, expulsions, and arrests—the process is slow and reactive.
Which is partly why San Francisco City Attorney David Chiu and his team took aim not at the individuals who create and spread deepfake porn, but at the websites that make it so easy to do so.
Today, on the Lock and Code podcast with host David Ruiz, we speak with San Francisco City Attorney David Chiu about his team’s lawsuit against 16 deepfake porn websites, the city’s history in protecting Californians, and the severity of abuse that these websites offer as a paid service.
“At least one of these websites specifically promotes the non-consensual nature of this. I’ll just quote: ‘Imagine wasting time taking her out on dates when you can just use website X to get her nudes.’”
Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
On August 24, at an airport just outside of Paris, a man named Pavel Durov was detained for questioning by French investigators. Just days later, the same man was charged with crimes related to the distribution of child pornography and illicit transactions, such as drug trafficking and fraud.
Durov is the CEO and founder of the messaging and communications app Telegram. Though Durov holds citizenship in France and the United Arab Emirates—where Telegram is based—he was born and lived for many years in Russia, where he started his first social media company, Vkontakte. The Facebook-esque platform gained popularity in Russia, attracting not just users but also the watchful eye of the government.
Following a prolonged battle over the control of Vkontakte—which included government demands to deliver user information and to shut down accounts that helped organize protests against Vladimir Putin in 2012—Durov eventually left the company and the country altogether.
But more than 10 years later, Durov has once again become a person of interest to a government, now facing several charges in France, where, though not in jail, he has been ordered to stay.
After Durov’s arrest, the X account for Telegram responded, saying:
“Telegram abides by EU laws, including the Digital Services Act—its moderation is within industry standards and constantly improving. Telegram’s CEO Pavel Durov has nothing to hide and travels frequently in Europe. It is absurd to claim that a platform or its owner are responsible for abuse of the platform.”
But how true is that?
In the United States, companies such as YouTube, X (formerly Twitter), and Facebook often respond themselves to violations of “copyright”—the protection that gets violated when a random user posts clips or full versions of movies, television shows, and music. The same companies get involved when certain types of harassment, hate speech, and violent threats are posted on public channels for users to see.
This work, called “content moderation,” is standard practice for many technology and social media platforms today, but there’s a chance that Durov’s arrest isn’t related to content moderation at all. Instead, it may be related to the things that Telegram users say in private to one another over end-to-end encrypted chats.
Today, on the Lock and Code podcast with host David Ruiz, we speak with Electronic Frontier Foundation Director of Cybersecurity Eva Galperin about Telegram, its features, and whether Durov’s arrest is an escalation of content moderation gone wrong or the latest skirmish in government efforts to break end-to-end encryption.
“Chances are that these are requests around content that Telegram can see, but if [the requests] touch end-to-end encrypted content, then I have to flip tables.”
Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Every age group uses the internet a little bit differently, and it turns out that, for at least one Gen Z teen in the Bay Area, the classic approach to cybersecurity—defending against viruses, ransomware, worms, and more—is the least of her concerns. Of far more importance is Artificial Intelligence (AI).
Today, the Lock and Code podcast with host David Ruiz revisits a prior episode from 2023 about what teenagers fear the most about going online. The conversation is a strong reminder that what America’s youngest generations experience online is far from the same experience that Millennials, Gen X’ers, and Baby Boomers had with their own introduction to the internet.
Even stronger proof of this is found in recent research that Malwarebytes debuted this summer about how people in committed relationships share their locations, passwords, and devices with one another. As detailed in the larger report, “What’s mine is yours: How couples share an all-access pass to their digital lives,” Gen Z respondents were the most likely to say that they got a feeling of safety when sharing their locations with significant others.
But a wrinkle appeared in that behavior, according to the same research: Gen Z was also the most likely to say that they only shared their locations because their partners forced them to do so.
In our full conversation from last year, we speak with Nitya Sharma about why her “favorite app” to use with friends is “Find My” on iPhone, what the dangers of AI “sneak attacks” are, and why she simply cannot be bothered about malware.
“I know that there’s a threat of sharing information with bad people and then abusing it, but I just don’t know what you would do with it. Show up to my house and try to kill me?”
Tune in today to listen to the full conversation.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Somewhere out there is a romantic AI chatbot that wants to know everything about you. But in a revealing overlap, other AI tools—which are developed and popularized by far larger companies in technology—could crave the very same thing.
For AI tools of any type, our data is key.
In the nearly two years since OpenAI unveiled ChatGPT to the public, the biggest names in technology have raced to compete. Meta announced Llama. Google revealed Gemini. And Microsoft debuted Copilot.
All these AI features function in similar ways: After having been trained on mountains of text, videos, images, and more, these tools answer users’ questions in immediate and contextually relevant ways. Perhaps that means taking a popular recipe and making it vegetarian friendly. Or maybe that involves developing a workout routine for someone who is recovering from a new knee injury.
Whatever the ask, the more data that an AI tool has already digested, the better it can deliver answers.
Interestingly, romantic AI chatbots operate in almost the same way: the more information a user gives about themselves, the more intimate and personal the AI chatbot’s responses can appear.
But where any part of our online world demands more data, questions around privacy arise.
Today, on the Lock and Code podcast with host David Ruiz, we speak with Zoë MacDonald, content creator for Privacy Not Included at Mozilla, about romantic AI tools and how users can protect their privacy from ChatGPT and other AI chatbots.
When in doubt, MacDonald said, stick to a simple rule:
“I would suggest that people don’t share their personal information with an AI chatbot.”
Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
In the world of business cybersecurity, the powerful technology known as “Security Information and Event Management” is sometimes thwarted by the most unexpected actors—the very people setting it up.
Security Information and Event Management—or SIEM—is a term used to describe data-collecting products that businesses rely on to make sense of everything going on inside their network, in the hopes of catching and stopping cyberattacks. SIEM systems can log events and information across an entire organization and its networks. When properly set up, SIEMs can collect activity data from work-issued devices, vital servers, and even the software that an organization rolls out to its workforce. The purpose of all this collection is to catch what might easily be missed.
For instance, SIEMs can collect information about repeated login attempts occurring at 2:00 am from a set of login credentials that belong to an employee who doesn’t typically start their day until 8:00 am. SIEMs can also flag when the login credentials of an employee with typically low access privileges are used to attempt to log into security systems far beyond their job scope. SIEMs can also take in data from an Endpoint Detection and Response (EDR) tool, and they can hoover up nearly anything else a security team wants—from printer logs, to firewall logs, to individual uses of PowerShell.
But just because a SIEM can collect something doesn’t necessarily mean that it should.
Log activity for an organization of 1,000 employees is tremendous, and collecting every frequent event could bog down a SIEM with noise, slow a security team down with useless data, and rack up serious expenses for a company.
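To make the first example concrete, here is a minimal sketch, in Python, of the kind of off-hours login rule a SIEM might evaluate over the events it collects. The event shape, field names, user baseline, and four-hour threshold are all hypothetical illustrations, not the schema or detection logic of any particular SIEM product.

```python
# A minimal sketch of an off-hours login rule, the kind of check a SIEM
# might run over collected login events. The event shape, field names,
# baseline, and threshold below are hypothetical, not any specific
# SIEM product's schema.
from datetime import datetime

# Hypothetical per-user baseline: the hour each employee typically
# starts their day (24-hour clock).
TYPICAL_START_HOUR = {"asmith": 8}

def is_suspicious_login(event: dict) -> bool:
    """Flag login attempts that occur well before a user's normal start time."""
    ts = datetime.fromisoformat(event["timestamp"])
    start = TYPICAL_START_HOUR.get(event["user"])
    if start is None:
        return False  # no baseline for this user yet; nothing to compare against
    # A 2:00 am attempt for someone who starts at 8:00 am lands well
    # outside the (arbitrary) four-hour grace window and gets flagged.
    return ts.hour < start - 4

events = [
    {"user": "asmith", "timestamp": "2024-10-01T02:00:00"},  # flagged
    {"user": "asmith", "timestamp": "2024-10-01T08:05:00"},  # normal
]
for e in events:
    if is_suspicious_login(e):
        print(f"ALERT: off-hours login by {e['user']} at {e['timestamp']}")
```

Real SIEMs express rules like this in their own query languages (Splunk’s SPL or Microsoft Sentinel’s KQL, for example), but the underlying pattern is the same: baseline normal behavior, then alert on deviations from it.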
Today, on the Lock and Code podcast with host David Ruiz, we speak with Microsoft cloud solution architect Jess Dodson about how companies and organizations can set up, manage, and maintain their SIEMs, along with which marketing pitfalls to avoid when shopping for one. Plus, Dodson warns about one of the simplest mistakes in trying to save budget—setting up arbitrary data caps on collection that could leave an organization blind.
“A small SMB organization … were trying to save costs, so they went and looked at what they were collecting and they found their biggest ingestion point,” Dodson said. “And what their biggest ingestion point was was their Windows security events, and then they looked further and looked for the event IDs that were costing them the most, and so they got rid of those.”
Dodson continued:
“Problem was the ones they got rid of were their Log On/Log Off events, which I think most people would agree is kind of important from a security perspective.”
Tune in today to listen to the full conversation.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Full-time software engineer and part-time Twitch streamer Ali Diamond is used to seeing herself on screen, probably because she’s the one who turns the camera on.
But when Diamond received a Direct Message (DM) on Twitter earlier this year, she learned that her likeness had been recreated across a sample of AI-generated images, entirely without her consent.
On the AI art sharing platform Civitai, Diamond discovered that a stranger had created an “AI image model” fashioned after her. The model was available for download so that, conceivably, other members of the community could generate their own images of Diamond—or, at least, the AI version of her. To show just what the AI model was capable of, its creator shared a few examples of what he’d made: There was AI Diamond standing at what looked like a music festival, AI Diamond with her head tilted up and smiling, and AI Diamond wearing what the real Diamond would later describe as an “ugly ass ****ing hat.”
AI image generation is seemingly lawless right now.
Popular AI image generators, like Stable Diffusion, Dall-E, and Midjourney, have faced valid criticisms from human artists that these generators are copying their labor to output derivative works, a sort of AI plagiarism. AI image moderation, on the other hand, has posed a problem not only for AI art communities, but for major social media networks, too, as anyone can seemingly create AI-generated images of someone else—without that person’s consent—and distribute those images online. It happened earlier this year when AI-generated, sexually explicit images of Taylor Swift were seen by millions of people on Twitter before the company took those images down.
In that instance, Swift had the support of countless fans who reported each post they found on Twitter that shared the images.
But what happens when someone has to defend themselves against an AI model made of their likeness, without their consent?
Today, on the Lock and Code podcast with host David Ruiz, we speak with Ali Diamond about finding an AI model of herself, what the creator had to say about making the model, and what the privacy and security implications are for everyday people whose likenesses have been stolen against their will.
For Diamond, the experience was unwelcome and new, as she’d never experimented using AI image generation on herself.
“I’ve never put my face into any of those AI services. As someone who has a love of cybersecurity and an interest in it… you’re collecting faces to do what?”
Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.