While Instagram's parent company, Meta, battles the government in court over antitrust claims, they're also battling to keep your kids safe online.
Or so they say. But can we believe them?
That's the question I asked former CNET editor and current CBS News tech contributor Ian Sherr about Instagram's announcement this week that they're rolling out a new AI tool designed to protect young teens from harmful content by detecting whether users are lying about their age.
But what they're not doing is requiring parental consent for children under the age of 16 to use the platform, which is what the State of Ohio wanted when legislators passed the Social Media Parental Notification Act last year.
That law, however, was struck down just last week, when a federal judge ruled it unconstitutional because it violated First Amendment free speech protections for both children and social media companies.
The ruling came in a lawsuit filed by NetChoice, a trade group representing Google, Meta, Snapchat, X, and other major tech companies.
But Ohio is not alone in trying to pass laws to protect children from harm online. A number of other states, including Texas, Louisiana, Utah, Tennessee, Georgia, and Florida, have crafted similar laws. Those laws, however, are also facing legal challenges.
So if laws aren't working (so far), and AI is at this point more of a promise than proven protection, what can parents do to ensure their kids' safety on social media, where it has been proven time and time again that children are often the targets of cyberbullies, scammers, pedophiles, sextortionists, and all manner of other online predators?
Find out. Listen now.