The Cyberlaw Podcast

Does the government need a warrant to warn me about a cyberattack?

05.02.2023 - By Stewart Baker


We open this episode of the Cyberlaw Podcast with some actual news about the debate over renewing section 702 of FISA. That’s the law that allows the government to target foreigners for a national security purpose and to intercept their communications in and out of the U.S. A lot of attention has been focused on what happens to those communications after they’ve been intercepted and stored, and particularly on whether the FBI should get a second court authorization—maybe even a warrant based on probable cause—to search for records about an American. Michael J. Ellis reports that the Office of the Director of National Intelligence has released new data on such FBI searches. It turns out they’ve dropped from almost 3 million last year to about 120 thousand this year, in large part reflecting the tougher restrictions the FBI has imposed on such searches. Those restrictions were also made public this week. It has also emerged that the government is using section 702 millions of times a year to identify the victims of cyberattacks (makes sense: foreign hackers are often a national security concern, and their whole business model is to use U.S. infrastructure to communicate [in a very special way] with U.S. networks). So it turns out that all those civil libertarians who want to make it hard for the government to search 702 for the names of Americans are proposing ways to slow down and complicate the process of warning hacking victims. Thanks a bunch, folks!

Justin Sherman covers China’s push to attack and even take over enemy (U.S.) satellites. This story is apparently drawn from the Discord leaks, and it has the ring of truth. I opine that the Defense Department has gotten a little too comfortable waging war against people who don’t really have an army, and that the Ukraine conflict shows how much tougher things get when there’s an organized military on the other side. (Again, credit for our artwork goes to Bing Image Creator.)
Adam Candeub flags the next Supreme Court case to nibble away at the problem of social media and the law. We can look forward to an argument next year about the constitutionality of public officials blocking people who post mean comments on the officials’ Facebook pages.

Justin and I break down a story about whether Twitter is complying with more government demands under Elon Musk. The short answer is yes. This leads me to ask why we expect social media companies to spend large sums fighting government takedown and surveillance requests when it’s much cheaper just to comply. So far, the answer has been that mainstream media and Good People Everywhere will criticize companies that don’t fight. But with criticism of Elon Musk’s Twitter already turned up to 11, that’s not likely to persuade him.

Adam and I are impressed by Citizen Lab’s report on search censorship in China. We’d both kind of like to see Citizen Lab do the same thing for U.S. censorship, which somehow gets less transparency. If you suspect that’s because there’s more censorship than U.S. companies want to admit, here’s a straw in the wind: Citizen Lab reports that the one American company still providing search services in China, Microsoft Bing, is actually more aggressive about stifling political speech than China’s main search engine, Baidu. This fits with my discovery that Bing’s Image Creator refused to construct an image using Taiwan’s flag. (It was OK using U.S. and German flags, but not China’s.) I also credit Microsoft for fixing that particular bit of overreach: You can now create images with both Taiwanese and Chinese flags.

Adam covers the EU’s enthusiasm for regulating other countries’ companies. It has designated 19 tech giants as subject to its online content rules. Of the 19, one is a European company, and two are Chinese (counting TikTok). The rest are American companies.
I cover a case that I think could be a big problem for the Biden administration as it ramps up its campaign for cybersecurity regulation. Iowa and a couple of other states are suing to block the Environmental Protection Agency’s legally questionable effort to impose cybersecurity requirements on public water systems by “interpreting” those requirements into a law that doesn’t say much about cybersecurity and never imposed them before.

Michael Ellis and I cover the story detailing a former NSA director’s business ties to Saudi Arabia—and expand it to confess our unease at the number of generals and admirals moving from command of U.S. forces to consulting gigs with the countries they were just negotiating with. Recent restrictions on the revolving door for intelligence officers get a mention.

Adam covers the Quebec decision awarding $500 thousand to a man who couldn’t get Google to consistently delete a false story portraying him as a pedophile and con man.

Justin and I debate whether Meta’s Reels feature has what it takes to be a plausible TikTok competitor. Justin is skeptical. I’m a little less so. Meta’s claims about the success of Reels aren’t entirely persuasive, but perhaps it’s too early to tell.

The D.C. Circuit has killed off the state antitrust case trying to undo Meta’s long-ago acquisition of WhatsApp and Instagram. The states waited too long, the court held. That doctrine doesn’t apply the same way to the Federal Trade Commission (FTC), which will get to pursue a lonely battle against long odds for years. If the FTC is going to keep sending its lawyers into battle like conscripts in Bakhmut, I ask, when will the commission start recruiting in Russian prisons?

That was fast. Adam tells us that the Brazil court order banning Telegram because it wouldn’t turn over information on neo-Nazi groups has been overturned on appeal. But Telegram isn’t out of the woods. The appeals court left in place fines of $200 thousand a day for noncompliance.
And in another regulatory walkback, Italy’s privacy watchdog is letting ChatGPT back into the country. I suspect the Italian government of cutting a deal to save face as it abandons its initial position on ChatGPT’s scraping of public data to train the model.

Finally, in policies I wish they would walk back, four U.S. regulatory agencies claimed (plausibly) that they had authority to bring bias claims against companies using AI in a discriminatory fashion. Since I don’t see any way to bring those claims without arguing that any deviation from proportional representation constitutes discrimination, this feels like a surreptitious introduction of quotas into several new parts of the economy, just as the Supreme Court seems poised to cast doubt on such quotas in higher education.

Download 455th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!

The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

More episodes from The Cyberlaw Podcast