Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Information warfare historically revolved around human conduits, published by trevor on August 29, 2023 on LessWrong.
Epistemic status: This is a central gear in any model of propaganda, public opinion, information warfare, PR, hybrid warfare, and any other adversarial information environment. Due to fundamental mathematical regularities, such as power-law distributions, not all targets in the battlespace are created equal.
Generally, when people think about propaganda, censorship, public opinion, PR, and social psychology, they tend to think of the human receivers/viewers/readers/listeners as each being the ultimate goal: the message is accepted, ignored, or rejected, consciously or subconsciously, and each person either gains your side a single point or loses you one.
This is actually a bad model of large-scale influence. People with better models here are usually more inclined to use phrases like "steering public opinion", because the actual goal is to create human conduits: people who internalize the message and spread it to their friends in personal conversations.
Messages that spread in that way are personalized influence by default, even when initiated by early-20th-century technology like radio or propaganda posters. They are also perceived as far more trustworthy when heard directly from the mouth of a friend than from one-to-many communication like broadcasting or posters, which were unambiguously spread by at least one elite with the resources to facilitate that, someone you are not allowed to meet, even if they seem like a person just like you (although idiots might still fall for the "I'm an idiot like you" persona of figures such as Donald Trump, Tucker Carlson, and particularly Alex Jones). Propaganda posters fell out of style entirely because it was plain as day that they were there to influence you; radio and television survived, including as tools of state power, because they did not stick out so badly.
Censorship, on the other hand, prevents this dynamic from emerging in the first place, which likely helps explain why censorship is so widely accepted or tolerated by elites in authoritarian countries.
The battlespace
It's important to note that not all human conduits are created equal. This dynamic ultimately results in intelligent minds not merely relaying the message, but also using their human intelligence to add additional optimization power to the message's spread. On social media, this creates humans who write substantially smarter, more charismatic, eloquent, and persuasive one-liner posts and comments than those producible by last-generation LLMs.
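To make the "not all conduits are created equal" point concrete, here is a minimal, purely illustrative sketch. It assumes (hypothetically) that an account's reach follows a heavy-tailed Pareto distribution with shape parameter alpha = 1.2; the specific numbers are assumptions for illustration, not data from the post.

```python
import random

random.seed(0)

# Hypothetical model: each of 100,000 accounts has a "reach"
# (e.g. audience size) drawn from a heavy-tailed Pareto distribution.
# alpha = 1.2 is an assumed shape parameter, chosen only to illustrate
# a heavy tail; it is not an empirical estimate.
N = 100_000
reach = sorted((random.paretovariate(1.2) for _ in range(N)), reverse=True)

total = sum(reach)
top_1_percent = sum(reach[: N // 100])
share = top_1_percent / total

# Under a heavy-tailed distribution, the top 1% of accounts
# command a disproportionate share of total reach.
print(f"Top 1% of accounts hold {share:.0%} of total reach")
```

Under these assumptions the top 1% of accounts typically end up with a large fraction of the total reach, which is why an influence campaign that identifies and converts a few high-value conduits can outperform one that treats every individual as an equally weighted point.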
Furthermore, as an idea becomes more mainstream (and/or as the Overton window shrinks for rejecting the message), the conduits optimizing for the message's spread not only grow in number but also come to include smarter speakers and writers. The NYT claims that Russian intelligence agencies deliberately send agents to the West to facilitate this process, for example by recruiting smart young people and creating grassroots movements that serve Russian interests; I've also previously had conversations with people who made strong arguments for and against the case that a large portion of the Vietnam antiwar movement was caused by Soviet agents orchestrating grassroots movements in person in an almost identical way.
Slow takeoff
Social media and botnets, particularly when acting autonomously due to increasingly powerful integrated AI, might be capable of navigating the information space as a battlespace: autonomously locating and focusing processing power on high-value targets who are predicted to be the best at adding a human touch to any particular message; a human touch that current or future LLMs might not be able to reliably generate on their own.
The technologic...