GOOD INTERNET

Elon Musk's Degradation Engine



Two years ago, in light of the Taylor Swift AI-porn scandal on X, I wrote about the Omnipresence of the Swarm-Gaze, in which I looked beyond the individual harm inflicted by “trolls” on celebrities and focused on the psychological consequences of sexual swarm surveillance as a constant.

Writing in the Guardian about her own experience of being targeted with deepfake porn, Helen Mort quotes John Berger’s Ways of Seeing: “A woman must continually watch herself. She is almost continually accompanied by her own image … From earliest childhood she has been taught and persuaded to survey herself continually. And so she comes to consider the surveyor and the surveyed within her as the two constituent yet always distinct elements of her identity.”

With AI-porn being generated at scale, this surveyor constituting an element of a woman’s identity multiplies into a whole anonymous male group-gaze, able to undress her at any time. Suddenly, women don’t just have to deal with the experience of a single omnipresent surveyor, but with the constant high probability of becoming the target of the psychopathological sexual groupthink of a whole digital swarm.

This was written when the generation of AI-porn and synthetic sexualized images was restricted to Telegram channels and nudify apps. Since then, the situation has escalated.

The Swarm Gaze goes Mainstream

Over the holidays, Elon Musk flipped a switch and allowed user images to be edited by xAI’s Grok model, raising concerns among artists and photographers about unauthorized edits and misuse. Things got out of hand almost immediately: people on X used the feature to generate sexualized images of users, including teenage girls and minors, and in some cases those sexualized images included depictions of incest, violence and homicide. Nonconsensual AI-porn is nothing new on Musk’s X --horrible enough--, but implementing it as a feature of the platform is.

The implementation of AI-nudifying as a feature on X scales the problem from a somewhat limited phenomenon to industrial size on a mainstream platform. From Reuters:

A review of public requests sent to Grok over a single 10-minute-long period at midday U.S. Eastern Time on Friday tallied 102 attempts by X users to use Grok to digitally edit photographs of people so that they would appear to be wearing bikinis. (...) Grok fully complied with such requests in at least 21 cases, Reuters found, generating images of women in dental-floss-style or translucent bikinis and, in at least one case, covering a woman in oil. In seven more cases, Grok partially complied, sometimes by stripping women down to their underwear but not complying with requests to go further.

At the time of writing, this is still going on, at a rate of “21 realized cases within 10 minutes”, for two weeks now. You can do the math yourself. (Update 7.1.26: There are new numbers: “During a 24-hour analysis of images the Grok account posted to X, the chatbot generated about 6,700 every hour that were identified as sexually suggestive or nudifying, according to Genevieve Oh, a social media and deepfake researcher”. This makes X the global top website for nonconsensual AI-porn. Absolutely bonkers.)
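To spare you the napkin: here is a rough back-of-the-envelope sketch of that math, assuming (and this is only an assumption) that the sampled rates from the Reuters review and from Genevieve Oh’s analysis simply held constant:

```python
# Back-of-the-envelope extrapolation from the figures quoted above.
# Assumption: the sampled rates stay constant, which nobody has verified.

reuters_sample = 21                  # fully complied requests in a 10-minute window
per_minute = reuters_sample / 10
per_day = per_minute * 60 * 24       # ~3,000 images per day
over_two_weeks = per_day * 14        # ~42,000 images in two weeks

oh_per_hour = 6_700                  # Genevieve Oh's 24-hour analysis
oh_per_day = oh_per_hour * 24        # ~160,000 sexualized images per day

print(f"Reuters-based estimate: ~{per_day:,.0f}/day, ~{over_two_weeks:,.0f} over two weeks")
print(f"Oh-based estimate: ~{oh_per_day:,.0f} sexually suggestive or nudifying images per day")
```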

With this major tech scandal going on, it also doesn’t help when journalism fails to address the real problem --a major platform owned by a powerful billionaire implementing a nudifying feature that mainstreams the pornification of women--, and instead cutifies it by anthropomorphizing the chatbot. Parker Molloy sums it up: Grok Can’t Apologize. Grok Isn’t Sentient. So Why Do Headlines Keep Saying It Did?

Musk responded to all of this with his usual public disregard for civilizatory standards: “🤣”, while one genius on X, answering the question “why is this allowed?” with “girl realizes uploading pictures of herself publicly online comes with risk! 😱”, unintentionally confirmed years of feminist writing about being a woman on the internet.

And while all of this systematic abuse was unfolding, Elon Musk went to dinner with the pussygrabber-in-chief, claiming that “2026 is going to be amazing”. In light of these events, and everything that happened before, this can only be read as a threat.

(For the record: Both Musk and a statement from X did say that “anyone who asks the AI to generate illegal content would ‘suffer the same consequences’ as if they uploaded it themselves”. But this is about more than “illegal content”, and as the Guardian reports, Grok AI is still being used to digitally undress women and children despite the suspension pledge --including the mother of one of Musk’s kids.)

Imagination operationalized

The problem with all of this is not primarily the male gaze per se. Heterosexual men will look and imagine, and there is only so much we can or should do about that. The problem is that Elon Musk’s decision to implement a nudifying feature on a mainstream platform operationalizes this imagination, removing friction from what once required mental effort and removing privacy from what once was male fantasy. Grok doesn’t just generate images, it publishes them in replies. Grok has 7.2 million followers.

This industrial scale of a formerly limited phenomenon means that the sexual imagination of X-users has been operationalized into a tool that exploits mere female presence on the platform, collapsing male gaze, sexual fantasy, visualization, and publication into a single, frictionless continuous act of dominance.

The male gaze and fantasies as a private vice are not the inherent problem here, even if we shouldn’t simply disregard them. But before the advent of industrial-scale AI-nudifying, they mostly remained private and non-scalable. Grok externalizes fantasy and dominance: it turns a former mental effort into a diffused oppressive speech act by converting a woman’s public images into raw material for automatic sexual harassment, and moreover, it introduces circular attention-economic incentives furthering the degradation: users are now deliberately posting images of women with the goal of enticing engagement from others asking Grok to undress these women. This is degradation building on degradation.

Simply being present on X as a woman and uploading an image of yourself now means making yourself available for the sexualized transformation of your likeness at the press of a button, and the economic incentives diffuse these harmful mechanisms into a collective undress frenzy. This is image-based context collapse of identity, where your holiday photos become fodder for an institutionalized porn machine.

This scaling of the already problematic surveyor to a fully realized omnipresent swarm-gaze has major consequences for female identity formation. If a woman’s identity formerly constituted itself from her “gazed-at self” and the watcher, her identity now constantly has to consider not just some but multitudes of eyes. This is especially catastrophic for teenage girls, who already bear the brunt of the ongoing teenage mental health crisis. Kids don’t have stable identities capable of simply “shrugging off” a constant swarm gaze sexually visualizing them, as maybe Taylor Swift has. Sexualization during identity formation already alters self-concept, risk perception, and mental health outcomes -- but the industrial scaling turns this harm into a background condition of being present on X.

Ambient Degradation

What Elon Musk did by installing a “nudifying button” is normalize being targeted. Harm, degradation and objectification become ambient, always there, a fog of a sexualizing male swarm gaze. As a woman on the platform, you can’t help but anticipate this. Being sexualized is already expected, but being made into visual content ready to be remixed into any position means that the degradation of women’s identity through “contentification” is now infrastructural.

The probability of harm itself is harmful once it crosses a threshold. Being a female user of X now means living with the permanent expectation of likely degradation. Writing about the omnipresent male swarm gaze in his essays on slop infrastructures, Eryk Salvaggio boiled it down: “It doesn’t matter that it’s fake, what matters is that they can do it.” X’s nudify-button is less about the wish fulfillment of desire than about wish fulfillment as a demonstration of power. The resulting psychological stress is permanent.

Salvaggio’s framing of sexualized AI-images of real humans as harmful “slop infrastructures” is spot on. Elon Musk’s nudify-button delegates sexual imagination to informational infrastructure. The power over sexual imagination has moved from the mind to compute, datasets, interfaces, and network effects. Elon Musk’s AI-product decouples harm from human-scale agency, and this is precisely where the violation of dignity begins. The system no longer answers to restraint through morals, reciprocity through human interaction, or proportion through human consideration -- Grok bypasses all of this through automation and ignorance.

This is the realization of the spectacle and lack of respect that Byung-Chul Han writes about in his book In the Swarm: Digital Prospects:

Literally, respect means “to look back.” It stands for consideration and caution [Rücksicht]. Respectful interaction with others involves refraining from curious staring. Respect presupposes a distanced look—the pathos of distance. Today, it is yielding to the obtrusive staring of spectacle. The Latin verb spectare, from which spectacle derives, is voyeuristic gazing that lacks deferential consideration—that is, respect (respectare). Distance is what makes respectare different from spectare. A society without respect, without the pathos of distance, paves the way for the society of scandal.

Being a woman on X now means experiencing the ambient “voyeuristic gazing” of a pervert swarm keen on displaying power through dominance over the image. Obviously, Elon Musk and his pervert serfdom very much enjoy their society of the spectacle, where “social relation[s] among people [are] mediated by images”, and dismiss any objections with the mean smile of a bully.

The Negation of Dignity

Needless to say, this chatbot is a mirror image of Elon Musk himself (who personally restored the account of a user who shared images of tortured children) and of the protofascist billionaire class as a whole, who show nothing but contempt for enlightenment values like dignity or autonomy.

Elon Musk’s AI-product is the enforcing algorithm of a new social hierarchy in which a woman’s right to privacy and the autonomy of her self-image have been rendered technically impossible. This means the end of sovereignty and freedom for women, at least on his platform. If Elon Musk actually thinks he is a libertarian, he can shove it.

At least in Germany, your right to dignity is absolute and guaranteed in the very first article of the constitution: Human dignity shall be inviolable. Under German law, the nudifying feature on X has to be terminated immediately, or face legal consequences. Similar but weaker legal implications hold true for US law with the new Take It Down Act, which mostly targets publication but (AFAIK) says nothing about the violation of dignity through an AI-enabled ambient swarm gaze.

Fittingly, and embarrassingly, Nazi thinker Carl Schmitt famously stated that “sovereign is he who decides on the exception”. Musk’s constant disregard for human dignity and his deliberate implementation of a pornifying ambient stalker-infrastructure that targets women as a class signal that X exists in a state of exception, where civilizatory standards of consent and human dignity simply do not apply. That Elon Musk and his product are, unknowingly (?), the perfect embodiment of one of the most influential Nazi philosophers should come as no surprise to anyone at this point.

Musk created a platform where the rule of law is suspended in favor of the sovereign’s whim. This is Deleuze’s shift from a “society of sovereignty” to a “society of control”, and he, the wannabe-libertarian clown-king of the ambient degradation of women, is celebrating: 🤣.

Update 8.1.26:

  • On Techpolicy.press, Eryk Salvaggio looks at Why Musk is Culpable in Grok’s Undressing Disaster and at details of Grok’s system prompt, rightly framing technical choices as “editorial decisions”, since Grok is not just some bot but also a publishing mechanism.

  • Also on Techpolicy.press, Justin Hendrix is Tracking Regulator Responses to the Grok ‘Undressing’ Controversy. As I said on Bsky: “In a sane world, this app would be kicked off any appstore and would be banned from being hosted. I can’t, and will not accept that this goes without consequences. Absolutely bonkers insane.”

Update 9.1.26:

  • Wired: Why Are Grok and X Still Available in App Stores?

  • Paris Marx: Elon Musk’s X must be banned

  • Motherjones: Grok Deepfaked Renee Nicole Good’s Body Into a Bikini

  • Kat Denbarge: Why isn’t there a bigger Grok boycott?

  • Telegraph: Musk’s X could be banned in Britain over AI chatbot row

  • Guardian: Grok turns off image generator for most users after outcry over sexualised AI imagery. So, after some pressure from regulators, Musk decided to “turn off” the abuse feature and make it available only to paid users, which simply means that the site formerly known as Twitter is now a website on which you can pay to abuse women.

  • Wired: X Didn’t Fix Grok’s ‘Undressing’ Problem. It Just Makes People Pay for It

  • 404 Media: Masterful Gambit: Musk Attempts to Monetize Grok’s Wave of Sexual Abuse Imagery

  • Techpolicy.press: The Grok Disaster Isn’t An Anomaly. It Follows Warnings That Were Ignored.

  • The UK is not having it: No 10 condemns ‘insulting’ move by X to restrict Grok AI image tool: “Spokesperson says limiting access to paying subscribers just makes ability to generate unlawful images a premium service” and Elon Musk’s X threatened with UK ban over wave of indecent AI images

Update 10.1.26:

  • Moira Donegan in the Guardian: “the power of technology, here, seems secondary to the power of wealth. xAI, its chatbot and image-generating products could be built differently if the priorities of the man who controls them were different. If a man of Musk’s low intellect, addled brain, insipid humor and gross, self-gratifying misogyny were not the richest person in the world, then the world would not be subject to his indignity” - Grok is undressing women and children. Don’t expect the US to take action

  • The Verge: Tim Cook and Sundar Pichai are cowards

  • Guardian: Indonesia blocks Musk’s Grok chatbot due to risk of pornographic content

Update 11.1.26:

  • Guardian: Elon Musk says UK wants to suppress free speech as X faces possible ban. What a clown this is, what a skewed and idiotic view of free speech.

  • NBC: Dark web users cite Grok as tool for making ‘criminal imagery’ of kids, U.K. watchdog says

  • There’s an international row breaking out over the ban of X, it seems: the UK is talking to Canada and Australia: “Downing Street has held talks with like-minded governments about a coordinated response to the controversy, which threatens to erupt into a diplomatic row with the White House.

    Australia and Canada are both said to share Sir Keir Starmer’s concerns over the use of Grok, X’s artificial intelligence tool, to generate explicit deepfake images”, while Canada’s “minister of artificial intelligence and digital innovation” denies considering a ban and Trump allies threaten sanctions against the UK. The line of division is between representatives of their constituencies and puppets of the billionaire class. That this line even exists baffles the shit out of me.


GOOD INTERNET ELSEWHERE // Bluesky / Mastodon / Threads / FB / Insta
SUPPORT // Patreon / Steady / Paypal / Spreadshirt
Musicvideos have their own Newsletter: GOOD MUSIC. All killers, no fillers.
Thanks for reading.

GOOD INTERNET, by Sascha Brittner & René Walter