By Julian Kwasniewski
The Vatican's "expert on AI" has declared Artificial Intelligence "absolutely positive." I like it, too, but for very different reasons.
I was recently sent a podcast in which the editor of a Catholic journal surprisingly defends the practice of receiving Communion in the hand. Or maybe I should say "podcast" and "editor." Because the voice could have been his, but was actually something AI-generated that sounded just like him.
The advent of AI-generated sound, text, and video is going to change everything. Now you can't tell if a photo of your dead child is the result of a slick computer program or a real car accident. Or if a recording is your boss's voice scheming to fire you or a clever reconstruction based on company meetings. Heck, this article might not even be written by me; it could be simply the result of statistical analysis of Julian Kwasniewski's writings.
We must all wake up to the fact that the insistence on "evidence," which has characterized the past few hundred years, is on the verge of being completely undermined by AI-generated items. There's actually a word for it now: "deepfake."
Although it's possible that the Internet will become (as one observer says) "passé, spurious, and something only an obsessive bore would waste all of their free time on" - like bell bottoms, cigarette holders, or Pac-Man - I think it likely that there will be a more dramatic departure from the Internet. And the sooner the better.
AI's image-generating ability is about to separate the sheep from the goats, or the lovers of reality from those who don't care whether or not anything they sense, love, or reason about is real. For the first time in history, society will need to decide whether or not it values reality. People have debated metaphysics, religion, science, art, and many other things. But there's always been an assumption that they are arguing over a world of things and people. This world is on the verge of being replaced by a fantasy simulacrum made to deceive.
The latest in this regard are AI-generated "girlfriends." Companies with spine-chilling names such as "Intimate," "Dream Girlfriend," or "Replika" have flooded social media with ads "for spicy selfies and hot role play."
The preview of the disturbingly named "Candy.ai" reads: "Dive into limitless conversations, tailor your experience, and forge unique connections. Discover a new realm of virtual companionship with Candy.ai."
Descending to a bottomless pit of irony, another site is called exactly what it lacks - "Anima" (i.e., "soul") - "the most advanced romance chatbot you've ever talked to."
It has monetized features, of course. You might "create" your girl, and then - oh no! - her romantic reply to you, next to her bra-clad picture, can't be read! "Find out what hides beneath. . . . Get unlimited access for £61.99/year." Or choose the lifetime subscription - really.
I've never clicked through to one of these sites. And I don't want to. I can feel the evil simply from what comes up in search results, or even from the images on news sites reporting on these "generators" - which are the source of my information. I don't want to enter such a "Joyland" (another AI romance platform).
Fiction is now both stranger and scarier than truth. Perhaps we shouldn't be surprised.
But forget about whether or not you can control the "measurements" of your virtual girlfriend. Let's turn from fantastically frightening virtual romance to a crime scenario: what happens when you're accused of a crime and there are photos to "prove" it? Theft, assault, hate-speech: you name it.
AI could generate photo, video, or sound evidence that you were in a certain place at a certain time doing a particular action. It could also generate video interviews with "witnesses," letters of complaint, or phone calls . . . an infinite amount of "evidence." This is already happening in espionage efforts. Or maybe that was just an AI-generated article.
The website whichfaceisreal.com brings the sit...