The developers of the PFCizer, the workout device for optimal pre-frontal cortex swollination, demonstrate its proposed therapeutic intervention for the low-empathy gooners with a tendency to accumulate power.
PLEASE NOTE: How does the language model (NAI-LM-13B) use seeming gibberish? Gibberish usually appears when the language model breaks down: the temperature is too high, or something is wrong in the system. The "gibberish" here isn't behaving like how I know gibberish to operate.
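For context on the temperature remark above: sampling temperature rescales the model's token probabilities, and cranking it up is the classic way to get gibberish. Here's a minimal, self-contained sketch of temperature sampling (the logits and token counts are made up for illustration; this is not NovelAI's actual sampler):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample a token index from logits rescaled by temperature.

    Higher temperature flattens the distribution, so unlikely tokens
    (the raw material of 'gibberish') get picked more often.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # inverse-CDF sampling
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Token 0 is strongly preferred by the (made-up) logits.
logits = [4.0, 1.0, 0.5, 0.2]
low_t = [sample_with_temperature(logits, 0.3, random.Random(i)) for i in range(100)]
high_t = [sample_with_temperature(logits, 5.0, random.Random(i)) for i in range(100)]
# Low temperature sticks to the likely token; high temperature wanders.
print(low_t.count(0), high_t.count(0))
```

At low temperature the sampler almost always picks the favored token; at high temperature the tail tokens show up constantly, which is what model-breakdown gibberish looks like. The point of the note is that the "gibberish" in this piece is not that kind.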
There are two models: the model that generates the text, and the model that does the text-to-speech, which makes the voice seem like it's spoken-wordinating. (That doesn't make the illusion fake, cuz AGI is going to be multiple systems engaging in feedback loops with the complex systems of the world. If anything this is a tiny step closer to AGI than just text generation.)
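The two-model chain described above can be sketched as a tiny pipeline. Everything here is a hypothetical stand-in, not NovelAI's API: the class names and stub behavior are mine, just to show one model's output feeding the next.

```python
from dataclasses import dataclass

@dataclass
class GeneratedLine:
    """One line of model-generated text, handed off to the voice model."""
    text: str

class TextGenerator:
    """Stand-in for the language model (the text-generation half)."""
    def generate(self, prompt: str) -> GeneratedLine:
        # A real system would call the model here; this stub just extends the prompt.
        return GeneratedLine(text=f"{prompt} ... and so the poem continues.")

class TextToSpeech:
    """Stand-in for the TTS model (the spoken-word half)."""
    def speak(self, line: GeneratedLine) -> bytes:
        # A real system would return audio samples; this stub returns raw bytes.
        return line.text.encode("utf-8")

def pipeline(prompt: str) -> bytes:
    """Chain the two models: prompt -> generated text -> voiced audio."""
    line = TextGenerator().generate(prompt)
    return TextToSpeech().speak(line)

audio = pipeline("Sing, muse")
```

The design point is simply that the output type of one system is the input type of the next, which is the feedback-loop shape the parenthetical is gesturing at.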
The "gibberish" is contained: the model keeps it within quotes. The TTS voice, and this is why I use the voice of a comedian, seems to reveal sexual connotations and cutesy language in the percolation of phonemes.
Question: Does this seeming performance, this assemblage which is assembled and contained, which blends seamlessly with language and contains sounds that tickle parts of our brain in particular ways, constitute advanced, novel, linguistic activity?
Text Generation via NovelAI.
Voices from the TTS v2 of the NovelAI text generation service, a product from a company now called Anlatan.
@BoneAmputee of EleutherAI for making VQGAN+CLIP and Diffusion (AND MANY MORE) models available for use via BATbot.ai, which generated some of the background images.
Katherine Crowson (@RiversHaveWings on Twitter) for the KLMC2 animation notebook.
David Marx (@DigThatData on Twitter) for souping up the above KLMC2 notebook to make use of initial images, which takes KLMC2's abilities to the next level. Multi-prompt scheduling still baffles me, but I'll tackle it someday.
Music by StreamBeats by Harris Heller, via Spotify playlists.
Any resemblance between the text-to-speech model's voices and persons living or dead is coincidental, and if I use them it's out of respect for their work; I think they'd get a hoot that their voice was being used to show how large language models would speak 24/7 epic poems about anything if we let them. In the meantime, because I use them, I will not be monetizing any of this. This is evidence of capabilities for the public good. And my resume/CV!