
This week we talk about Studio Ghibli, Andrej Karpathy, and OpenAI.
We also discuss code abstraction, economic repercussions, and DOGE.
Recommended Book: How To Know a Person by David Brooks
Transcript
In late-November of 2022, OpenAI released a demo version of a product they didn’t think would have much potential, because it was kind of buggy and not very impressive compared to the other things they were working on at the time. This product was a chatbot interface for a generative AI model they had been refining, called ChatGPT.
This was basically just a chatbot that users could interact with, as if they were texting another human being. And the results were good enough—both in the sense that the bot seemed kinda sorta human-like, and in the sense that it could generate convincing-seeming text on all sorts of subjects—that people went absolutely gaga over it, and the company went full-bore on this category of products, dropping an enterprise version in August the following year and a search engine powered by the same general model in October of 2024. By 2025, upgraded versions of their core models were widely available, alongside paid, enhanced tiers for those who wanted higher-level processing behind the scenes: those upgraded tiers basically tapping a model with more feedstock (a larger training library and more intensive, refined training), but also, in some cases, a model that thinks longer, that can reach out and use the internet to research things it doesn’t already know, and that can, increasingly, produce other media, like images and videos.
During that time, this industry has absolutely exploded, and while OpenAI is generally considered to be one of the top dogs in this space, still, they’ve got enthusiastic and well-funded competition from pretty much everyone in the big tech world, like Google and Amazon and Meta, while also facing upstart competitors like Anthropic and Perplexity, alongside burgeoning Chinese competitors, like Deepseek, and established Chinese tech giants like Tencent and Baidu.
It’s been somewhat boggling watching this space develop, because while there’s a chance some of the valuations of AI-oriented companies are overblown, potentially leading to a correction or the popping of a valuation bubble at some point in the next few years, the underlying tech and its output really have been iterating rapidly. The state of the art in generative AI in particular is producing staggeringly complex and convincing images, videos, audio, and text, and the lower-tier stuff, available to anyone who wants it for free, is also valuable and usable for all sorts of purposes.
Just recently, at the tail-end of March 2025, OpenAI announced new multimodal capabilities for its GPT-4o language model, which basically means this model, which could previously only generate text, can now produce images, as well.
And the model has been lauded as a sort of sea change in the industry, allowing users to produce remarkably photorealistic images just by prompting the AI—telling it what you want, basically—and to render usually accurate, high-quality text within those images, which has been a stumbling block for most image models up to this point. It also boasts the capacity to adjust existing images in all sorts of ways.
Case-in-point, it’s possible to use this feature to take a photo of your family on vacation and have it rendered in the style of a Studio Ghibli cartoon; Studio Ghibli being the Japanese animation studio behind legendary films like My Neighbor Totoro, Spirited Away, and Princess Mononoke, among others.
This is partly the result of better capabilities in this model, compared to its precursors, but it’s also the result of OpenAI loosening its policies to allow folks to prompt these models in this way; previously they disallowed this sort of output, due to copyright concerns. And the implications here are interesting, as this suggests the company is now comfortable showing that their models have been trained on these films, which has all sorts of potential copyright implications, depending on how pending court cases turn out, but also that they’re no longer being as precious about potential scandals related to how their models are used.
It’s possible to apply all sorts of distinctive styles to existing images, then, including South Park and the Simpsons, but Studio Ghibli’s style has become a meme since this new capability was deployed, and users have applied it to images ranging from existing memes to their own self-portrait avatars, to things like the planes crashing into the Twin Towers on 9/11, JFK’s assassination, and famous mass-shootings and other murders.
It’s also worth noting that the co-founder of Studio Ghibli, Hayao Miyazaki, has called AI-generated artwork “an insult to life itself.” That so many people are using this kind of AI-generated filter on these images is a jarring sort of celebration, then, as the person behind that style probably wouldn’t appreciate it; many people are using it because they love the style and the movies in which it was born so much, though. An odd moral quandary that’s emerged as a result of these new AI-provided powers.
What I’d like to talk about today is another burgeoning controversy within the AI space that’s perhaps even larger in implications, and which is landing on an unprepared culture and economy just as rapidly as these new image capabilities and memes.
—
In February of 2025, Andrej Karpathy—the former AI head at Tesla, a founding team member at OpenAI, and the founder of a forthcoming education-focused project called Eureka Labs—coined the term ‘vibe coding’ to refer to a trend he’d noticed in himself and other developers, people who write code for a living: developing new projects using code-assistant AI tools in a manner that essentially abstracts away the code, allowing the developer to rely more on vibes to get their project out the door, using plain English rather than code or even code-speak.
So while a developer would typically need to invest a fair bit of time writing the underlying code for a new app or website or video game, someone who’s vibe coding might instead focus on a higher, more meta-level of the project, worrying less about the coding parts, and instead just telling their AI assistant what they want to do. The AI then figures out the nuts and bolts, writes a bunch of code in seconds, and then the vibe coder can tweak the code, or have the AI tweak it for them, as they refine the concept, fix bugs, and get deeper into the nitty-gritty of things, all, again, in plain-spoken English.
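That loop—describe a feature in plain English, get working code back, then refine by describing the change rather than editing the code—can be sketched as follows. This is a purely hypothetical toy: the `fake_assistant` function and its canned responses stand in for a real code-assistant model, which would actually be a call to an AI API.

```python
# Toy sketch of the vibe-coding loop. The "developer" works entirely in
# plain English; a simulated assistant supplies the actual code.

def fake_assistant(request: str) -> str:
    """Stand-in for a code-assistant AI: maps a plain-English request to
    generated source code. A real tool would call a model API here."""
    canned_responses = {
        "make a function that greets a player by name":
            "def greet(name):\n    return f'Welcome, {name}!'",
        "make the greeting shout in all caps":
            "def greet(name):\n    return f'WELCOME, {name.upper()}!'",
    }
    return canned_responses[request]

def vibe_code(request: str) -> dict:
    """Accept a plain-English request, 'generate' code, and load it.
    Returns the namespace containing whatever the assistant defined."""
    source = fake_assistant(request)
    namespace = {}
    exec(source, namespace)  # run the generated code, as a vibe coder would
    return namespace

# First pass: ask for a feature in plain English.
ns = vibe_code("make a function that greets a player by name")
print(ns["greet"]("Ada"))  # Welcome, Ada!

# Refinement pass: tweak by describing the change, not by editing code.
ns = vibe_code("make the greeting shout in all caps")
print(ns["greet"]("Ada"))  # WELCOME, ADA!
```

The point of the sketch is the shape of the workflow: at no step does the user read or write the function body themselves.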
There are now videos, posted in the usual places, all over YouTube and TikTok and such, where folks—some of whom are coders, some of whom are purely vibe coders, who wouldn’t be able to program their way out of a cardboard box—produce entire functioning video games in a matter of minutes.
These games typically aren’t very good, but they work. And reaching even that level of functionality would previously have taken days or weeks for an experienced, highly trained developer; now it takes mere minutes, and can be achieved by the average, untrained person, as long as they have a basic understanding of how to prompt AI systems to get what they want.
Ethan Mollick, who writes a fair bit on this subject and keeps tabs on these sorts of developments in his newsletter, One Useful Thing, documented his attempts to make meaning from a pile of data he had sitting around but hadn’t made the time to dig through. Using plain English, he was able to feed all that data to OpenAI’s Deep Research model, interact with its findings, and home in on meaningful directions suggested by the data.
He also built a simple game in which he drove a firetruck around a 3D city, trying to put out fires before a competing helicopter could do the same. He spent a total of about $13 in AI token fees to make the game, and he was able to do so despite not having any relevant coding expertise.
A guy named Pieter Levels, an experienced software engineer, was able to vibe-code a free-to-play, massively multiplayer online flying game in just a month. Nearly all the code was written by Cursor and Grok 3 (the first an AI-powered code editor, the latter a ChatGPT-like generalist AI model), and he’s been able to generate something like $100k per month in revenue from this game just 17 days post-launch.
Now, an important caveat here: first, this game received a lot of publicity, because Levels is a well-known name in this space, and he made it as part of a ‘Vibe Coding Game Jam,’ an event focused on exactly this type of AI-augmented programming, in which all entries had to be at least 80% AI-generated. But he’s also a very skilled programmer and game-maker, so this isn’t the sort of outcome the average person could expect from these tools.
That said, it’s an interesting case study that suggests a few things about where this category of tools is taking us, even if it’s not representative for all programming spaces and would-be programmers.
One prediction that’s been percolating in this space for years, even before ChatGPT was released, but especially after generative AI tools hit the mainstream, is that many jobs will become redundant, and as a result many people, especially those in positions that are easily and convincingly replicated using such tools, will be fired. Because why would you pay twenty people $100,000 a year to do basic coding work when you can have one person working part-time with AI tools vibe-coding their way to approximately the same outcome?
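The staffing math behind that question can be made concrete with a back-of-envelope calculation. Only the twenty developers at $100,000 comes from the text; the part-time salary and AI tooling costs are illustrative assumptions, not reported numbers.

```python
# Back-of-envelope comparison of the two staffing models described above.
TEAM_SIZE = 20
SALARY = 100_000                  # per-developer figure from the text
traditional_cost = TEAM_SIZE * SALARY

part_time_salary = 50_000         # assumed: one part-time prompter
ai_tooling_cost = 10_000          # assumed: annual AI subscription/token spend
vibe_cost = part_time_salary + ai_tooling_cost

print(traditional_cost)               # 2000000
print(vibe_cost)                      # 60000
print(traditional_cost / vibe_cost)   # roughly 33x cheaper, on these assumptions
```

Even if the assumed numbers are off by a wide margin, the ratio stays large enough to explain why executives find the premise tempting—which is exactly the pressure the rest of this section describes.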
It’s a fair question, and one that pretty much every industry is asking itself right now. And we’ve seen some early waves of firings based on this premise, most of which haven’t gone great for the firing entity: they’ve then had to backtrack and start hiring to fill those positions again, the software they expected to fill the gaps not quite there yet, and their offerings suffering as a consequence of that gambit.
Some are still convinced this is the way things are going, though, including people like Elon Musk, who, as part of his Department of Government Efficiency, or DOGE efforts in the US government, is basically stripping things down to the bare-minimum, in part to weaken agencies he doesn’t like, but also, ostensibly at least, to reduce bloat and redundancy, the premise being that a lot of this work can be done by fewer people, and in some cases can be automated entirely using AI-based systems.
This was the premise of his mass-firings at Twitter, now X, when he took over, and while there have been a lot of hiccups and issues resulting from that decision, the company is managing to operate, even if less optimally than before, with about 20% of the staff it had before he took over—something like 1,500 people compared to 7,500.
Now, there are different ways of looking at that outcome, and Musk’s activities since that acquisition will probably color some of our perceptions of his ambitions and level of success with that job-culling, as well. But the underlying theory that a company can do even 90% as well as it did before with just a fifth of the workforce is a compelling argument to many people, and that includes folks running governments, but also those in charge of major companies with huge rosters of employees that make up the vast majority of their operating expenses.
A major concern about all this, though, is that even if this theory works in broader practice, and all these companies and governments can function well enough with dramatically reduced staff using AI tools to augment their capabilities and output, we may find ourselves in a situation in which the folks using said tools are more and more commodified: they’ll be less specialized and have less education and expertise in the relevant areas, so they can be paid less, the tools doing more and the humans mostly being paid to prompt and manage them. As a result, these people may not know enough to recognize when the AI is doing something wrong or weird. We may even reach a point where the abstraction is so complete that very few humans know how the code works at all, which leaves us increasingly reliant on these tools, but also more vulnerable should they fail at a basic level; at that point there may not be any humans left who are capable of figuring out what went wrong, since the jobs that would incentivize acquiring such knowledge and skill will have long since disappeared.
As I mentioned in the intro, these tools are being applied to images, videos, music, and everything else, as well. Which means we could see vibe artists, vibe designers, vibe musicians and vibe filmmakers. All of which is arguably good in the sense that these mediums become more accessible to more people, allowing more voices to communicate in more ways than ever before.
But it’s also arguably worrying in the sense that more communication might be filtered through the capabilities of these tools—which, by the way, are predicated on previous artists’, writers’, and filmmakers’ work, arguably stealing their styles and ideas and regurgitating them, rather than doing anything truly original—and that could lead to less originality in these spaces, but also a situation in which people forget how to make their own films, their own art, their own writing: a capability drain that gets worse with each new generation of people who are incentivized to hand those responsibilities off to AI tools. We’ll all become AI prompters, rather than all the things we are, currently.
This has been the case with many technologies over the years—how many blacksmiths do we have in 2025, after all? And how many people actually hand-code the 1s and 0s that all our programming languages eventually compile down to for us, after we work at a higher, more human-optimized level of abstraction?
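That abstraction ladder can be seen directly with Python’s own standard-library tooling: nobody writes the lower-level instructions by hand, yet they are what actually runs. This is just an illustration of the general point, using the `dis` module to peek one level down.

```python
# One readable line at the human-friendly level...
import dis

def add_tax(price):
    return price * 1.08  # high-level: the programmer thinks in prices, not opcodes

# ...and the lower-level bytecode instructions the interpreter actually
# executes, which almost no one writes by hand anymore.
instructions = [ins.opname for ins in dis.get_instructions(add_tax)]
print(instructions)  # opnames vary by Python version,
                     # e.g. ['LOAD_FAST', 'LOAD_CONST', 'BINARY_OP', 'RETURN_VALUE']
```

And bytecode is itself still several abstraction layers above the machine code and transistor voltages beneath it; each layer was once somebody’s hand-written specialty.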
But our existing economies are predicated on a certain type of labor, and on a certain number of people being employed to do said labor. So even if those concerns ultimately don’t end up being too big a deal—because the benefits prove that much more impactful than the downsides, and other incentives to develop these or similar skills and understandings arise—it’s possible we could experience a moment, years or decades long, in which the whole of the employment market is disrupted, perhaps quite rapidly, leaving a lot of people without income, and thus a lot fewer people who can afford the products and services that are generated more cheaply using these tools.
A situation that’s ripe with potential for those in a position to take advantage of it, but also a situation that could be devastating to those reliant on the current state of employment and income—which is the vast, vast majority of human beings on the planet.
Show Notes
https://en.wikipedia.org/wiki/X_Corp
https://devclass.com/2025/03/26/the-paradox-of-vibe-coding-it-works-best-for-those-who-do-not-need-it/
https://www.wired.com/story/doge-rebuild-social-security-administration-cobol-benefits/
https://www.wired.com/story/anthropic-benevolent-artificial-intelligence/
https://arstechnica.com/tech-policy/2025/03/what-could-possibly-go-wrong-doge-to-rapidly-rebuild-social-security-codebase/
https://en.wikipedia.org/wiki/Vibe_coding
https://www.newscientist.com/article/2473993-what-is-vibe-coding-should-you-be-doing-it-and-does-it-matter/
https://nmn.gl/blog/dangers-vibe-coding
https://x.com/karpathy/status/1886192184808149383
https://simonwillison.net/2025/Mar/19/vibe-coding/
https://arstechnica.com/ai/2025/03/is-vibe-coding-with-ai-gnarly-or-reckless-maybe-some-of-both/
https://www.creativebloq.com/3d/video-game-design/what-is-vibe-coding-and-is-it-really-the-future-of-app-and-game-development
https://arstechnica.com/ai/2025/03/openais-new-ai-image-generator-is-potent-and-bound-to-provoke/
https://en.wikipedia.org/wiki/Studio_Ghibli