Hey, so the question on everybody's mind today is: did the AI bubble just pop? I think no. The stock market thinks yes. We'll go over why I think that's a little short-sighted and not really looking at the big picture.

So, the way I frame it, here are the things I use AI to do in 2025 that weren't possible 24 months ago. I use it to write and edit blog posts, create music, generate fan art, summarize YouTube videos, and do a whole lot of audio and video transcription. I use it for language learning. I use it to replace Google searches in a lot of cases. I use it to write a lot of database queries, basically whenever I'm doing anything in SQL. I use it for back-of-the-envelope calculations, or when I'm trying to think through something. I use it to translate manga, and even to code and build apps just like this one. I didn't use AI for any of those things two years ago, because it literally wasn't possible. The models weren't trained yet. The models didn't exist.

We're only a couple of years into this huge transformation, and the range of things you can apply AI to hasn't even scratched the surface yet. I can't use AI to do my taxes. There's no AI-powered legal advisor I can talk to. The training data is all there; case law is just 150 books on somebody's bookshelf that somebody has spent a decade reading and understanding. I'm not saying AI could take over the legal industry, but it seems like you could make knowledge of tax law or other law more accessible through a well-trained AI model, and that's not there yet. The way I'm framing it is: demand for AI hasn't changed.

The assertion in the headline that's running around is the DeepSeek claim that they've trained a model for $6 million that can compete with OpenAI's cutting-edge model. I think that's pretty much propaganda if you frame it like that. Their significant advancements are that each token activates fewer parameters and the model uses a mixture of experts.
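To make that "fewer parameters per token" point concrete, here's a toy sketch of the mixture-of-experts idea. All the names and sizes are made up for illustration, and this is a bare-bones version of the technique, not anyone's actual model: a router scores the experts for each token, and only the top-k expert weight matrices ever run, so most of the model's parameters sit idle on any given token.

```python
import math
import random

random.seed(0)

d_model, n_experts, top_k = 16, 8, 2  # toy sizes, chosen arbitrarily

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

# One weight matrix per expert, plus a router that scores experts per token.
experts = [rand_matrix(d_model, d_model) for _ in range(n_experts)]
router = rand_matrix(d_model, n_experts)

def matvec(m, v):
    """Multiply vector v through matrix m (v @ m)."""
    return [sum(m[j][i] * v[j] for j in range(len(v))) for i in range(len(m[0]))]

def moe_forward(x):
    """Route one token through only its top-k experts."""
    scores = matvec(router, x)  # one routing score per expert
    top = sorted(range(n_experts), key=lambda i: scores[i])[-top_k:]
    z = [math.exp(scores[i]) for i in top]
    weights = [w / sum(z) for w in z]  # softmax over the chosen experts only
    out = [0.0] * d_model
    for i, w in zip(top, weights):  # only top_k of n_experts matrices execute
        for d, val in enumerate(matvec(experts[i], x)):
            out[d] += w * val
    return out

token = [random.gauss(0, 1) for _ in range(d_model)]
out = moe_forward(token)
print(top_k / n_experts)  # 0.25 -> only a quarter of the expert parameters are active per token
```

The point of the design: total parameter count (and therefore model capacity) can keep growing, while the compute per generated token only scales with the top-k experts actually activated.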
What I believe is going to come out in the next few days is OpenAI finally disclosing something like: here's how we made GPT-4 Turbo cheaper, and here's why it runs on fewer resources than previous versions of GPT-4. We've been doing mixture of experts, and we've been reducing the number of parameters activated per generated token as well; we just haven't disclosed those numbers previously. Those of us who have been working on this problem for years, before this company even started, were actually ahead on optimizing these models.

The other thing is that the demand for AI hasn't changed. It still takes hundreds of millions of dollars to build a data center to run any of these models. It's not like DeepSeek is suddenly going to run on a different chip. They're going to run on H100s, in 8-GPU servers, in hyperscale data centers that take huge amounts of electricity and human resources to build, deploy, and operate, and we're still not meeting demand for that. There are more people who want to type shit into Stable Diffusion or ChatGPT than capacity exists for right now. There are still new applications for the API of whatever model you choose.

That's the other part of the DeepSeek story: how do you make the claim that you trained a model for $6 million on half a billion dollars' worth of GPUs that escaped export controls?
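A quick back-of-the-envelope run at that $6 million number shows where the framing gets slippery. As I understand it, DeepSeek's figure is a GPU-hour rental cost for the final training run, not the cost of owning and operating the hardware; the GPU-hour count and prices below are rough figures from memory and assumptions, so treat them as illustrative, not authoritative.

```python
# Back-of-envelope: the "$6M training run" is a marginal rental figure,
# not the cost of the cluster itself. All numbers are rough assumptions.

gpu_hours = 2.788e6    # reported H800 GPU-hours for the final run (approximate)
rental_per_hour = 2.0  # assumed $/GPU-hour rental rate
training_cost = gpu_hours * rental_per_hour
print(f"rented compute: ${training_cost / 1e6:.1f}M")  # ~ $5.6M

gpus = 2048             # reported cluster size (approximate)
price_per_gpu = 30_000  # assumed street price per accelerator, $
capex = gpus * price_per_gpu
print(f"hardware alone: ${capex / 1e6:.0f}M")  # ~ $61M, before data center, power, or staff
```

So even under generous assumptions, the marginal compute bill and the capital cost of the machines it ran on differ by an order of magnitude, and that's before counting the data center, electricity, and people, which is exactly the gap the headline glosses over.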