By Rich Tong
The podcast currently has 25 episodes available.
This is another sort of nerdy side note. If anyone is still watching, this section just gives some intuition on the basics of the hardware. There are lots of assumptions about GPUs and CPUs that I wanted to make sure people understood. The basics are that CPUs are tuned for lots of branches and varied workflows, while GPUs are tuned for doing lots of the same thing, like matrix math. And because GPUs are so fast, most of the job of the computer folks is "feeding the beast": caching the most frequently used information so the processors don't have to wait.
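The "feeding the beast" idea can be sketched in a few lines of Python. This is a minimal illustration, not anything from the episode: a hypothetical `slow_fetch` stands in for an expensive memory access, and a small cache keeps the hot values close so repeat requests don't wait.

```python
from functools import lru_cache
import time

def slow_fetch(key: int) -> int:
    # Stand-in for a slow trip to main memory or disk: the "beast"
    # stalls whenever the data it needs is not already close at hand.
    time.sleep(0.01)
    return key * key

@lru_cache(maxsize=128)
def cached_fetch(key: int) -> int:
    # Identical work, but results for frequently used keys are kept in
    # a small fast cache, so repeat requests return without waiting.
    return slow_fetch(key)

# Touch the same few keys over and over: only the first access to each
# key pays the slow-fetch cost; the cache absorbs the other 297 calls.
start = time.perf_counter()
for _ in range(100):
    for k in (1, 2, 3):
        cached_fetch(k)
elapsed = time.perf_counter() - start
print(f"300 lookups took {elapsed:.3f}s with only 3 slow fetches")
```

The same principle drives CPU cache hierarchies and GPU memory design: keep the most frequently used data where the compute units can reach it immediately.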
Also, I'm quite proud of the HDR mix: using the latest OBS settings, producing in HDR in Final Cut Pro, and adjusting the video scope levels helps. The audio is a little hot and I'm sorry about that; I'll turn it down next time, since I spent too much time in the red. My Scarlett needs to be set at about 1 o'clock and it works.
See https://youtu.be/FupclouzYTI for a video version. And more details at https://tongfamily.com/2024/03/08/pod-rt3-ai-hardware-introduction/
OK, we are not experts nor PhDs, so most of this is probably not technically correct, but the math and the concepts are so complicated that we thought it would be good to just get some intuition on what is happening. So this is a quick talk that summarizes readings from many different sources about the history of AI, from the 1950s all the way to January 2024 or so.
You can go to YouTube to see the slides we are using, and find more information at Tongfamily.com
OK, this is a reboot of our systems. Getting ready for next year, the new pipeline is ready, and most importantly, the audio finally sounds better. Stay tuned for more soon!
The AI Adventurers return! We are back after two months off, retooling our brains and various ventures to be AI-Native. It's not easy, and we talk about why. What does it mean to look at your software people and decide whether they are type 1, 2, or 3? And how hard it is to make it all work.
Show notes at Tongfamily.com
A conversation with Devindra on the possibilities of ChatGPT and other large language models for education in India and other middle- and low-income countries. While these large language models today mainly run in the cloud and require high-speed internet connectivity, the promise is that smaller, more specialized models have already been ported to the MacBook Pro M1 and other laptops. And with advances like 4-bit quantization, there is the possibility they can even run on modern smartphones.

The experience in both China and India is that parents will save inordinately to give their children a better life than their own. Devindra and I are both beneficiaries of that thinking, and we are forever grateful to our parents for their sacrifice. So this means there could be a commercial incentive that is much better than constantly asking for charitable dollars.

The new LLMs promise interactivity and individualized instruction that was impossible before. Early demonstrations of learning a foreign language are promising, as is a student being able to ask the "why" of how Python is structured, not just the "get it done" without understanding that online education often creates.

Finally, having an eye toward inclusivity for everyone of any gender, race, or social status is something that needs to be baked in. We look forward to helping!
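To give intuition for why 4-bit quantization shrinks models enough for phones, here is a minimal Python sketch of a symmetric 4-bit scheme. This is an illustration under simplified assumptions, not the actual method used by any particular LLM (real schemes such as GPTQ or QLoRA are more elaborate): signed 4-bit integers cover -8..7, so each weight takes half a byte instead of four.

```python
def quantize_4bit(weights):
    """Map float weights onto the signed 4-bit integer range [-8, 7]."""
    # One scale factor per group of weights; 7 is the largest positive code.
    scale = max(abs(w) for w in weights) / 7 or 1.0
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_4bit(codes, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [c * scale for c in codes]

weights = [0.12, -0.95, 0.44, 0.07, -0.31]
codes, scale = quantize_4bit(weights)
restored = dequantize_4bit(codes, scale)
# Storage drops from 32 bits to 4 bits per weight (an 8x reduction),
# at the cost of a small rounding error in each restored value.
print(codes, [round(r, 3) for r in restored])
```

Shrinking every weight this way is what makes it plausible for a multi-billion-parameter model to fit in a smartphone's memory.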
This time, we bumble through an episode featuring the amazing Mike Conte and we talk about what's wrong with this Podcast technically (see below), about ChatGPT, and about how you don't want to show others the Money. Mike is as always hilarious!
See https://tongfamily.com/2023/03/27/pt5-chatgpt-and-mike-conte-special-guest-and-tech-glitches-galore/ for show notes.