Generative AI Group Podcast

Week of 2025-01-19


Alex: Hello and welcome to The Generative AI Group Digest for the week of 19 Jan 2025!
Maya: We’re Alex and Maya.
Alex: First up, we’re talking about some exciting new research in diffusion models. Pulkit shared a cool paper from Angjoo’s lab on decentralized diffusion models.
Maya: Decentralized diffusion? How does that differ from normal diffusion models?
Alex: Great question! Instead of one big model, they train multiple expert models separately that don’t communicate. Then at inference, a lightweight router combines their outputs.
Maya: So kind of like a team of specialists collaborating after training separately?
Alex: Exactly! Pulkit pointed out this approach beats monolithic models for the same compute and uses sparse computation effectively. ASK Sathvik was curious if it generalizes beyond certain datasets but recognized the breakthrough potential, especially in math and coding.
Alex (reading): Pulkit said, “They propose training a series of expert diffusion models, each in communication isolation from one another... This outperforms monolithic diffusion models FLOP-for-FLOP.”
Maya: This could mean more efficient and customizable generative models in the future. Plus, less resource usage during both training and inference.
Alex: Right. And Paras raised a good point about how the heavy lifting might still be done by existing powerful models like Qwen via rejection sampling. It’s an evolving space.
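Alex: For listeners who want to picture that routing step concretely, here is a rough Python sketch we put together. It is just a simplified illustration of the idea, not the paper's actual code: a lightweight router mixing the noise predictions of independently trained expert denoisers at each sampling step.

    import torch

    def routed_denoise(x_t, t, experts, router):
        # Simplified illustration of decentralized diffusion inference
        # (not the paper's implementation): each expert was trained in
        # isolation, and a small router mixes their noise predictions.
        preds = torch.stack([expert(x_t, t) for expert in experts])  # (E, B, C, H, W)
        # The router scores each expert for the current sample and timestep.
        weights = torch.softmax(router(x_t, t), dim=-1)              # (B, E)
        weights = weights.T[..., None, None, None]                   # (E, B, 1, 1, 1)
        # Weighted combination of the experts' predictions.
        return (weights * preds).sum(dim=0)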
Maya: Next, let’s move on to AI’s impact in supply chains and quick commerce.
Alex: Cheril sparked this by asking about the real business value of AI/ML in quick commerce like Blinkit or Zepto.
Maya: I wonder, is generative AI helping there yet or is it mostly traditional machine learning?
Alex: Jyotirmay clarified that ML is very useful for forecasting, inventory, and route optimization. Generative AI and LLMs aren’t heavily used there—yet.
Maya: Interesting. So the AI that’s directly generating text probably isn’t central to logistics, but machine learning optimizing delivery routes is huge.
Alex: Sudhanshu added that a big challenge is data quality—often late or incorrect data messes up optimizations. This is a common problem in logistics.
Maya (reading): Cheril shared a Swiggy blog: “ML powers ‘When is my order coming?’ predictions, ETAs, and rider supply planning.”
Alex: It’s clear AI is already deeply integrated in daily delivery operations, mainly ML-based.
Maya: Next, let’s chat about changes in job skills with AI’s rise.
Alex: Paras shared a thought from Tyler Cowen, who is shifting from content creation to ecosystem building because AI now handles much of the content work.
Maya: That reminds me of how technology shifts the value from routine tasks to higher-level skills like connecting people or strategic thinking.
Alex: Precisely! ASK Sathvik agreed and noted that as AI automates 80% of a job, the remaining 20% becomes the bottleneck. He also doubts there will be a sudden intelligence explosion due to limits like hardware and biology.
Maya: So humans will likely still play a key role adapting and managing AI, just in different ways.
Alex (reading): Paras said, “The bottlenecks increase in marginal value... He’s reducing content creation and increasing connective roles.”
Maya: It’s a fascinating perspective on evolving work with AI.
Alex: Next up—let’s talk about Indian AI research and infrastructure.
Maya: There’s lots of discussion about why India lags in AI foundational research despite its talent pool.
Alex: Paras and Bharat both noted factors like limited government funding, scarcity of GPUs, and the need to shift from use-case focus to fundamental research.
Maya: Also, many Indian researchers move abroad for better resources and opportunities.
Alex (reading): Cheril shared that an author of the transformer paper said that in India “the dream was to do MS and work for a great company abroad,” with less focus on groundbreaking research locally.
Maya: But there is a silver lining—some labs in India collaborate with DeepMind and Microsoft, so the potential exists.
Alex: Indeed. Paras also stressed that deep research can happen with less compute if focused on algorithmic breakthroughs.
Maya: This is a great call to build smarter, not just bigger, models—and to cultivate environments supporting that.
Alex: Moving on—there’s news about ChatGPT’s new task scheduling features.
Maya: Oh yes, Pratik said ChatGPT now supports scheduled tasks, which opens a lot of new possibilities.
Alex: Right. You can ask it to do things like “Give me a new recipe every day without repeats.” Several people noted the model uses previous context in sessions.
Maya (reading): Tp53 said, “Higher performance doesn’t always mean higher market adoption; UI/UX often dictates success.”
Alex: That’s an important insight. In mission-critical domains like healthcare, smooth workflows and integration can matter more than raw model accuracy.
Maya: And Paras chimed in to reimagine UX beyond chat windows—maybe dynamically generating context-relevant interfaces or voice commands.
Alex: A good example is ChatGPT's Canvas for writing, which moves drafting and editing out of the chat window and into a dedicated workspace.
Maya: Exciting times ahead for AI-human interaction design.
Alex: Speaking of interaction, let’s touch on AI therapy and mental health.
Maya: Lots of conversation around LLMs as virtual therapists or coaching companions, with examples like Claude acting as Carl Jung.
Alex: ASK Sathvik shared how talking to Claude helped him become more comfortable with his fears, illustrating AI's unique value in mental health.
Maya: There’s debate around trust and empathy—while AI can be anonymous and accessible, genuine empathy from shared experience is hard to replicate.
Alex: Luv Singh stressed privacy too, noting the importance of running models locally to protect sensitive data.
Maya (reading): “Over 33% of Indian students preferred anonymous AI chats over human counselors,” said Luv.
Alex: This highlights AI’s potential to fill mental health gaps especially where stigma or access is an issue.
Maya: We should also mention ongoing efforts in speech recognition tailored for Indian accents and educational use.
Alex: Yes, Tanmay and the Wadhwani AI team talked about building speech models that understand typical pronunciation errors rather than autocorrecting them, which is a big advance for language learning.
Maya: And data scarcity for Indian dialects remains a challenge, but crowd-sourced initiatives like AI4Bharat help.
Alex: Lastly, let’s share a listener tip from Maya.
Maya: Here’s a pro tip inspired by the AI therapy chat: If you’re ever feeling stuck emotionally or creatively, try having a conversation with an LLM like Claude or ChatGPT, asking it to take on a persona—like a therapist or coach.
Maya: Alex, how would you use that in your day?
Alex: I’d journal with an AI buddy to get fresh perspectives and spot patterns I might miss. It’s like having a reflective coach anytime.
Maya: That’s a great way to extend human support with AI’s always-on availability.
Alex: As we wrap up, I want to remind listeners—AI is becoming a tool to amplify human skills, not just replace them. It’s about working together.
Maya: And don’t forget—building good AI experiences means balancing powerful models with smart UX and understanding human needs deeply.
Maya: That’s all for this week’s digest.
Alex: See you next time!