How AI Happens
By Sama
The podcast currently has 104 episodes available.
Srujana is Vice President and Group Director at Walmart’s Machine Learning Center of Excellence, and an experienced and respected AI, machine learning, and data science professional. She has a strong background in developing AI and machine learning models, with expertise in natural language processing, deep learning, and data-driven decision-making. Srujana has worked in various capacities in the tech industry, contributing to advancing AI technologies and their applications in solving complex problems. In our conversation, we unpack the trends shaping AI governance, the importance of consumer data protection, and the role of human-centered AI. Explore why upskilling the workforce is vital, the potential impact AI could have on white-collar jobs, and which roles AI cannot replace. We discuss the interplay between bias and transparency, the role of governments in creating AI development guardrails, and how the regulatory framework has evolved. Join us to learn about the essential considerations of deploying algorithms at scale, striking a balance between latency and accuracy, the pros and cons of generative AI, and more.
Key Points From This Episode:
Quotes:
“By deploying [biased] algorithms we may be going ahead and causing some unintended consequences.” — @Srujanadev [0:03:11]
“I think it is extremely important to have the right regulations and guardrails in place.” — @Srujanadev [0:11:32]
“Just using generative AI for the sake of it is not necessarily a great idea.” — @Srujanadev [0:25:27]
“I think there are a lot of applications in terms of how generative AI can be used but not everybody is seeing the return on investment.” — @Srujanadev [0:27:12]
Links Mentioned in Today’s Episode:
Srujana Kaddevarmuth
Srujana Kaddevarmuth on X
Srujana Kaddevarmuth on LinkedIn
United Nations Association (UNA) San Francisco
The World in 2050
American INSIGHT
How AI Happens
Sama
Our guest goes on to share the different kinds of research he uses for machine learning development before explaining why he is more conservative when it comes to driving generative AI use cases. He even shares some examples of generative AI use cases he feels are worthwhile. We hear about how these changes will benefit all UPS customers and how they avoid sharing private and non-compliant information with chatbots. Finally, Sunzay shares some advice for anyone wanting to become a leader in the tech world.
Key Points From This Episode:
Quotes:
“There’s a lot of complexities in the kind of global operations we are running on a day-to-day basis [at UPS].” — Sunzay Passari [0:04:35]
“There is no magic wand – so it becomes very important for us to better our resources at the right time in the right initiative.” — Sunzay Passari [0:09:15]
“Keep learning on a daily basis, keep experimenting and learning, and don’t be afraid of the failures.” — Sunzay Passari [0:22:48]
Links Mentioned in Today’s Episode:
Sunzay Passari on LinkedIn
UPS
How AI Happens
Sama
Martin shares what reinforcement learning does differently in executing complex tasks, overcoming feedback loops in reinforcement learning, the pitfalls of typical agent-based learning methods, and how being a robotic soccer champion exposed the value of deep learning. We unpack the advantages of deep learning over modeling agent approaches, how a solution in one field can inspire a solution in an unrelated one, and why he is currently focusing on data efficiency. Gain insights into the trade-offs between exploration and exploitation, how Google DeepMind is leveraging large language models for data efficiency, the potential risks of using large language models, and much more.
Key Points From This Episode:
Quotes:
“You really want to go all the way down to learn the direct connections to actions only via learning [for training AI].” — Martin Riedmiller [0:07:55]
“I think engineers often work with analogies or things that they have learned from different [projects].” — Martin Riedmiller [0:11:16]
“[With reinforcement learning], you are spending the precious real robot’s time only on things that you don’t know and not on the things you probably already know.” — Martin Riedmiller [0:17:04]
“We have not achieved AGI (Artificial General Intelligence) until we have removed the human completely out of the loop.” — Martin Riedmiller [0:21:42]
Links Mentioned in Today’s Episode:
Martin Riedmiller
Martin Riedmiller on LinkedIn
Google DeepMind
RoboCup
How AI Happens
Sama
Jia shares the kinds of AI courses she teaches at Stanford, how students are receiving machine learning education, and the impact of AI agents, as well as understanding technical boundaries, being realistic about the limitations of AI agents, and the importance of interdisciplinary collaboration. We also delve into how Jia prioritizes latency at LiveX before finding out how machine learning has changed the way people interact with agents, both human and AI.
Key Points From This Episode:
Quotes:
“[The field of AI] is advancing so fast every day.” — Jia Li [0:03:05]
“It is very important to have more sharing and collaboration within the [AI field].” — Jia Li [0:12:40]
“Having an efficient algorithm [and] having efficient hardware and software optimization is really valuable.” — Jia Li [0:14:42]
Links Mentioned in Today’s Episode:
Jia Li on LinkedIn
LiveX AI
How AI Happens
Sama
Key Points From This Episode:
Quotes:
“Sometimes, people are very bad at asking for what they want. If you do any stint in, particularly, the more hardcore sales jobs out there, it's one of the things you're going to have to learn how to do to survive. You have to be uncomfortable and learn how to ask for things.” — @Reidoutloud_ [0:05:07]
“In order to really start to drive the accuracy of [our AI models], we needed to understand, what were users trying to do with this?” — @Reidoutloud_ [0:15:34]
“The people who are being enabled the most by AI in the current stage are the technical tinkerers. I think a lot of these tools are too technical for average knowledge workers.” — @Reidoutloud_ [0:28:32]
“Quick advice for anyone listening to this, do not start a company when you have your first kid! Horrible idea.” — @Reidoutloud_ [0:29:28]
Links Mentioned in Today’s Episode:
Reid Robinson on LinkedIn
Reid Robinson on X
Zapier
CocoNFT
How AI Happens
Sama
In this episode of How AI Happens, Justin explains how his project, Wondr Search, injects creativity into AI in a way that doesn’t alienate creators. You’ll learn how this new form of AI uses evolutionary algorithms (EAs) and differential evolution (DE) to generate music without learning from or imitating existing creative work. We also touch on the success of the six songs created by Wondr Search, why AI will never fully replace artists, and so much more. For a fascinating conversation at the intersection of art and AI, be sure to tune in today!
Key Points From This Episode:
Quotes:
“[Wondr Search] is definitely not an effort to stand up against generative AI that uses traditional ML methods. I use those a lot and there’s going to be a lot of good that comes from those – but I also think there’s going to be a market for more human-centric generative methods.” — Justin Kilb [0:06:12]
“The definition of intelligence continues to change as [humans and artificial systems] progress.” — Justin Kilb [0:24:29]
“As we make progress, people can access [AI] everywhere as long as they have an internet connection. That's exciting because you see a lot of people doing a lot of great things.” — Justin Kilb [0:26:06]
Links Mentioned in Today’s Episode:
Justin Kilb on LinkedIn
Wondr Search
‘Conserving Human Creativity with Evolutionary Generative Algorithms: A Case Study in Music Generation’
How AI Happens
Sama
Jacob shares how Gong uses AI, how it empowers its customers to build their own models, and how this ease of access for users holds the promise of a brighter future. We also learn more about the inner workings of Gong and how it trains its own models, why it’s not too interested in tracking soft skills right now, what we need to be doing more of to build more trust in chatbots, and our guest’s summation of why technology is advancing like a runaway train.
Key Points From This Episode:
Quotes:
“We don’t expect our customers to suddenly become data scientists and learn about modeling and everything, so we give them a very intuitive, relatively simple environment in which they can define their own models.” — @eckely [0:07:03]
“[Data] is not a huge obstacle to adopting smart trackers.” — @eckely [0:12:13]
“Our current vibe is there’s a limit to this technology. We are still unevolved apes.” — @eckely [0:16:27]
Links Mentioned in Today’s Episode:
Jacob Eckel on LinkedIn
Jacob Eckel on X
Gong
How AI Happens
Sama
Bobak further opines on the pros and cons of Perplexity and GPT 4.0, why the technology uses both models, and the differences between them. Finally, our guest tells us why Brilliant Labs is open-source and reminds us why public participation is so important.
Key Points From This Episode:
Quotes:
“To have a second pair of eyes that can connect everything we see with all the information on the web and everything we’ve seen previously – is an incredible thing.” — @btavangar [0:13:12]
“For live web search, Perplexity – is the most precise [and] it gives the most meaningful answers from the live web.” — @btavangar [0:26:40]
“The [AI] space is changing so fast. It’s exciting [and] it’s good for all of us but we don’t believe you should ever be locked to one model or another.” — @btavangar [0:28:45]
Links Mentioned in Today’s Episode:
Bobak Tavangar on LinkedIn
Bobak Tavangar on X
Bobak Tavangar on Instagram
Brilliant Labs
Perplexity AI
GPT 4.0
How AI Happens
Sama
Andrew shares how generative AI is used by academic institutions, why employers and educators need to curb their fear of AI, what we need to consider for using AI responsibly, and the ins and outs of Andrew’s podcast, Insight x Design.
Key Points From This Episode:
Quotes:
“Once I learned about lakehouses and Apache Iceberg and how you can just do all of your work on top of the data lake itself, it really made my life a lot easier with doing real-time analytics.” — @insightsxdesign [0:04:24]
“Data analysts have always been expected to be technical, but now, given the rise of the amount of data that we’re dealing with and the limitations of data engineering teams and their capacity, data analysts are expected to do a lot more data engineering.” — @insightsxdesign [0:07:49]
“Keeping it simple and short is ideal when dealing with AI.” — @insightsxdesign [0:12:58]
“The purpose of higher education isn’t to get a piece of paper, it’s to learn something and to gain new skills.” — @insightsxdesign [0:17:35]
Links Mentioned in Today’s Episode:
Andrew Madson
Andrew Madson on LinkedIn
Andrew Madson on X
Andrew Madson on Instagram
Dremio
Insights x Design
Apache Iceberg
ChatGPT
Perplexity AI
Gemini
Anaconda
Peter Wang on LinkedIn
How AI Happens
Sama
Tom shares further thoughts on venture capital financing for AI tech and whether or not data centers pose a threat to the relevance of the Cloud, as well as his predictions for the future of GPUs and much more.
Key Points From This Episode:
Quotes:
“Innovation is happening at such a deep technological level and that is at the core of machine learning models.” — @tomastungusz [0:03:37]
“Right now, we’re looking at where [is] there rote work or human toil that can be repeated with AI? That’s one big question where there’s not a really big incumbent.” — @tomastungusz [0:05:51]
“If you are the leader of a team or a department or a business unit or a company, you cannot be in a position where you are caught off guard by AI. You need to be on the forefront.” — @tomastungusz [0:08:30]
“The dominant dynamic within consumer products is the least friction in a user experience always wins.” — @tomastungusz [0:14:05]
Links Mentioned in Today’s Episode:
Tomasz Tunguz
Tomasz Tunguz on LinkedIn
Tomasz Tunguz on X
Theory Ventures
How AI Happens
Sama