The Stephen Wolfram Podcast

Future of Science and Technology Q&A (August 16, 2024)



Stephen Wolfram answers questions from his viewers about the future of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa


Questions include:

- What do you view as the best strategies for reducing or eliminating hallucination/confabulation right now? Is there any chance that we'll be able to get something like confidence levels along with the responses we get from large language models?
- I love this topic (fine-tuning of LLMs); it's something I'm currently studying.
- The AI Scientist is an LLM-based system that can conduct scientific research independently, from generating ideas to writing papers and even peer-reviewing its own work. How do you see this technology impacting the development of Wolfram|Alpha and other knowledge-based systems in the future?
- It's fascinating how differently LLMs respond depending on how you pose your questions.
- I have found that giving key terms and then asking the LLM to take the "concepts" and relate them in a particular way seems to work pretty well.
- How are we going to formalize the language structures arising from this microinformatization, which was capable of creating a semantic syntax that we had not observed through structuralism?
- Why is being rude and "loud" to the model always the most efficient way to get what you want if the one-shot fails? I notice this applies to nearly all of them. I think it's also among the top prompt-engineering "rules." I always feel bad even though the model has no feelings, but I need the proper reply in the fewest questions.
- The AI Scientist does what you're describing. The subtle difference is that it generates plausible ideas, creates code experiments and then scores them; the question is whether this approach can/should be extended with Wolfram|Alpha.
- How soon do you think we'll have LLMs that can retrain in real time?
- What's your take on integrating memory into LLMs to enable retention across sessions? How could this impact their performance and capabilities?
- Do you think computational analytics tools are keeping up with the recent AI trends?
- Would it be interesting to let the LLM invent new tokens in order to compress its memories even further?
- Philosophical question: if one posts a Wolfram-generated plot of a linear function to social media, since the medium is math, should it be tagged "made with AI"? It's probably the social media platform's call; just curious. A math plot is objective, so it's different from doing an AI face swap, for example.
- For future archaeologists: this stream was mostly human-generated.
- Professor_Neurobot: Despite my name, I promise I am not a bot.


The Stephen Wolfram Podcast by Wolfram Research

4.6 (58 ratings)


More shows like The Stephen Wolfram Podcast

- EconTalk by Russ Roberts (4,226 listeners)
- Closer To Truth by Closer To Truth (242 listeners)
- a16z Podcast by Andreessen Horowitz (1,030 listeners)
- Conversations with Tyler by Mercatus Center at George Mason University (2,383 listeners)
- The Quanta Podcast by Quanta Magazine (482 listeners)
- Into the Impossible With Brian Keating by Big Bang Productions Inc. (1,041 listeners)
- Physics World Weekly Podcast by Physics World (77 listeners)
- Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas by Sean Carroll | Wondery (4,137 listeners)
- ManifoldOne by Steve Hsu (87 listeners)
- Machine Learning Street Talk (MLST) by Machine Learning Street Talk (MLST) (88 listeners)
- Dwarkesh Podcast by Dwarkesh Patel (386 listeners)
- Theories of Everything with Curt Jaimungal by Theories of Everything (460 listeners)
- The Joy of Why by Steven Strogatz, Janna Levin and Quanta Magazine (498 listeners)
- No Priors: Artificial Intelligence | Technology | Startups by Conviction (120 listeners)
- Training Data by Sequoia Capital (40 listeners)