The Stephen Wolfram Podcast
By Wolfram Research
Rating: 4.6 (5,353 ratings)
The podcast currently has 421 episodes available.
Stephen Wolfram answers general questions from his viewers about science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa
Questions include: What is machine learning in layman's terms? - What do you think about opossums? Mine is getting big; it is over 3 pounds now! - What do you think about thermodynamic computing, as pursued by companies like Extropic AI and Normal Computing? - How does water vapor work? When the sun shines on the ocean, it doesn't get to 100 degrees, so how does the water escape being a liquid and rise up to the clouds? - What's your intuition for the future of ML after your most recent blog post? - What is the simplest form of machine learning? The hardest? - What's the difference between volume, weight and mass?
Stephen Wolfram answers questions from his viewers about the history of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa
Questions include: Recent thoughts on history - Was SMP or Mathematica inspired by LISP and what are the pros and cons of LISP-like languages? - Was the decision to have Mathematica untyped unlike something like Lean (proof checker) a good decision for usability or would you do it differently today? - Type-checking always felt like dimensional analysis. - Was your idea to use "transformations on symbolic expressions" a sudden insight after reading, say, Schönfinkel on combinators, or did it follow from working out atoms of computation, something else? - What is the history of lazy evaluation? - Have you come up with any new theories of human reasoning from working on Mathematica and computation?
Stephen Wolfram answers questions from his viewers about the future of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa
Questions include: What do you view as the best strategies for reducing or eliminating hallucination/confabulation right now? Is there any chance that we'll be able to get something like confidence levels along with the responses we get from large language models? - I love this topic (fine-tuning of LLMs); it's something I'm currently studying. - The AI Scientist is an LLM-based system that can conduct scientific research independently, from generating ideas to writing papers and even peer-reviewing its own work. How do you see this technology impacting the development of Wolfram|Alpha and other knowledge-based systems in the future? - It's fascinating how different the responses from LLMs are depending on how you pose your questions. - I have found that giving key terms and then asking the LLM to take the "concepts" and relate them in a particular way seems to work pretty well. - How are we going to formalize the language structures arising from this microinformatization, which has created a semantic syntax we had not observed through structuralism? - Why is being rude and "loud" to the model always the most efficient way to get what you want if the one-shot fails? I notice this applies to nearly all of them. I think it's also in the top prompt engineering "rules." I always feel bad even though the model has no feelings, but I need the proper reply in the fewest questions possible. - AI Scientist does what you're describing. The subtle difference is that it is generating plausible ideas, creating code experiments and then scoring them; the question is whether this approach can/should be extended with Alpha. - How soon do you think we'll have LLMs that can retrain in real time? - What's your take on integrating memory into LLMs to enable retention across sessions? How could this impact their performance and capabilities? - Do you think computational analytics tools are keeping up with the recent AI trends?
- Would it be interesting to let the LLM invent new tokens in order to compress its memories even further? - Philosophical question: if one posts a Wolfram-generated plot of a linear function to social media, should it be tagged "made with AI" given that the medium is math? It's probably a matter for each social media platform; just curious. A math plot is objective, so it's different from doing an AI face swap, for example. - For future archaeologists: this stream was mostly human generated. - Professor_Neurobot: Despite my name, I promise I am not a bot.
Stephen Wolfram answers questions from his viewers about business, innovation, and managing life as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-business-qa
Questions include: Can you tell us more about your book collection (or your artifact or art collection)? - How does a "dashboard/portal" webpage, such as you sometimes show, with lists and links to your projects and such tools, fit into your workflow and daily routines? - Stephen, are you also the CTO of Wolfram Research? What are the characteristics of a good CTO? - Do you find video calls draining? - What is the Wolfram software continuity plan in the event something happens to you? You are so instrumental in the development of this software that your absence would be a hard gap to fill. - Did you have a mentor while creating your business? Do you find mentors useful? - Faces distract from logic because we spend too long assessing people's emotions. - Just wanted to share my personal Mathematica "story." I first got to know Mathematica way back when it ran on DOS in text mode and switched into graphics mode when I wanted to plot something. Later I switched to an early Windows version. Back then Macs were too expensive for me, but I loved that Mathematica was a free integral part of Macs! - Could you share your methods for generating and keeping track of ideas? Do you have favorite techniques for being productive?
Stephen Wolfram answers general questions from his viewers about science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa
Questions include: How do galaxies form? - Is it true that animals can sense earthquakes? How? Why can't humans? - I always found it fascinating how birds pick up those fields to "get directions" on where to fly. - People with joint pain can often sense air pressure and humidity changes, and some develop a sense for when the weather will change as a result ("My knee hurts; the weather will change soon."). - What is the difference between speed and acceleration? - If energy is conserved, how do we run out of it? - How do mutations or radiation effects affect biological evolution? - How size-dependent is the universe? Could it be possible to create a mini-galaxy within a controlled environment here on Earth? - Is there fusion going on in a black hole?
Stephen Wolfram answers questions from his viewers about the history of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa
Questions include: What is the history of data visualization? Was the discipline only able to flourish relatively recently with the introduction of computers, or is there a deep and rich history of people creating pictures by hand to extract visual insights from abstract data? - Nikola Tesla was building a machine for the wireless transmission of electricity. It seems like we're getting to a place where we can beam solar energy down to Earth from solar-harvesting satellites. I'm curious what Stephen's take on this is, the timeline for this research, and what is needed to make it a reality. - From your perspective, what is the importance of compression functions in computer science? - Do we know who designed written language? Or are there still missing pieces in history such that we can't properly map out the history of written work? - What is the stage of development/history around implementing cellular automata in hardware, such as quantum dot cellular automata? What large-scale, hardware-accelerated simulations would be interesting? - I'm curious about the history of eyeglasses. Why does the tech not seem to have advanced much?
Stephen Wolfram answers questions from his viewers about the future of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa
Questions include: How do you envision the future of physics-informed neuroscience? In particular, do you believe that despite the brain being a warm environment, quantum effects such as entanglement and superposition play a role in its function? Finally, do you think the concept of "quantum cognition" will remain more philosophical than scientific? - Are microtubules like electrochemical transistors? - Could the concrete Boolean arithmetic functional devices in our brains be affected by temperature, or is temperature one layer above that? - Which do you think would happen first: repairing brains naturally through natural science research or having the first "computer brain" transplant for those who suffer brain traumas? - I've heard AI should be able to develop treatments for cancer, but it will take decades of machine learning. What do you think could accelerate this learning process? - Maybe not a cure, but a control? Micro-monitoring and cancer-killing nanobots? - Will we ever perfect the human immune system? - Do you think that the relevance weight of the "microbiome" in medical science will increase in the future? - Maybe not an artificial brain, but what about artificial hearts? Would those be easier to have a technological implant vs. a natural one? Or even livers or kidneys? - In the future, hopefully we can have a machine/detector that can detect every atom or molecule in our bodies, and we can simulate solutions on a fast computer.
Stephen Wolfram answers questions from his viewers about business, innovation, and managing life as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-business-qa
Questions include: What can you tell us about the next Wolfram Language release? What are you most excited to see added to the language? - Do you worry about the increasing appearance of incompetence in the world? - The version numbers do get fuzzy over time... Are you thinking about using years instead? It would be clearer how old your version is... - Any advice for autodidacts? How does one turn a personal curiosity or question about science into a structured project that can be published, as you often do? - Do you think AI will take away some human autonomy, ultimately making humans less intelligent overall as they rely on AI too much? - How do you think the patent system could be improved by AI? - I wonder if we will go through a cycle of trusting AI far too much for answers to our questions, and then when we get too much incorrect information we give up and move to a position of total distrust. Where do you think we will end up? - What has been your favorite place/country to visit? Is there someplace you have yet to visit that you would like to? - What is it like starting one's first business? I'm just wondering because I don't personally know anyone who has a business. - On that topic, if you had to start an innovation-intensive business nowadays, requiring R&D before revenues, would you go the VC route or find ways to bootstrap it (and if so, how)? - Can AI systems be effectively applied to customer support roles, or is there too large of a security vulnerability? - Can you elaborate on your experience expanding your business and products to be used by others whose language(s) you don't speak? - What's your favorite new revelation or idea you read in your recent deep dive into philosophy? - How often do you revisit your own personal goals in life and in your career? What are some things you look at that make you feel accomplished, whether small or big?
Stephen Wolfram answers questions from his viewers about the future of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa
Questions include: Can AIs be creative? Should AIs rethink art? - What I think also matters is how creative the humans who write the code are. - Do you think art is a kind of multimodal/scale compression of very complex perceptions or ideas into a single form? Is art a way of coherently representing lots of unconscious computation? - There are fundamental principles in art, seen clearly through art history. The question is, how much of these fundamentals does the user have a grasp on, and how can they use that as leverage? - Could there be "laws of art" available to science, using AI? - AI art is already a form in itself. I am usually able to tell AI art from human art, but maybe that will be harder as tech progresses. - Interesting (the transferal of images without language serialization in between). Do you foresee something similar for complex abstract ideas embodied in human neural networks or firing patterns? - To what extent can AI follow the speed of our mental images that sometimes we can't follow up with, not only in terms of communicative language but in terms of recognition? - Keeping with the "future of art" theme, will there even be a place for human artists in the future, or will generative AI make it mostly obsolete, say decades from now? - Art is an "idea" in the artist's brain that hits the friction of the medium: an instrument in music, or paint or clay in visual art. AI art may become much more interesting once it has more actuators. - Do you believe neural interfacing can increase observer capacity? - The idea that brains operate on "millisecond" scale seems wrong. Brains are not digitized control loops; they are continuous systems. - Could Neuralink-type technologies, with near-speed-of-light transfer speeds between persons, make you think this latency could become almost negligible someday? - Apparently there is a vast difference in people's ability to visualize images in their minds. Interestingly, many artists seem to lack this ability. 
- During your discussion with a robot, the robot said it liked to tell jokes and make people laugh. How possible is it for robots to develop their own personalities outside of what they are programmed to do?
Stephen Wolfram answers questions from his viewers about business, innovation, and managing life as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-business-qa
Questions include: I loved the discussion with a robot! Based on that talk, how do you imagine a future of robots in business? (Robot coworkers, bosses, assistants, etc.) Will robots be able to effectively communicate with their human companions and vice versa? - What business ideas can you think of for useful AI applications? How can we make building your own AI for your own purposes easy and affordable (such as having a bot that helps you find weekly coupons and savings for grocery trips, or for mapping ideal travel times)? - What do you think of "robots" remotely operated by humans as a precursor to autonomous robots? A new spin on outsourced blue-collar labor? - I believe that another crucial thing is that not only should technologies adapt to people's demands, but humans should quickly adapt to technology's demands in the field. Just recall how weird the computer mouse was for us 30–40 years ago. - It is very useful for us humans to understand what the AI knows when it outputs its LLM computations. - Maybe some layered hybrid architecture could work with LLMs providing the base, so to speak, while the other modules do more to correct what is there, perhaps? - What's the gold in AI, LLMs, etc.? Is there some simpler algorithm that can learn, instead of big neural networks? Like trying to find gold in a goldmine? - What do you make of the apparent disconnect between the heavy capital expenditure into AI infrastructure vs. the lagging revenues from applications at the present time? Are we in for a "2000 telecom/fiber"-like setback? - For full robot integration into human society, will we see robot "coffee shops" where robots will be able to go and refuel/charge? What business opportunities would working robots open up? - How was your annual summer of professoring? Kudos to all the student projects! - Will you let future robots enroll in the Summer School?