“The Joy of Why” is a Quanta Magazine podcast about curiosity and the pursuit of knowledge. The mathematician and author Steven Strogatz and the cosmologist and author Janna Levin take turns interviewing leading researchers about the great scientific and mathematical questions of our time. New episodes are released every other Wednesday.
Quanta Magazine is a Pulitzer Prize–winning, editorially independent online publication launched and supported by the Simons Foundation to illuminate big ideas in science and math through public service journalism. Quanta’s reporters and editors focus on developments in mathematics, theoretical physics, theoretical computer science and the basic life sciences, emphasizing timely, accurate, in-depth and well-crafted articles for its broad discerning audience. In 2023, Steven Strogatz received a National Academies Eric and Wendy Schmidt Award for Excellence in Science Communications partly for his work on “The Joy of Why.”
Large language models (LLMs) are becoming increasingly impressive at generating human-like text and answering questions, but whether they understand the meaning of the words they produce is hotly debated. A major challenge is that LLMs are black boxes: They make predictions and decisions based on the order of words, but they cannot communicate their reasons for doing so.
Ellie Pavlick at Brown University is building models that could help us understand how LLMs process language compared with humans. In this episode of “The Joy of Why,” Pavlick discusses what we know and don’t know about LLM language processing, how it differs from human language processing, and how a better understanding of LLMs could also help us better appreciate our own capacity for knowledge and creativity.