Breakthrough in Algorithm Design: Memory May Outweigh Time
In a major breakthrough in computational complexity theory, MIT computer scientist Ryan Williams has shown that a little memory can sometimes be worth a lot of time. His research, published in early 2025, demonstrates a method for transforming any algorithm to use far less memory, roughly the square root of its running time, while still solving the same problem. The result overturns long-standing beliefs about the tradeoff between time and space in computing, and pushes forward the boundary on one of computer science's biggest open problems: the P vs. PSPACE question.
In July 2024, MIT computer scientist Ryan Williams made a surprising discovery. He found evidence that memory in computing might be far more powerful than experts believed. His proof suggested that even a small amount of memory could sometimes be as valuable as having a lot of time to solve problems. At first, Williams thought this couldn’t be true. He set his proof aside, expecting to find an error later. But after reviewing it for hours, he couldn’t find anything wrong.
Over the next months, Williams refined his proof and shared it with other researchers. In February 2025, he published his findings, earning praise from computer science experts. Avi Wigderson, a leading theorist, called the result “amazing” and told Williams, “You blew my mind.”
In computing, time (how long an algorithm runs) and memory (the space used to store data) are the two most important resources. For hard problems, algorithms were thought to need memory roughly proportional to their running time, and experts believed this limit could not be substantially improved. But Williams' proof gives a new method to transform any algorithm so that it uses much less memory, breaking decades of assumptions.
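Stated in the standard notation of complexity theory, the new simulation (as given in Williams' paper) says that anything computable in time t is computable in roughly square-root space, up to a logarithmic factor:

$$\mathsf{TIME}[t] \;\subseteq\; \mathsf{SPACE}\!\left[O\!\left(\sqrt{t \log t}\,\right)\right].$$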
This breakthrough also led to another key result: a proof that certain problems solvable with a given amount of memory cannot be solved without far more time, nearly the square of that memory budget. While experts had long suspected such a separation between time and space, no one could prove it until now. It could even help settle the long-standing question of whether "P" (problems solvable quickly) is strictly smaller than "PSPACE" (problems solvable with reasonable memory).
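In fact, the separation falls out of the new simulation almost immediately once it is combined with the classical space hierarchy theorem. A one-line proof sketch, in the same notation: if, for some ε > 0,

$$\mathsf{SPACE}[s] \;\subseteq\; \mathsf{TIME}\!\left[s^{2-\varepsilon}\right] \;\subseteq\; \mathsf{SPACE}\!\left[O\!\left(\sqrt{s^{2-\varepsilon}\log s}\,\right)\right] \;=\; \mathsf{SPACE}[o(s)],$$

this would contradict the space hierarchy theorem. Hence some problems solvable in space s require more than s^{2-ε} time.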
Williams’ path to this discovery stretches back decades. Growing up on a farm in Alabama, he was fascinated by computers from the age of seven. He pursued computational complexity theory—a field that studies how time and memory affect problem-solving—and eventually studied at Cornell University, where pioneers like Juris Hartmanis had laid the groundwork for modern complexity theory. Hartmanis and Richard Stearns had, in the 1960s, created formal definitions for time and space in computing, allowing researchers to classify problems into groups, or “complexity classes.” Among these, “P” covers problems solvable quickly, while “PSPACE” includes those solvable with reasonable memory, even if they take longer. Experts long believed PSPACE was more powerful because memory can be reused, while time cannot.
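In symbols, P is the union of the classes TIME[n^k] over all constants k, PSPACE is the union of the classes SPACE[n^k], and the containment below is known while its strictness remains open:

$$\mathsf{P} \;=\; \bigcup_{k \ge 1} \mathsf{TIME}[n^k] \;\subseteq\; \bigcup_{k \ge 1} \mathsf{SPACE}[n^k] \;=\; \mathsf{PSPACE}.$$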
Attempts to connect time and space led to progress in 1975 when John Hopcroft, Wolfgang Paul, and Leslie Valiant created a universal method to turn algorithms into versions that use slightly less space. But their method had limits, and later work showed those limits couldn’t be bypassed—at least, not under common assumptions about how data is stored. For nearly 50 years, no one could improve on their result.
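In the same notation, the Hopcroft–Paul–Valiant result says that any time-t algorithm (on a multitape Turing machine) can be simulated with a logarithmic-factor saving in space, and that saving stood as the state of the art for half a century:

$$\mathsf{TIME}[t] \;\subseteq\; \mathsf{SPACE}\!\left[O\!\left(\frac{t}{\log t}\right)\right].$$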
Williams’ breakthrough built on recent work by James Cook and Ian Mertz, who challenged those old assumptions. They showed that data could be stored more flexibly, like “squishing” pebbles together, to use far less space. Inspired by their approach, Williams developed a new universal method that dramatically reduced memory use—bringing it down to the square root of the algorithm’s running time. The new method is slower in practice but groundbreaking for theory.
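Williams' actual construction is built on the Cook–Mertz tree-evaluation procedure and is far subtler than anything sketched here, but the general flavor of trading time for memory shows up in a much older, classical technique: checkpointing. The toy Python sketch below (hypothetical helper names, not from Williams' paper) stores only about √t of the t intermediate states of a computation and recomputes the rest on demand, paying extra time to save memory:

```python
import math

def run_with_checkpoints(step, initial_state, t):
    """Run a t-step computation but keep only ~sqrt(t) checkpoints.

    step:          function mapping a state to the next state
    initial_state: state before step 1
    Returns (final_state, checkpoints), where checkpoints maps a step
    index i to the state reached after i steps.
    """
    interval = max(1, math.isqrt(t))   # checkpoint spacing ~ sqrt(t)
    checkpoints = {0: initial_state}
    state = initial_state
    for i in range(1, t + 1):
        state = step(state)
        if i % interval == 0:
            checkpoints[i] = state     # only ~sqrt(t) states are kept
    return state, checkpoints

def state_at(step, checkpoints, i):
    """Recover the state after step i from the nearest earlier checkpoint,
    recomputing at most ~sqrt(t) steps: extra time in exchange for memory."""
    base = max(j for j in checkpoints if j <= i)
    state = checkpoints[base]
    for _ in range(i - base):
        state = step(state)
    return state

# Toy usage: the "computation" advances a simple pseudorandom counter.
advance = lambda s: (s * 6364136223846793005 + 1442695040888963407) % 2**64
final, cps = run_with_checkpoints(advance, 1, 10_000)
print(state_at(advance, cps, 4_321))   # reconstructed without storing every state
```

The toy only saves memory because each state here is small; the remarkable part of Williams' result is achieving square-root space for arbitrary computations, where naive checkpointing breaks down.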
When Williams confirmed his proof, it stunned the computer science world. It not only shattered a half-century-old barrier but also hinted at new paths to solving the P versus PSPACE mystery.
While Williams' result doesn't fully resolve that problem, it opens the door to further breakthroughs. His method shows that, unit for unit, memory is a more powerful computational resource than time, and researchers may be able to build on it, step by step, to close the gap between these complexity classes.
As Williams himself says, “I can never prove precisely what I want, but often, what I prove is even better.”
Conclusion: Rethinking Memory, Time, and the Future of Algorithms
Ryan Williams' groundbreaking result in computational complexity theory challenges the traditional belief that memory requirements must grow nearly in step with running time. By proving that any algorithm's memory use can be reduced to roughly the square root of its running time, his research introduces a new paradigm in the study of space-time tradeoffs. This not only deepens our understanding of how algorithms scale but also offers a promising path toward resolving P vs. PSPACE, a foundational question of theoretical computer science.