Key mathematical tool sees first advance in 24 years

By Jacob Aron

9 December 2011

One of the most fundamental mathematical tools outside of everyday arithmetic has seen its first improvement in 24 years.

Although the new method of matrix multiplication, an essential tool for solving problems in physics, economics and most areas of science, is not practical for use on today’s computers, it is a surprising theoretical leap that could one day have myriad applications. And it’s creating quite a splash on the mathematical blogosphere.

“All this is extremely exciting, and is one of the best results proved in years in all of theory,” wrote Richard Lipton, a computer scientist at the Georgia Institute of Technology in Atlanta, on his blog.

“I actually knew a month ago that this was coming – I had a hell of a time keeping the secret,” wrote Scott Aaronson, another computer scientist and blogger, based at the Massachusetts Institute of Technology.

Reducing omega

A matrix is an array of numbers, and matrix multiplication is a way of combining two matrices to produce a third – essentially regular multiplication kicked up a notch. The simplest way of multiplying two n × n matrices – those with n rows and n columns – takes roughly n^3 steps, but in 1969, Volker Strassen discovered an algorithm that did it in n^2.807 steps.

In practice, matrix multiplication is generally performed using either the simple n^3 method or Strassen's algorithm, which is more complex but also more efficient for large matrices. Theorists, however, have tried many other ways to reduce the number of steps by finding a smaller value for the exponent, known as omega.
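The contrast between the two approaches can be sketched in code. Below is an illustrative Python version (not the researchers' own code): the schoolbook n^3 method, and Strassen's trick of splitting each matrix into quadrants and getting away with seven recursive multiplications instead of eight, which is where the exponent 2.807 (log base 2 of 7) comes from.

```python
def naive_multiply(A, B):
    """Schoolbook method: n^3 scalar multiplications for n x n matrices."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def strassen_multiply(A, B):
    """Strassen's algorithm for n x n matrices, n a power of two.
    Seven recursive products instead of eight gives ~n^2.807 steps."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):  # split into four h x h quadrants
        return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
                [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    a, b, c, d = quad(A)
    e, f, g, hh = quad(B)
    # Strassen's seven products
    p1 = strassen_multiply(a, sub(f, hh))
    p2 = strassen_multiply(add(a, b), hh)
    p3 = strassen_multiply(add(c, d), e)
    p4 = strassen_multiply(d, sub(g, e))
    p5 = strassen_multiply(add(a, d), add(e, hh))
    p6 = strassen_multiply(sub(b, d), add(g, hh))
    p7 = strassen_multiply(sub(a, c), add(e, f))
    top_left = add(sub(add(p5, p4), p2), p6)
    top_right = add(p1, p2)
    bottom_left = add(p3, p4)
    bottom_right = sub(sub(add(p1, p5), p3), p7)
    # stitch the quadrants back together
    return ([tl + tr for tl, tr in zip(top_left, top_right)] +
            [bl + br for bl, br in zip(bottom_left, bottom_right)])
```

The saving of one multiplication per level of recursion seems tiny, but it compounds: at each halving of the matrix, seven subproblems are spawned instead of eight, and for large n that difference dominates.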

These efforts culminated in 1987 with an algorithm, developed by Don Coppersmith and Shmuel Winograd, that put omega at 2.376 – and remained the fastest known for 24 years.

Buried solution

Now, Virginia Vassilevska-Williams, who splits her time between the University of California, Berkeley, and Stanford University, has broken that record – by tweaking the algorithm to produce an omega of 2.373.
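A back-of-the-envelope comparison shows why the exponent matters so much for huge matrices. The sketch below is illustrative only – it counts raw steps n^omega and ignores the enormous constant factors that make the post-Strassen algorithms impractical on real computers:

```python
# Rough step counts for multiplying two n x n matrices,
# ignoring constant factors (which are huge for the later algorithms).
n = 100_000
for name, omega in [
    ("schoolbook", 3.0),
    ("Strassen, 1969", 2.807),
    ("Coppersmith-Winograd, 1987", 2.376),
    ("Vassilevska-Williams, 2011", 2.373),
]:
    print(f"{name:>28}: ~{n ** omega:.2e} steps")
```

Even the move from 2.376 to 2.373 shaves a measurable factor off the step count at this scale, which is why a third-decimal-place improvement counts as news.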

The idea behind the improvement was actually present in Coppersmith and Winograd’s original method, which essentially involves applying a starting algorithm twice in a row. The trouble was, applying the algorithm a third time seemingly gave no improvement. “This led people to believe that higher levels of recursion won’t give an improvement either,” says Vassilevska-Williams, so they gave up. Each application of the algorithm involved solving increasing numbers of difficult equations, and with no pay-off at the end it didn’t seem worth the hard work.

Then in 2010, Andrew Stothers at the University of Edinburgh, UK, applied the algorithm four times as part of his PhD thesis – but buried the result deep in the write-up, ensuring that it went unnoticed by the wider mathematics community. He then left research to work in the financial services industry. “It wasn’t intentional that I didn’t publicise it,” he says.

Mathematical toolbox

When Vassilevska-Williams, who was working independently, came across Stothers’s thesis, she realised she could use part of his work. She applied the starting algorithm eight times to discover the final omega value of 2.373.

Is it worth getting excited over such a small improvement? “Yes, because it has stood for so long and many people have worked on it,” says William Gasarch, a computer scientist at the University of Maryland in College Park. “Often theorists work on problems that people don’t really care about. This is not the case here.”

Although today’s computers can’t take advantage of this specific speed advance, Vassilevska-Williams has also created a mathematical framework that could allow for further theoretical improvements that might be practically useful for computing. “You can think of this as a new tool to be added to the toolbox,” she says.

Reference: Breaking the Coppersmith-Winograd barrier

This article has been edited since it was first posted
