Example: matrix multiplication using fewer multiplication operations (see the sketch after these examples).
There were also combinatorics problems, “packing” problems (like multiple hexagons inside a bigger hexagon), and others. All of that is in the paper.
Also, “This automated approach enables AlphaEvolve to discover a heuristic that yields an average 23% kernel speedup across all kernels over the existing expert-designed heuristic, and a corresponding 1% reduction in Gemini’s overall training time.”
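For concreteness, here is the textbook illustration of what “fewer multiplication operations” means: Strassen’s classic scheme multiplies two 2×2 matrices with 7 scalar multiplications instead of the naive 8. This is the standard construction, not AlphaEvolve’s specific discovery, just a sketch of the kind of result involved.

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices A and B using 7 scalar multiplications
    (the naive algorithm uses 8)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B

    # Strassen's seven products.
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)

    # Recombine the products into the four entries of A @ B.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

# Quick check against the ordinary definition of matrix multiplication.
assert strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```

Savings like this compound when the scheme is applied recursively to block matrices, which is why reducing the multiplication count matters.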
How did the system work?
It’s essentially an evolutionary/genetic algorithm, with LLMs providing “mutations” for the code. Then the code is automatically evaluated, bad solutions are discarded, and good solutions are kept.
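As a rough sketch of that loop (purely illustrative of the general evolutionary pattern, not AlphaEvolve’s actual architecture or API; `llm_mutate` and `evaluate` are hypothetical stand-ins for the LLM call and the automated scorer):

```python
import random

def evolve(seed_program, llm_mutate, evaluate,
           population_size=20, generations=100):
    """Evolve programs: an LLM proposes mutations, an automatic evaluator
    scores them, and only better-scoring candidates survive."""
    population = [(seed_program, evaluate(seed_program))]

    for _ in range(generations):
        # Tournament selection: sample a few candidates, keep the best as parent.
        sample = random.sample(population, k=min(3, len(population)))
        parent, _ = max(sample, key=lambda pair: pair[1])

        # The LLM supplies the "mutation": a modified version of the parent program.
        child = llm_mutate(parent)
        population.append((child, evaluate(child)))

        # Discard the worst-scoring candidate once the pool is full.
        if len(population) > population_size:
            population.remove(min(population, key=lambda pair: pair[1]))

    # Return the best program found and its score.
    return max(population, key=lambda pair: pair[1])
```

The real system is considerably more elaborate, but the skeleton is the same: LLM-proposed edits, automatic evaluation, and selection pressure.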
What makes you think it’s novel?
These solutions hadn’t previously been discovered by humans. Unless the authors just couldn’t find the right references, of course, but I assume they were diligent.
Would it have worked without the LLM?
You mean, “could humans have discovered them, given enough time and effort?” Yes, most likely.
Um, ok, were any of the examples impressive? For example, did any of the examples derive their improvement in some way other than chewing through bits of algebraicness? (The answer could easily be yes without being impressive, for example by applying some obvious known idea to some problem that simply hadn’t happened to have that idea applied to it before, but that’s a good search criterion.)
I don’t think so.
Ok gotcha, thanks. In that case it doesn’t seem super relevant to me. I would expect there to be lots of gains in any areas where there’s algebraicness to chew through; and I don’t think this indicates much about whether we’re getting AGI. Being able to “unlock” domains, so that you can now chew through algebraicness there, does weakly indicate something, but it’s a very fuzzy signal IMO.
(For contrast, a behavior such as originarily producing math concepts has a large non-algebraic component, and would IMO be a fairly strong indicator of general intelligence.)
https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/
https://arxiv.org/pdf/2506.13131