AlphaGeometry by DeepMind solves IMO geometry problems at IMO medalist level


A paper and a blog post about DeepMind's new model AlphaGeometry were published yesterday. The model solved 25 out of 30 geometry problems from the IMOs 2000-2022. The previous SOTA solved 10, and the average score of a gold medalist was 25.9 on the same problems:

![ ](https://codeforces.me/c9afc2/Screenshot from 2024-01-18 14-21-50.png)

The model was trained only on synthetic data, and it seems (to me) that more data would lead to even better results:

![ ](https://codeforces.me/c9afc2/Screenshot from 2024-01-18 14-21-50.png)

A notable thing is that AlphaGeometry combines a language model with a rule-bound deduction engine and solves problems with an approach similar to how humans do: the deduction engine derives what follows from the known facts, and the language model suggests auxiliary constructions when the deduction gets stuck (see the sketch below).
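To make that division of labor concrete, here is a minimal, self-contained sketch of such a neuro-symbolic loop. Everything in it (the forward-chaining `deduce`, the `solve` loop, the `propose_construction` stand-in for the language model, and the toy rules) is an illustrative placeholder of my own, not AlphaGeometry's actual code or API:

```python
def deduce(facts, rules):
    """Forward-chain: apply rules (premises -> conclusion) until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def solve(premises, goal, rules, propose_construction, budget=5):
    """Alternate symbolic deduction with proposed constructions until the goal is derived."""
    facts = set(premises)
    for _ in range(budget):
        facts = deduce(facts, rules)
        if goal in facts:
            return facts                          # goal reachable from current facts
        new_facts = propose_construction(facts)   # stand-in for the language model's suggestion
        if not new_facts:
            break
        facts |= new_facts
    return None

# Toy usage: one proposed construction ("add point M") unlocks the final deduction.
rules = [({"A", "B"}, "C"), ({"C", "M"}, "GOAL")]
proposals = iter([{"M"}])
print(solve({"A", "B"}, "GOAL", rules, lambda facts: next(proposals, set())))
```

As I understand the paper, the real system works over geometric predicates and produces a human-readable proof by tracing back the deductions, but the control flow is essentially this alternation.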

The paper can be read here and the blog post can be found here.

Own speculation: I don't see any clear reason why a similar strategy couldn't be applied to other IMO areas (at least number theory and algebra), but I'm not an expert and haven't read all of the details. Generating a lot of data about, for example, inequalities or functional equations doesn't sound much harder than generating data about geometry, though I might be missing some key reason why good data is easy to generate specifically for geometry. I'm not sure whether this has direct implications for competitive programming AIs: proofs of math problems can be verified automatically, but I'm not sure the same applies to algorithms. Still, overall a very interesting approach and set of results.
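To illustrate what "verified automatically" means here: a proof written in a formal system is checked mechanically by the proof assistant's kernel, with no human judgement involved. A tiny Lean 4 example (my own, unrelated to AlphaGeometry, which uses its own geometry-specific representation):

```lean
-- A machine-checkable statement and proof: Lean's kernel verifies it automatically.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

There is no similarly universal, automatic check for whether an arbitrary algorithm is both correct and efficient, which is the gap I'm pointing at for competitive programming.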

Tags: competitive math, math, geometry
