I bet $500 on AI winning the IMO gold medal by 2026

The bet was arranged on Twitter between @MichaelVassar and me (link).

Conditions are similar to this question on Metaculus, except for the open-source requirement (I win even if the AI is closed-source, and in fact I would very much prefer it to be closed-source).

@Zvi has agreed to adjudicate this bet in case there is no agreement on resolution.


Michael has asked me two questions by email, and I’m sharing my answers.

Any thoughts on how to turn winning these sorts of bets into people actually updating?

Geoffrey Hinton mentioned recently that, while GPT-4 can “already do simple reasoning”, “reasoning is the area where we’re still better” [source].

It seems to me that, once AI can beat humans at math, there won’t be anything else fundamental where we’re still better. I wish more people realized this now.

For the people who disagree, I would like to get them to make their views known beforehand. I feel that many people just don’t think about it enough and are in a “superposition state”, where their belief can collapse to anything without causing them any surprise or model updating. Maybe if they think it through and commit to their views today, they’ll be more surprised when it happens and therefore more willing to change their minds about important matters.

Kelvin, thoughts on how you’ll update if it doesn’t at least come close?

Yes. I’ll update in the following directions:

  • That it is much harder for search algorithms to do “amplification” right in rich environments with very large branching factors, like math (and the real world), than in games such as Go.

  • That superintelligence is further away than I thought, even if LLMs find many economic use cases and replace many human jobs.

  • That we’re probably bottlenecked on search algorithms, rather than on compute power or model size. This would have policy implications.