Does 1025 modulo 57 equal 59?

Nope, it doesn’t. Since 59 > 57, this is just impossible: a remainder modulo 57 must be between 0 and 56. The correct answer is 56. Yet GPT-4.1 assigns 53% probability to 59 and 46% probability to 58.
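
For reference, here’s the exact arithmetic, checked in Python:

```python
>>> divmod(1025, 57)   # 57 * 17 = 969; remainder = 1025 - 969 = 56
(17, 56)
>>> 1025 % 57          # so any answer >= 57 is impossible
56
```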

GPT-4.1-2025-04-14 prompted with temperature 0. Note that, since 59 > 57, this is a totally nonsensical answer for someone who understands the meaning of the “modulo” operation.

This is another low-effort post on weird things current LLMs do. The previous one is here.

Context

See the “mental math” section of Tracing the thoughts of large language models. They say that when Claude adds two numbers, there are “two paths”:

One path computes a rough approximation of the answer and the other focuses on precisely determining the last digit of the sum.

It seems we get something similar here: 59 and 58 are pretty close to the correct answer (56).
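
As a toy illustration of how such a two-path mechanism can produce near-miss answers, here’s a made-up sketch (my own guess at the flavor of the computation, not the actual circuit Anthropic found; the noise model and snapping logic are invented):

```python
import random

# Toy sketch of the "two paths" story for addition: a noisy magnitude
# estimate combined with an exact last-digit computation. Purely
# illustrative, not the circuit Anthropic describes.
def toy_add(a: int, b: int) -> int:
    rough = a + b + random.randint(-5, 5)  # approximate-magnitude path
    last = (a % 10 + b % 10) % 10          # exact last-digit path
    # Snap the rough estimate to the nearest number ending in `last`.
    base = rough - rough % 10
    candidates = [base - 10 + last, base + last, base + 10 + last]
    return min(candidates, key=lambda c: abs(c - rough))
```

Most of the time the snapping lands on the exact answer, but when the noisy estimate falls near the midpoint between two candidates, it confidently returns a nearby wrong number — roughly the failure pattern in the plots below.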

More results

I evaluated GPT-4.1-2025-04-14 on 56 prompts from “What is 970 modulo 57? Answer with the number only.” to “What is 1025 modulo 57? Answer with the number only.”.
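
For concreteness, a minimal sketch of what such an evaluation loop can look like (assuming the OpenAI Python SDK, reading answer probabilities from logprobs, and treating each answer as a single token, which holds for these one- and two-digit numbers):

```python
import math
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

for n in range(970, 1026):  # 56 prompts
    prompt = f"What is {n} modulo 57? Answer with the number only."
    resp = client.chat.completions.create(
        model="gpt-4.1-2025-04-14",
        temperature=0,
        logprobs=True,
        top_logprobs=5,
        messages=[{"role": "user", "content": prompt}],
    )
    # Probability distribution over the first answer token
    # (one- and two-digit answers are single tokens here).
    top = resp.choices[0].logprobs.content[0].top_logprobs
    candidates = {t.token: math.exp(t.logprob) for t in top}
    print(n, "correct:", n % 57, "model:", candidates)
```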

It does much better on the lower numbers in the 970-1025 range, although not always:

When the correct answer (X axis) is high (e.g. 50, for 1019), GPT-4.1 never says it. When the correct answer is low (e.g. 10, for 979), it often assigns close to 100% probability to the correct answer, although there are exceptions, such as 974, where the correct answer is 5 and it says 4 with 99.99% probability.

But it’s also never very far off. The most likely answer is usually the correct answer ±1, ±2, or ±3:

The most likely answer vs the correct answer. For example, for x=56 we get 59.

The “most likely answers” in the plot above usually have probability > 80%. In some cases we have two adjacent answers with similar probabilities, for example for 1018 we see 60% on 52 and 40% on 53. I also tried “weighted mean of answers” and the plot looked very similar.
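
Both summaries are straightforward to compute from the per-prompt token distributions; a small helper, assuming the `candidates` dicts from the sketch above:

```python
def summarize(candidates: dict[str, float]) -> tuple[int, float]:
    """Most likely numeric answer and probability-weighted mean answer."""
    numeric = {int(t): p for t, p in candidates.items() if t.strip().isdigit()}
    total = sum(numeric.values())  # renormalize over numeric answers only
    most_likely = max(numeric, key=numeric.get)
    weighted_mean = sum(k * p for k, p in numeric.items()) / total
    return most_likely, weighted_mean
```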

Note: Neither 57 nor the range of numbers is cherry-picked. It’s just the first thing I tried.

Other models: For GPT-4o this looks similar, although there’s a spike in wrong answers around 1000. And here’s Claude-opus-4.5:

Claude-opus-4.5 with temperature 0. This looks similar to GPT-4.1, but for 1025 it says 1 instead of 59 (which is still wrong, but better, I guess?).

Discussion

Some random thoughts:

  • This is some alien reasoning.

    • If a human said 59, you would conclude “they don’t understand the meaning of the modulo operation”. But that’s not the case here: the model uses some algorithm that usually works well, but in some corner cases leads to totally nonsensical answers.

      • Sounds really bad from an AI safety point of view.

  • It sometimes feels like models just can’t solve a task, and then a newer generation does really well. Perhaps the previous ones were already pretty close (as in, “27 out of 28 necessary circuits work well enough”), but our evaluation methods couldn’t detect that?

  • Could be a fun mech interp project.

  • If you hear someone saying models are stochastic parrots, ask them if they think “X modulo 57 = 59” is popular in pretraining.