Naturally solved problems that are easy to verify but hard to compute

Nick Bostrom’s Simulation Argument hit the news recently when a physicist published a blog post about it:

No, we probably don’t live in a computer simulation

Some of the ensuing responses discussed the fidelity with which such a simulation would need to be run, in order to keep the population living within it guessing as to whether they were in a digital simulation, which is a topic that’s been discussed before on LessWrong:

comments on [Paper] On the ‘Simulation Argument’ and Selective Scepticism by Eliezer_Yudkowsky and nigerweiss

If a simulation can be not just run, but also loaded from previous saved states and then edited, it should be possible for the simulation’s Architect to start it running at low granularity, wait for some inhabitant to notice an anomaly, then rewind a little, use a more accurate but computationally intensive algorithm in the relevant parts of the inhabitant’s timecone, and edit the saved state to include that additional detail, before setting the simulation running again and waiting for the next anomaly.
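In code, that adaptive-fidelity loop might look something like the toy sketch below. Every name and mechanism in it is invented purely for illustration, since we obviously have no idea what an Architect’s tooling would actually look like:

```python
import random

class ToySimulation:
    """Toy stand-in for a simulated world; everything here is invented
    for illustration, not a claim about how a real simulation works."""

    def __init__(self, steps):
        self.steps = steps
        self.t = 0
        self.detail = {}   # ticks already patched with high-fidelity state

    def step(self):
        """Advance one tick at coarse granularity; occasionally an
        inhabitant notices a rounding artefact, reported as an anomaly."""
        self.t += 1
        if random.random() < 0.05 and self.t not in self.detail:
            return self.t                  # tick at which it was noticed
        return None

    def finished(self):
        return self.t >= self.steps


def architect_loop(sim, rewind=3):
    """Run coarsely; on an anomaly, rewind a little and edit the saved
    state with a more expensive, higher-fidelity recomputation."""
    while not sim.finished():
        anomaly = sim.step()
        if anomaly is not None:
            sim.t = max(0, anomaly - rewind)          # rewind a little
            for t in range(sim.t, anomaly + 1):
                sim.detail[t] = "high_fidelity"       # edit saved state
            # ...then the loop resumes coarse-grained stepping

random.seed(0)
sim = ToySimulation(steps=100)
architect_loop(sim)
print(f"patched {len(sim.detail)} ticks with extra detail")
```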

nigerweiss suggested:

construct a system with easy-to-verify but arbitrarily-hard-to-compute behavior (“Project: Piss Off God”), and then scrupulously observe its behavior. Then we could keep making it more expensive until we got to a system that really shouldn’t be practically computable in our universe.

but I’m wondering how easy that would be.
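As a purely digital warm-up (which fails the “physical” requirement I get to below), integer factoring shows the asymmetry nigerweiss is after: verifying a claimed factorization takes one multiplication, while finding the factors by trial division takes on the order of √n steps. A minimal sketch:

```python
def verify_factorization(n, p, q):
    """Cheap: one multiplication."""
    return p * q == n and p > 1 and q > 1

def factor_by_trial_division(n):
    """Expensive: time grows roughly with sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

n = 1000000007 * 1000000009          # product of two large primes
assert verify_factorization(n, 1000000007, 1000000009)   # instant
print(factor_by_trial_division(91))  # small case: (7, 13)
# calling factor_by_trial_division(n) would take ~10^9 steps
```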

The problem would need to be physical (for example, make a net with labelled strands of differing lengths joining the nodes, then hang it from one corner); otherwise humanity would have to do as much work as the simulation.

The solution should be discrete (for example, what are the labels on the strands making up the limiting path that prevents the lowest point from hanging further down?).
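If I’ve understood the net correctly, the taut chain is the minimum-total-length path through the strand graph from the anchor corner to the lowest node, so the digital check amounts to a shortest-path computation. A minimal sketch, with the graph encoding and all names being my own invention:

```python
import heapq

def limiting_path(strands, anchor, lowest):
    """Dijkstra: find the labelled strands on the minimum-total-length
    path from the anchor corner to the lowest node, i.e. the taut chain
    that stops the net hanging any further down."""
    graph = {}
    for label, (a, b, length) in strands.items():
        graph.setdefault(a, []).append((b, length, label))
        graph.setdefault(b, []).append((a, length, label))

    dist = {anchor: 0.0}
    via = {}                                  # node -> (previous node, label)
    queue = [(0.0, anchor)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                          # stale queue entry
        for nxt, length, label in graph.get(node, []):
            nd = d + length
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                via[nxt] = (node, label)
                heapq.heappush(queue, (nd, nxt))

    labels, node = [], lowest                 # walk back up the taut chain
    while node != anchor:
        node, label = via[node]
        labels.append(label)
    return list(reversed(labels))

# Toy net: strand label -> (node, node, length)
strands = {"s1": ("corner", "a", 1.0), "s2": ("corner", "b", 2.0),
           "s3": ("a", "low", 2.5), "s4": ("b", "low", 1.0)}
print(limiting_path(strands, "corner", "low"))   # ['s2', 's4']
```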

The solution should be hard to obtain not just analytically, but also via numerical analysis.

The problem should be scalable to very large sizes (so, for example, the net problem wouldn’t work, because with large nets, making the strands sufficiently different in length that you could tell two close solutions apart would become the limiting factor).

And, ideally, the problem would be one that occurs (and is solved) naturally, such that humanity could just record data in multiple locations over a period of years, then later decide which examples of the problem to verify. (See this paper by Scott Aaronson: “NP-complete Problems and Physical Reality”.)

Any thoughts?