Can We Place Trust in Post-AGI Forecasting Evaluations?

Think: “a prediction market where most questions are evaluated shortly after an AGI is developed.” We could probably answer hard questions much more easily post-AGI, so delaying their evaluation would have significant benefits.


Imagine that select pre-AGI legal contracts stay valid post-AGI. Then a lot of things are possible.

There are certainly several different scenarios for economic and political continuity post-AGI, but I believe there is at least a legitimate chance (>20%) that legal contracts will remain valid for what seems like a significant time (>2 human-experiential years).

If these contracts stay valid, then we could set up contracts now to ensure that prediction evaluations and prizes happen later.

This could be quite interesting, because post-AGI evaluations could be a whole lot better than pre-AGI evaluations. They should be less expensive and possibly far more accurate.

One of the primary expenses of current forecasting setups is specifying and executing evaluations. If these could be pushed off while keeping the questions relevant, that could be really useful.


What this could look like is something like a prediction tournament or prediction market where many of the questions will be evaluated post-AGI. Perhaps there would be a condition that the questions would only be evaluated if AGI happens within 30 years; in those cases, the evaluations would happen once a specific threshold is met.
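As a sketch, the resolution rule just described can be encoded as a simple state machine. Everything here (the class, the statuses, the 30-year cutoff) is a hypothetical illustration, not any existing platform's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeferredQuestion:
    """A market question whose evaluation is deferred until after AGI.
    Hypothetical illustration only, not a real platform's data model."""
    text: str
    agi_deadline_years: int = 30  # void if AGI takes longer than this

def resolution_status(q: DeferredQuestion,
                      years_until_agi: Optional[int]) -> str:
    """Return the question's status given when (if ever) AGI arrived."""
    if years_until_agi is None:
        return "open"   # no AGI yet: the question stays open
    if years_until_agi > q.agi_deadline_years:
        return "void"   # AGI arrived too late: trades are unwound
    return "awaiting-post-agi-evaluation"  # hand off to post-AGI evaluators

q = DeferredQuestion("What were the chances of AGI going well?")
print(resolution_status(q, None))  # open
print(resolution_status(q, 10))    # awaiting-post-agi-evaluation
```

The point of the "void" branch is that forecasters are only exposed to the question's subject matter, not to open-ended uncertainty about whether resolution ever occurs.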

If we expect a post-AGI world to allow for incredible reasoning and simulation abilities, we can assume that it could produce incredibly impressive evaluations.

Some example questions:

  • To what degree is each currently-known philosophical system accurate?

  • What was the expected value of Effective Altruist activity Y, based on the information available at the time to a specific set of humans?

  • How much value has each academic field created, according to a specific philosophical system?

  • What would the GDP of the U.S. have been in 2030, conditional on it adopting policy X in 2022?

  • What were the chances of AGI going well, based on the information available at the time to a specific set of humans?


My guess is that many people would find this quite counterintuitive. Forecasting systems are already weird enough.

There’s a lot of uncertainty around the value systems and epistemic states of authoritative agencies post-AGI. Perhaps they would be so incredibly different from us that any answers they could give us would seem arcane and useless. Similar to how it may become dangerous to extrapolate one’s volition “too far,” it may also be dangerous to be “too smart” when making evaluations defined by less intelligent beings.

That said, the really important thing isn’t how the evaluations will actually happen, but rather what forecasters will think of them. Whatever evaluation system motivates forecasters to be as accurate and useful as possible (while minimizing cost) is the one to strive for.
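One standard way to make “motivates forecasters to be accurate” concrete is a strictly proper scoring rule such as the logarithmic score, under which a forecaster’s expected reward is maximized only by reporting their true probability. A minimal sketch (how the post-AGI evaluator would actually settle outcomes is, of course, the open assumption):

```python
import math

def log_score(report: float, outcome: bool) -> float:
    """Logarithmic scoring rule: the reward is the log of the
    probability assigned to the outcome that actually occurred."""
    prob = report if outcome else 1.0 - report
    return math.log(prob)

def expected_score(report: float, belief: float) -> float:
    """Expected score of reporting `report` when the forecaster's
    true probability is `belief`."""
    return (belief * log_score(report, True)
            + (1.0 - belief) * log_score(report, False))

# A forecaster who truly believes P(event) = 0.8 expects a higher
# score from reporting 0.8 than from shading in either direction.
print(expected_score(0.8, 0.8) > expected_score(0.6, 0.8))   # True
print(expected_score(0.8, 0.8) > expected_score(0.95, 0.8))  # True
```

Since the score depends only on the reported probability and the eventual resolution, the same incentive property carries over however far in the future (or post-AGI) the resolution happens, as long as forecasters trust that it will happen.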

My guess is that it’s worth trying out, at least in a minor capacity. There should, of course, be related forecasts for things like, “In 2025, will it be obvious that post-AGI forecasts are a terrible idea?”

Questions for Others

This all leaves a lot of questions open. Here are a few specific ones that come to mind:

  • What kinds of legal structures could be most useful for post-AGI evaluations?

  • What, in general, would people think of post-AGI evaluations? Could any prediction community take them seriously and use them for additional accuracy?

  • What kinds of questions would people want to see forecasted, if we could have post-AGI evaluations?

  • What other factors would make this a good or bad thing to try out?