The concept of evidence as humanity currently uses it is a bit of a crutch.

Just a thought I had today. I'm sure that it's trivial to the extent that it's correct, but it's a slow work day and I've been lurking here for too long.

Superintelligent AI (or other post-human intelligence) is unlikely to use the concept of "evidence" in the same way we do. It's very hard for neural networks (including human brains) to explain what they "know". The human brain is a set of information-gathering tools plugged into various levels of pattern-recognition systems. When we say we know something, that's an entirely intuitive process. There's no manual tallying going on; the tallying is happening deep in our subconscious, before System 1 thinking even starts.

The idea of scientific thinking and evidence is not about gathering more information; it's about throwing out all the rest of the information we've gathered. It's saying "I will rely on only these controlled variables to come to a conclusion, because I think that's more trustworthy than my intuition." And that's because our intuitions are optimized for winning tribal social dynamics and escaping tigers.

In fact, it's so hard for neural networks to explain why they know what they know that one of the things that's been suggested is a sub-neural network with read access to the top network, optimized only for explaining the top network's reasoning to humans.
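To make that idea a bit more concrete, here's a minimal sketch of what such a setup could look like. Everything in it is my own illustrative assumption (the names, the sizes, the idea of training the explainer against a fixed vocabulary of human-readable "reason" labels), not a description of any published design:

```python
# A small "explainer" head with read-only access to the main network's hidden
# activations, trained separately to map those activations onto human-legible
# reason tags. All names and sizes here are invented for illustration.
import torch
import torch.nn as nn

class MainNet(nn.Module):
    """The primary classifier; it is trained only on the task itself."""
    def __init__(self, n_features=32, n_hidden=64, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.head = nn.Linear(n_hidden, n_classes)

    def forward(self, x):
        h = self.body(x)          # hidden activations the explainer will read
        return self.head(h), h

class ExplainerHead(nn.Module):
    """Reads the main net's activations and predicts human-readable reason tags."""
    def __init__(self, n_hidden=64, n_reasons=8):
        super().__init__()
        self.net = nn.Linear(n_hidden, n_reasons)

    def forward(self, h):
        # detach(): read access only -- training the explainer never alters the main net
        return self.net(h.detach())

main, explainer = MainNet(), ExplainerHead()
x = torch.randn(4, 32)                  # a dummy batch
logits, hidden = main(x)
reason_logits = explainer(hidden)       # would be trained against human-provided reason labels
print(reason_logits.shape)              # torch.Size([4, 8])
```

The obvious catch is that the explainer still needs a supervision signal, i.e. humans labelling what counts as an acceptable reason, which is part of why this is hard in the first place.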

The nature of reality is such that diseases are diagnosable (or will be very soon) by neural networks with the help of a ton of uninteresting, uncompelling micro-bits of evidence, such as "people wearing this color shirt / having this eye color / of this age-gender-race combination have a slightly higher prior of having these diseases". These things, while true in a statistical sense, don't make a compelling narrative that you could encode as Solid Diagnostic Rules (to say nothing of how one could game the system if they were encoded that way).
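To see why those boring statistical nudges still matter in aggregate, here's a toy calculation. All the numbers are invented, and it naively assumes the features are independent, but it shows the shape of the thing: each micro-bit of evidence shifts the log-odds only a little, and thirty of them together move a 1-in-1000 prior to better than 70%.

```python
# Toy illustration (made-up numbers) of many weak bits of evidence adding up.
# Each feature nudges the log-odds slightly; the sum moves the probability a lot,
# even though no single nudge would make a convincing diagnostic "rule".
import math

def posterior(prior, likelihood_ratios):
    """Naive-Bayes-style update: multiply the prior odds by each likelihood ratio."""
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

# 1-in-1000 base rate, plus thirty weak, boring features that each make the
# disease only 1.3x more likely (shirt color, eye color, demographics, ...).
print(posterior(0.001, [1.3] * 30))   # ~0.72 -- weak evidence, but lots of it
```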

As an example, OpenAI Five is able to outperform top humans at Dota 2, but the programmers have no idea "why". They make statements like "we had OpenAI Five run a probability analysis based only on the starting hero-selection screen, and it gave itself a 96% chance of winning, so it evidently thinks this composition is very strong." And the actual reason doesn't boil down to human-compatible narratives like "well, they've got a lot of poke and they match up well in lane", which is close to the limit of narrative complexity that the human concept of "evidence" can support.