Thanks for the response. I hadn’t heard of SIA before. After a bit of searching, I’m guessing you’re referring to the Self-Indication Assumption(?).
SIA, intuitions about it:
Looks like there’s a lot of stuff to read under SIA (+ SSA).
My current impression is that SIA is indeed confused (using a confused ontology/Map). But given how little I know of SIA, I’m not super confident in that assessment (maybe I’m just misunderstanding what people mean by SIA).
Maybe if I find the time, I’ll read up on SIA, and write a post about why/how I think it’s confused. (I’m currently guessing it’d come down to almost the same things I’d write in the long version of this post—about how people end up with confused intuitions about nonexistent sampling processes inserting nonexistent “I/me” ghosts into some brains but not others.)
If you could share links/pointers to the “strong intuitions / arguments many people have for SIA” you mentioned, I’d be curious to take a look at them.
Bets and paradoxes:
I don’t understand what you mean by {running into paradoxes if I insist the probability is 50/50 and each agent is given a 1:3 odds bet}. If we’re talking about the bet as described in Eliezer’s original post, then the (a priori) expected utility of accepting the bet would be 0.5*(18 − 2*3) + 0.5*(2 − 18*3) = −20, so I would not want to accept that bet, either before or after seeing green, no? I’m guessing you’re referring to some different bet. Could you describe in more detail what bet you had in mind, or how a paradox arises?
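(For concreteness, here’s the arithmetic I’m doing, as a tiny sketch. I’m assuming the setup from Eliezer’s post: a fair coin, heads → 18 green rooms and 2 red rooms, tails → 2 green and 18 red, and a bet that pays +$1 per green-roomer and costs $3 per red-roomer. The function name is just my own shorthand.)

```python
# A priori expected utility of the bet, under my reading of the setup:
# heads -> 18 green rooms / 2 red rooms; tails -> 2 green / 18 red.
# The bet pays +$1 per green-roomer and costs $3 per red-roomer.
def bet_value(greens, reds, win=1, loss=3):
    return greens * win - reds * loss

expected_utility = 0.5 * bet_value(18, 2) + 0.5 * bet_value(2, 18)
print(expected_utility)  # -20.0
```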
The space of all possible algorithms one could run on three-digit-addition-strings like “218+375” seems rather vast. Could it be that what GPT3 is doing is something like
generating a large bunch of candidate algorithms, and
estimating the likelihoods of those algorithms given the examples, and
doing something like a noisy/weak Bayesian update, and
executing one of the higher-posterior algorithms, or some “fuzzy combination” of them?
Obviously this is just wild, vague speculation; but to me it intuitively seems like it would at least sort of answer your question. What do you think? (Could GPT3 be doing something like the above?)
(To a human, it might feel like [the actual algorithm for addition] is a glaringly obvious candidate. But, on something like a noisy simplicity prior over all possible string-manipulations-algorithms, [the actual algorithm for addition] maybe starts looking like just one of the more conspicuous needles in a haystack?)
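(To make the speculation slightly less vague, here’s a toy sketch of the candidate-weighing story: a handful of candidate string→string algorithms, a crude “description length” prior that penalizes the more complex true-addition candidate, and a noisy likelihood over example pairs. All the candidates, prior weights, and the noise level are hypothetical illustrations, not a claim about what GPT3 actually does.)

```python
import math

# Hypothetical candidate algorithms for strings like "218+375".
candidates = {
    "true_addition": lambda s: str(sum(int(x) for x in s.split("+"))),
    "concatenate":   lambda s: s.replace("+", ""),
    "first_operand": lambda s: s.split("+")[0],
}

# Crude simplicity prior (log-weights): true addition is "longer to
# describe" than the trivial string manipulations, so it starts behind.
log_prior = {"true_addition": -3.0, "concatenate": -1.0, "first_operand": -1.0}

examples = [("218+375", "593"), ("12+30", "42"), ("400+5", "405")]

def log_posterior(name, fn, noise=0.05):
    # Noisy/weak update: even a matching algorithm "fails" each
    # example with probability `noise`, so evidence is discounted.
    lp = log_prior[name]
    for inp, out in examples:
        lp += math.log((1 - noise) if fn(inp) == out else noise)
    return lp

scores = {name: log_posterior(name, fn) for name, fn in candidates.items()}
best = max(scores, key=scores.get)
print(best)  # "true_addition" overtakes its lower prior after a few examples
```

Even a small handful of examples swamps the prior here, which is the intuition behind the “conspicuous needle” framing: the true algorithm starts out as just one candidate among many, but it’s the one the likelihood keeps rewarding.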