## Two Scenarios

Alice must answer the multiple-choice question, “What color is the ball?” The two choices are “Red” and “Blue.” Alice has no relevant memories of The Ball other than knowing it exists. She cannot see The Ball or interact with it in any way; she can do nothing but think until she answers the question.

In an independent scenario, Bob faces the same question, but Bob has two memories of The Ball. In one memory, The Ball is red; in the other, The Ball is blue. There are no “timestamps” associated with the memories and no way of determining whether one came before the other. Bob just has two memories, and he somehow knows the memories are of the same ball.

If you were Alice, what would you do?

If you were Bob, what would you do?

## Variations

More questions to ponder:

• Should they do anything at all?

• Should Alice and Bob act differently?

• If Alice and Bob could circle more than one color, should they?

• Would either answer change if the option “Green” were added to the choice list?

• If the question were fill-in-the-blank, what should they write?

• If Bob’s memories were of different balls but he didn’t know which ball was The Ball, should his actions change?

• If Alice and Bob could coordinate, should it affect their answers?

## Further Discussion

The basic question I was initially pondering was how to resolve conflicting sensory inputs. If I were a brain in a vat and I received two simultaneous sensory inputs that conflicted (such as the color of a ball), how should I process them?

Another related topic is whether a brain in a vat with absolutely no sensory inputs should be considered intelligent. These two questions were reduced to the two scenarios above, and I am asking for help in resolving them. I think they are similar to questions asked here before, but their relation to these two brain-in-a-vat questions seemed relevant to me.

## Realistic Scenarios

These scenarios are cute, but there are similar real-world examples. If you were asked whether a visible ball was red or green and you happened to be unable to distinguish between red and green, how would you interpret what you see?

Abstracting a bit, any input (sensory or otherwise) that is indistinguishable from another input can really muck with your head. Most optical illusions are tricks on eye-hardware (software?).

This post is not intended to be clever or teach anything new. Rather, the topic confuses me and I am seeking to learn the correct behavior. Am I missing some form of global input theory that helps resolve colliding inputs or missing data? When the data is inadequate, what should I do? Start guessing randomly?

• This post seems made to order for applying recently acquired knowledge. If I come across as pedantic, please attribute that to learner’s thrill. From Probability Theory:

“Seeing is inference from incomplete information.” -- E.T. Jaynes

Your usual sensory information is inadequate data. You’re dealing with that every day. This seems a good starting point to generalize from; brains in vats seem like overkill as an approach to the question.

Alice and Bob are faced with a scenario of decision under uncertainty. Probability theory and decision theory are normative frameworks that apply there. All the information you’ve given is symmetrical, favoring neither choice over the other.

• Should Alice or Bob do anything at all? That depends on the consequences to them of guessing one way or the other, or not guessing at all. If the outcomes are equally good (or equally bad), guessing randomly is optimal.

• Should they act differently? There’s nothing in the information you’ve provided that seems to break the symmetry in uncertainty, so I’d say no.

• Should they circle more than one color? … And other variants—you’ve given no reasons to prefer one outcome to another, so in general we can’t say how they should act.

• If Alice and Bob could coordinate? They would (as far as I can tell by assessing the information given) have no more definite information by pooling their knowledge than they have separately.
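A minimal sketch of this decision rule, with entirely made-up probabilities and payoffs (nothing here is specified by the scenario): when the probabilities are symmetric and the payoff treats both colors alike, every answer ties on expected utility, so a random pick among the ties is as good as anything else.

```python
import random

def best_action(actions, probs, utility):
    """Pick the action with the highest expected utility; break exact ties randomly."""
    def expected(action):
        return sum(p * utility(action, state) for state, p in probs.items())
    best = max(expected(a) for a in actions)
    ties = [a for a in actions if expected(a) == best]
    return random.choice(ties)

# Symmetric case: nothing favors either color, and the payoff treats them alike.
probs = {"Red": 0.5, "Blue": 0.5}                    # assumed uniform
score = lambda guess, truth: 1.0 if guess == truth else 0.0
choice = best_action(["Red", "Blue"], probs, score)  # either answer is optimal
```

With asymmetric payoffs the tie-set shrinks to a single action, and the randomness drops out on its own.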

• Very well put, Morendil. The decision one should make here depends on the consequences of erring one way or the other, and so there’s insufficient information. One quibble, though:

Your usual sensory information is inadequate data. You’re dealing with that every day. This seems a good starting point to generalize from

It’s true, but I don’t think there’s any such thing as “adequate data” to compare to. In a sense, all data is going to be inadequate. David MacKay’s cardinal rule of information theory is, “To make inferences, you have to make assumptions.” No matter how much data you get, it’s going to be building on a prior. The data must be interpreted in light of the prior.

Human cognition has been refined over evolutionary history to start from very good priors, which allow very accurate inferences from minimal data; you have to go out of your way to find the places where the priors point it in the wrong direction, such as optical illusions.

• I wouldn’t call it a quibble: I agree. There is a lovely tension between the idea that all perception, not just seeing, is “inference from incomplete information,” and the peripatetic axiom, “nothing is in the intellect that was not first in the senses.”

The only way to have complete information is to be Laplace’s demon. No one else has truly “adequate data,” and all knowledge is in that sense uncertain; nevertheless, inference does work pretty well. (So well that it sure feels as if logic need not have been “first in the senses,” even though it is a form of knowledge and should therefore be to some extent uncertain… the epistemology, it burns us!)

• Your usual sensory information is inadequate data. You’re dealing with that every day. This seems a good starting point to generalize from; brains in vats seem like overkill to approach the question.

Agreed. Brains-in-vats was one of the original questions I was pondering, and the specific questions were narrowed down to goofy sensory data. Narrowing that down produced the two scenarios.

Should they act differently? There’s nothing in the information you’ve provided that seems to break the symmetry in uncertainty, so I’d say no.

What I find interesting is that Bob has more information than Alice but is stuck with the same problem. I found it counterintuitive that more information did not help suggest an action. Is it better to think of Bob as having no more information than Alice?

Adding a memory of Blue to Alice seems like adding information and provides a clear action. Additionally adding a memory of Red removes the clear action. Is this because there is now doubt in the previous information? Or… ?

Should they circle more than one color? … And other variants—you’ve given no reasons to prefer one outcome to another, so in general we can’t say how they should act.

Why wouldn’t Bob circle both Red and Blue if given the option?

• What I find interesting is that Bob has more information than Alice but is stuck with the same problem.

Yes, it seems that Bob has more information than Alice.

This is perhaps a good context to consider the supposed DIKW hierarchy: data < information < knowledge < wisdom. Or the related observation from Bateson that information is “a difference that makes a difference.”

We can say that Bob has more data than Alice, but since this data has no effect on how Bob may weigh his choices, it’s a difference that makes no difference.

Is this because there is now doubt in the previous information?

“Doubt” is data, too (or what Jaynes would call “prior information”). Give Alice a memory of a blue ball, but at the same time give her an (unspecific) reason to doubt her senses, so that she reasons, “I recall a blue ball, but I don’t want to take that into account.” This has the same effect as giving Bob conflicting memories.
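This point can be made concrete with a tiny Bayesian sketch (the reliability figure `r` is an invented assumption, not anything from the scenario): one blue memory shifts the posterior toward blue, while a second, conflicting red memory cancels it and restores the 50/50 symmetry.

```python
def posterior_blue(memories, r=0.8):
    """P(ball is blue | memories), assuming each memory independently reports
    the true color with probability r, with a uniform prior over {red, blue}."""
    like_blue = 1.0  # likelihood of the memories if the ball is blue
    like_red = 1.0   # likelihood of the memories if the ball is red
    for m in memories:
        like_blue *= r if m == "blue" else 1 - r
        like_red *= r if m == "red" else 1 - r
    return like_blue / (like_blue + like_red)

posterior_blue([])               # Alice: no memories -> 0.5
posterior_blue(["blue"])         # one blue memory   -> 0.8
posterior_blue(["blue", "red"])  # Bob's conflict    -> back to 0.5
```

Note that the conflicting memories cancel exactly only because both are assumed equally reliable; any asymmetry in `r` between the two memories would break the tie.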

• We can say that Bob has more data than Alice, but since this data has no effect on how Bob may weigh his choices, it’s a difference that makes no difference.

Okay, that makes sense to me.

Give Alice a memory of a blue ball, but at the same time give her a reason (unspecific) to doubt her senses, so that she reasons “I recall a blue ball, but I don’t want to take that into account.” This has the same effect as giving Bob conflicting memories.

Ah, okay, that makes a piece of the puzzle click into place.

In DIKW terms, what happens when we add Blue to Alice? When we later add Red? My hunch is that the label on the data simply changes as the set of data becomes useful or useless.

Also, would anything change if we added “Green” to Bob’s choice list? My guess is that it would, because Bob’s memories of Red and Blue are useful when asking about Green. Specifically, there is no memory of Green, and there are memories of Red and Blue.

Interesting.

• What I find interesting is that Bob has more information than Alice but is stuck with the same problem. I found it counterintuitive that more information did not help suggest an action. Is it better to think of Bob as having no more information than Alice?

The way you’ve set the question up, Bob doesn’t have any more relevant or useful information than Alice. They are both faced with only two apparently mutually exclusive options (red or blue), and you have not provided any information about how the test is scored or why either should have any reason to prefer answering it over not answering it. Since Bob has two logically inconsistent memories, he does not actually have any more relevant information than Alice, so there should not be anything counterintuitive about the fact that the information doesn’t change his probabilities.

Adding a memory of Blue to Alice seems like adding information and provides a clear action. Additionally adding a memory of Red removes the clear action. Is this because there is now doubt in the previous information? Or… ?

There’s other information implicit in the decision that you are not accounting for. Alice has a set of background beliefs and assumptions, one of which is probably that her memory generally correlates with true facts about external reality. On discovering logical inconsistencies in her memory, she has to revise her beliefs about the reliability of her memory and change how she weights remembered facts as evidence. You can’t just ignore the implicit background knowledge that provides the context for the agents’ decision making when considering how they update in the light of new evidence.

Why wouldn’t Bob circle both Red and Blue if given the option?

You haven’t given enough context for anyone to answer this question. When confronted with the multiple-choice question, Bob may come up with a theory about what the existence of this question implies. If he hasn’t been given any specific reason to believe there are particular rules applied to the scoring of his answer, then he will have to fall back on his background knowledge about what kinds of agents might set him such a question and what their motivations and agendas might be. That will play into his decision about how to act.

• Jaynes argues that when you have symmetry in a discrete problem, such that switching all the labels leaves the problem the same, you must assign equal probabilities to the available choices. (See page 34 of this.) This covers all of your scenarios except the one where Bob has the option of choosing Green, a Ball color that he does not recall, and the fill-in-the-blank scenario.

• So then you just randomly pick between Red and Blue? What should you do if the question is fill-in-the-blank instead of multiple choice?

• The argument only speaks to probabilities, not actions. To choose what to pick, you need utilities. For example, if being right about the color has the same utility regardless of color, but it’s worse to guess wrong if the ball is red, then you’d want to pick red even if your probabilities are equal between the two alternatives.

The fill-in-the-blank problem is above my pay grade. ;-)
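The numbers below are an invented payoff table, purely to illustrate the point: with 50/50 probabilities but an extra penalty for being wrong when the ball is red, the expected utilities break the tie in favor of red.

```python
# Hypothetical payoffs: a correct guess is worth 1 either way, but guessing
# blue when the ball is actually red carries an extra penalty.
PAYOFF = {  # (guess, truth) -> utility
    ("red", "red"): 1.0, ("blue", "blue"): 1.0,
    ("red", "blue"): 0.0,
    ("blue", "red"): -2.0,  # worse to be wrong when the ball is red
}

def expected_utility(guess, p_red=0.5):
    return p_red * PAYOFF[(guess, "red")] + (1 - p_red) * PAYOFF[(guess, "blue")]

expected_utility("red")   # 0.5*1.0  + 0.5*0.0 =  0.5
expected_utility("blue")  # 0.5*-2.0 + 0.5*1.0 = -0.5, so pick red at 50/50
```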

• Okay, “utilities” makes sense. That may have been the term I was missing.

The basic goal in all of this is preventing a system crash when there are two equal ways to move forward. Acting randomly isn’t bad and is what I would have expected people to answer. What I was looking for is how to refine “acting randomly” after the system is modified. “Utilities” sounds right to me.

And as a major disclaimer, I understand this is probably very basic to most of you (plural, as in the community). I just don’t want to start with the wrong building blocks.

• There’s a well-known example in philosophy called Buridan’s Ass—a donkey is placed at the exact midpoint between two bales of hay, and being unable to choose between them (because they are identical), it starves to death. Somewhat amusingly, but also unfortunately, digital electronics can run into a similar problem known as metastability: a circuit can get stuck at a voltage roughly at the midpoint between those assigned to logic level 0 and logic level 1.

Oddly, adding an “if it’s hard to decide, choose randomly” circuit doesn’t help; it just creates another ambiguous situation at the borders of the voltage range you designate as “hard to decide.”
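A toy comparator (the threshold voltages are made-up numbers, not from any logic-family spec) shows why the fix fails: carving out an “undecided” band only relocates the hard decision to the band’s edges, which are themselves sharp thresholds.

```python
def logic_level(voltage, low=0.8, high=2.0):
    """Classify a voltage, treating the band between `low` and `high`
    as 'hard to decide' (thresholds here are illustrative only)."""
    if voltage < low:
        return 0
    if voltage > high:
        return 1
    return "undecided"

# The comparisons against `low` and `high` are themselves sharp thresholds,
# so an input hovering right at 0.8 V or 2.0 V recreates the original
# ambiguity one level down -- the regress never terminates.
logic_level(0.3)  # 0
logic_level(1.4)  # 'undecided'
logic_level(3.0)  # 1
```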

• From the LessWrong wiki: “I don’t know”

If we don’t know anything about which is more likely, but there are only two options, then I think you’re left to just assign a 50% chance to each. Here, the characters are prompted for a discrete action, so both guesses are the same.

And they have to do something, because even refusing to circle an answer is a course of action. It’s just that in this case we don’t have any reason to be very confident in any specific choice.

• EY’s “I don’t know” is an interesting way of treating open-ended scenarios. Does it apply to “pick Red or Green”? This isn’t strictly what you linked to, I suppose, so that may not be relevant to what you were trying to say.

And they have to do something, because even refusing to circle an answer is a course of action. It’s just that in this case we don’t have any reason to be very confident in any specific choice.

So, when asking for an action, wouldn’t “do nothing” be included in the choices? In other words, the three options are “Pick Red,” “Pick Green,” “Do nothing,” and Alice and Bob choose randomly from those three?

• This post and the subsequent discussion seem relevant.

• And it had a sequel.

• Okay, yeah, that post was much more in the vein of this one. Thanks for the link. Now I get to sift through the comments. :)

• Yeah, I remember that post and almost linked it but decided not to. I don’t remember the sequel… so I’ll go read that.

I remember getting hopelessly lost in the comments and never finding an actual resolution.

Of note, this post really doesn’t care about probabilities and went out of its way to make things symmetrical. That isn’t the point. I want to know how to act. When faced with an impossible problem, what do I do?

• Most optical illusions are tricks on eye-hardware.

No, almost all are software hacks.

• Is an ASIC hardware or software?

• Hmm, I edited the post. Thanks.