# AK

Karma: 38
• 16 Jun 2018 4:50 UTC
5 points
In this example, he told you that you were not in one of the places you're not in (the Vulcan Desert). If he always does this, then the probability is 1/4; if you had been in the Vulcan Desert, he would have told you that you were not in one of the other three.

That can't be right—if the probability of being in the Vulcan Mountain is 1/4 and the probability of being in the Vulcan Desert (per the guard) is 0, then the probability of being on Earth would have to be 3/4.
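The update can be checked with quick exact arithmetic. This is a sketch under assumed details the thread doesn't spell out: four equally likely starting locations (with placeholder names "Earth A" and "Earth B" for the two Earth locations), and a guard who says "you are not in the Vulcan Desert" whenever that is true.

```python
from fractions import Fraction

# Assumed setup: four equally likely locations; the guard announces
# "you are not in the Vulcan Desert" whenever that is true.
locations = ["Earth A", "Earth B", "Vulcan Mountain", "Vulcan Desert"]
prior = {loc: Fraction(1, 4) for loc in locations}

# Likelihood of hearing "not Vulcan Desert" given each true location.
likelihood = {loc: (0 if loc == "Vulcan Desert" else 1) for loc in locations}

evidence = sum(prior[l] * likelihood[l] for l in locations)  # 3/4
posterior = {l: prior[l] * likelihood[l] / evidence for l in locations}

print(posterior["Vulcan Mountain"])  # -> 1/3
```

Under this protocol each of the three remaining locations ends up at 1/3, and the posteriors sum to 1—consistent with the objection that 1/4 plus 0 cannot leave the rest adding up correctly.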

• I’m not sure about the first case:

if you don't have a VNM utility function, you risk being mugged by wandering Bayesians

I don't see why this is true. While "VNM utility function ⇒ safe from wandering Bayesians", it's not clear to me that "no VNM utility function ⇒ vulnerable to wandering Bayesians." I think the vulnerability to wandering Bayesians comes from failing to satisfy Transitivity rather than failing to satisfy Completeness. I have not done the math on that.
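For what it's worth, the standard money-pump argument does run through intransitivity specifically. A minimal toy sketch (my own assumed setup, not from the post): an agent with cyclic strict preferences A ≻ B ≻ C ≻ A pays a small fee for each "upgrade", so a trader cycling its holdings drains it indefinitely.

```python
# Cyclic (hence intransitive) strict preferences: A > B > C > A.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def will_trade(current, offered):
    # The agent pays a fee whenever it strictly prefers the offered item.
    return (offered, current) in prefers

holding, money, fee = "A", 10.0, 1.0
for offered in ["C", "B", "A", "C", "B", "A"]:  # two full cycles
    if will_trade(holding, offered):
        holding, money = offered, money - fee

print(holding, money)  # -> A 4.0: back where it started, six fees poorer
```

An agent that merely fails Completeness (some pairs incomparable, so it declines those trades) never enters such a cycle, which is the intuition behind suspecting Transitivity is the load-bearing axiom here.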

But the general point, about approximation, I like. Utility functions in game theory (decision theory?) problems normally involve only a small space. I think completeness is an entirely safe assumption when talking about humans deciding which route to take to their destination, or what bets to make in a specified game. My question comes from the use of VNM utility in AI papers like this one: http://intelligence.org/files/FormalizingConvergentGoals.pdf, where agents have a utility function over possible states of the universe (with the restriction that the space is finite).

Is the assumption that an AGI reasoning about universe-states has a utility function an example of reasonable use, for you?

• Thanks for this response. On notation: I want world-states, $A$, to be specific outcomes rather than random variables. As such, $u(A)$ is a real number, and the expectation of a real number could only be defined as itself: $\mathbb{E}[u(A)] = u(A)$ in all cases. I left aside all the discussion of 'lotteries' in the VNM Wikipedia article, though maybe I ought not have done so.

I think your first two bullet points are wrong. We can't reasonably interpret ~ as 'the agent's thinking doesn't terminate'. ~ refers to indifference between two options, so if $u$ represents the preferences and $A$ ~ $B$, then $u(A) = u(B)$. Equating 'unable to decide between two options' and 'two options are equally preferable' will lead to a contradiction or a trivial case when combined with transitivity. I can cook up something more explicit if you'd like?

There's a similar problem with ~ meaning 'the agent chooses randomly', provided the random choice isn't prompted by equality of preferences.
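The contradiction claimed above can be checked mechanically in a hypothetical toy case (names and options are mine, purely for illustration): the agent strictly prefers A to C, but its deliberation never terminates when comparing A with B or B with C. Reading non-termination as ~ and closing ~ under symmetry and transitivity forces A ~ C, contradicting the strict preference.

```python
# "Can't decide" pairs, read as indifference (~), plus one strict preference.
indifferent = {("A", "B"), ("B", "C")}
strict = {("A", "C")}  # the agent clearly prefers A to C

# Close the indifference relation under symmetry and transitivity.
pairs = set(indifferent) | {(y, x) for x, y in indifferent}
changed = True
while changed:
    changed = False
    for (a, b) in list(pairs):
        for (c, d) in list(pairs):
            if b == c and (a, d) not in pairs:
                pairs.add((a, d))
                changed = True

# The closure now contains (A, C): indifference between A and C,
# directly contradicting the strict preference A > C.
print(("A", "C") in pairs)  # -> True
```

Avoiding the contradiction forces the strict preference to collapse (the trivial case), which is the dichotomy asserted above.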

This comment has sharpened my thinking, and it would be good for me to directly prove my claims above—will edit if I get there.

# Why Universal Comparability of Utility?

13 May 2018 0:10 UTC
27 points