The act-evaluating function is just a particular computation which, for the agent, constitutes the essence of rightness.
This sounds almost like saying that the agent is running its own algorithm because running this particular algorithm constitutes the essence of rightness. This perspective doesn’t improve our understanding of the process of decision-making; it just wraps the whole agent in an opaque box and labels it an officially approved way to compute. The “rightness” and “actual world” properties you ascribe to this opaque box don’t seem to be actually present.
They aren’t present as part of what we must know to predict the agent’s actions. They are part of a “stance” (like Dennett’s intentional stance) that we can use to give a narrative framework within which to understand the agent’s motivation. What you are calling a black box isn’t supposed to be part of the “view” at all. Instead of a black box, there is a socket where a particular program vector and “preference vector”, together with the UDT formalism, can be plugged in.
ETA: The reference to a “preference vector” was a misreading of Wei Dai’s post on my part. What I (should have) meant was the utility function U over world-evolution vectors.
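To make the “socket” picture concrete, here is a minimal sketch, assuming Wei Dai’s formulation in which the agent amounts to an input-output mapping chosen to maximize a utility function U over the vector of execution histories of the world programs. The names here (udt_socket, world_programs) are mine, for illustration only:

    from itertools import product

    def udt_socket(world_programs, inputs, outputs, U):
        """Plug a vector of world programs and a utility function U into
        the fixed UDT formalism; return the best input-output mapping.
        Everything here is an illustrative stand-in, not Wei Dai's code."""
        best_mapping, best_value = None, float("-inf")
        # Enumerate every candidate mapping from inputs to outputs.
        for assignment in product(outputs, repeat=len(inputs)):
            mapping = dict(zip(inputs, assignment))
            # Run each world program with the candidate mapping wired in,
            # yielding one execution history per program.
            histories = tuple(P(mapping) for P in world_programs)
            value = U(histories)
            if value > best_value:
                best_mapping, best_value = mapping, value
        return best_mapping

Nothing inside the loop mentions “rightness” or “the actual world”; those labels belong to the stance we take toward the whole computation, not to anything plugged into the socket.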
I don’t understand this.
Edited
Previously, I attempted to disagree with this comment. My disagreement was tersely dismissed, and, when I protested, my protests were strongly downvoted. This suggests two possibilities:
(1) I fail to understand this topic in ways that I fail to understand, or
(2) I lack the status in this community for my disagreement with Vladimir_Nesov on this topic to be welcomed or taken seriously.
If I were certain that the problem were (2), then I would continue to press my point, and the karma loss be damned. However, I am still uncertain about what the problem is, and so I am deleting all my posts on the thread underneath this comment.
One commenter suggested that I was being combative myself; he may be right. If so, I apologize for my tone.
Saying that this decision is “right” has no explanatory power, gives no guidelines on the design of decision-making algorithms.
Nowhere am I purporting to give guidelines for the design of a decision-making algorithm. As I said, I am not suggesting any alteration of the UDT formalism. I was also explicit in the OP that there is no problem understanding, at an intuitive level, what the agent’s builders were thinking when they decided to use UDT.
If all you care about is designing an agent that you can set loose to harvest utility for you, then my post is not meant to be interesting to you.
Beliefs should pay rent, not fly in the ether, unattached to what they are supposed to be about.
The whole Eliezer quote is that beliefs should “pay rent in future anticipations”. Beliefs about which once-possible world is actual do this.
The beliefs in question are yours, and the anticipation is about the agent’s design or behavior.
The quote applies to humans; I use it as appropriately ported to more formal decision-making, where “anticipated experience” doesn’t generally make sense.
Wrong.
An algorithm may input data from all sources, internal and external. By contrast, an algorithm that only cares whether a decision is “right” can only input data from one source: an internal list of which decisions should be taken.
Thus, describing an algorithm as “concerned only with doing right” means that it will be updateless. Kant’s categorical imperative purports to be updateless in that Kant does not care what level of technology or population we have; according to Kant, a priori considerations about what it means to be human can and should fully determine our actions in all conceivable situations. JS Mill’s utilitarianism purports to deal with real-world consequences in that JS Mill cares a great deal about how things will turn out in practice and refuses to make predictions in advance about what kinds of things will be good for people to do. If I tell you that a decision algorithm Q is “concerned only with doing right,” you know that I might be talking about Kant but that I am definitely not talking about JS Mill. The description “concerned only with doing right” does real explanatory work.
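To make the contrast concrete, here is a toy sketch; the situations, the internal table, and the predict_welfare helper are all invented for illustration:

    # A fixed a priori table: the only data source the "Kantian"
    # algorithm ever consults. The entries are invented examples.
    RIGHT_DECISIONS = {
        "promise_made": "keep_promise",
        "asked_by_murderer": "tell_truth",
    }

    def kantian_decide(situation):
        # Input from exactly one source: the internal list of which
        # decisions should be taken. No observation can change the answer.
        return RIGHT_DECISIONS[situation]

    def millian_decide(situation, observations, predict_welfare):
        # Input from the world: the choice depends on observed
        # consequences, so the algorithm updates on external data.
        options = ("keep_promise", "break_promise", "tell_truth", "lie")
        return max(options,
                   key=lambda act: predict_welfare(situation, act, observations))

Knowing only that an algorithm is “concerned only with doing right” already tells you it has the first shape and not the second; that is the explanatory work the description does.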
The right thing for the increment algorithm is to output its parameter increased by one.
Yes, you’ve finally got it. I don’t understand why you’re downvoting me for explaining a concept that you had trouble with.
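For concreteness, here is the increment algorithm written out, with the “rightness” gloss confined to a comment (a sketch, not anyone’s proposed formalism):

    def increment(n):
        # Not updateless: the output depends on the input n. Yet we can
        # just as easily say it returns n + 1 "because that is the right
        # thing for an increment algorithm to do." The gloss adds nothing
        # to the computation itself.
        return n + 1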
I truly don’t understand, so if one of the six people silently downvoting me would be so kind as to offer a hint, I will gladly edit or delete, as appropriate.
I did not vote on any comments in this post. However, I believe the downvotes were because your tone sounds combative and supercilious, and you missed both Tyrrell’s point and Vladimir’s:
(1) Your description of Tyrrell’s theory makes it sound like it changes the UDT algorithm to a GLUT, while Tyrrell was just proposing a new interpretation of the same algorithm.
(2) Vladimir meant his comment about the increment algorithm to show by example that an algorithm which is not updateless can be interpreted as doing something because it’s right just as easily as an updateless algorithm can.
Neither of these would’ve been judged so harshly if you hadn’t phrased your replies like you were addressing a learning-disabled child instead of an intelligent AI researcher.