Gary_Drescher - LessWrong 2.0 viewer
https://www.greaterwrong.com/
Comment by Gary_Drescher on An approach to the Agent Simulates Predictor problem
https://www.greaterwrong.com/posts/5bd75cc58225bf067037514f/an-approach-to-the-agent-simulates-predictor-problem#5bd75cc58225bf0670375172
<p>For the simulation-output variant of ASP, let’s say the agent’s possible actions/outputs consist of all possible simulations Si (up to some specified length), concatenated with “one box” or “two boxes”. To prove that any given action has utility greater than zero, the agent must prove that the associated simulation of the predictor is correct. Where does your algorithm have an opportunity to commit to one-boxing before completing the simulation, if it’s not yet aware that any of its available actions has nonzero utility? (Or would that commitment require a further modification to the algorithm?)</p>
<p>For the simulation-as-key variant of ASP, what principle would instruct a (modified) UDT algorithm to redact some of the inferences it has already derived?</p>
Gary_Drescher, Fri, 22 Apr 2016 15:20:39 +0000

Comment by Gary_Drescher on An approach to the Agent Simulates Predictor problem
https://www.greaterwrong.com/posts/5bd75cc58225bf067037514f/an-approach-to-the-agent-simulates-predictor-problem#5bd75cc58225bf0670375169
<p>Suppose we amend ASP to require the agent to output a full simulation of the predictor before saying “one box” or “two boxes” (or else the agent gets no payoff at all). Would that defeat UDT variants that depend on stopping the agent before it overthinks the problem?</p>
<p>(Or instead of requiring the agent to output the simulation, we could use the entire simulation, in some canonical form, as a cryptographic key to unlock an encrypted description of the problem itself. Prior to decrypting the description, the agent doesn’t even know what the rules are; the agent is told in advance only that the decryption will reveal the rules.)</p>
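To make the simulation-as-key idea concrete, here is a minimal sketch (not from the thread; the transcript string, the toy XOR stream cipher, and all function names are illustrative stand-ins): the canonical simulation transcript is hashed into a key, and the problem description can only be decrypted by an agent that has completed the full simulation.

```python
import hashlib

def key_from_simulation(transcript: bytes) -> bytes:
    # Hash the canonical simulation transcript into a fixed-size key.
    return hashlib.sha256(transcript).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher: XOR with a repeating hash-derived keystream.
    # (Symmetric: applying it twice with the same key recovers the data.)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

rules = b"one-box pays $1M; two-boxing voids the payoff"        # placeholder rules
transcript = b"<canonical predictor-simulation transcript>"      # placeholder transcript

key = key_from_simulation(transcript)
encrypted = xor_crypt(rules, key)

# Only after producing the full transcript can the agent reconstruct the key:
decrypted = xor_crypt(encrypted, key_from_simulation(transcript))
```

The point of the construction is that the rules are informationally inaccessible until the simulation is done, so no decision procedure can act on them earlier.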
Gary_Drescher, Wed, 20 Apr 2016 16:36:09 +0000

Comment by Gary_Drescher on Open Thread, April 27-May 4, 2014
https://www.greaterwrong.com/posts/wR3xa3AgwRWFoat89/open-thread-april-27-may-4-2014#comment-aRvTvonbwdPjQDyC9
<p>According to information his family graciously posted to his blog, the cause of death was occlusive coronary artery disease with cardiomegaly.</p>
<p><a href="http://blog.sethroberts.net/" class="bare-url">http://blog.sethroberts.net/</a></p>

Gary_Drescher, Tue, 20 May 2014 17:16:29 +0000

Comment by Gary_Drescher on Reflection in Probabilistic Logic
https://www.greaterwrong.com/posts/duAkuSqJhGDcfMaTA/reflection-in-probabilistic-logic#comment-CtRbL4F8PzqxYDJAN
<p>It occurs to me that my references above to “coherence” should be replaced by “coherence & P(T)=1 & reflective consistency”. That is, there exists (if I understand correctly) a P that has all three properties, and that assigns the probabilities listed above. Therefore, those three properties would not suffice to characterize a suitable P for a UDT agent. (Not that anyone has claimed otherwise.)</p>

Gary_Drescher, Tue, 09 Apr 2013 21:00:04 +0000

Comment by Gary_Drescher on Reflection in Probabilistic Logic
https://www.greaterwrong.com/posts/duAkuSqJhGDcfMaTA/reflection-in-probabilistic-logic#comment-stg7j4v8zS5YYRw8h
<p>Wow, this is great work—congratulations! If it pans out, it bridges a really fundamental gap.</p>
<p>I’m still digesting the idea, and perhaps I’m jumping the gun here, but I’m trying to envision a UDT (or TDT) agent using the sense of subjective probability you define. It seems to me that an agent can get into trouble even if its subjective probability meets the coherence criterion. If that’s right, some additional criterion would have to be required. (Maybe that’s what you already intend? Or maybe the following is just muddled.)</p>
<p>Let’s try invoking a coherent P in the case of a simple decision problem for a UDT agent. First, define G <--> P(“G”) < 0.1. Then consider the 5&10 problem:</p>
<ul><li><p>If the agent chooses A, payoff is 10 if ~G, 0 if G.</p>
</li><li><p>If the agent chooses B, payoff is 5.</p>
</li></ul>
<p>And suppose the agent can prove the foregoing. Then unless I’m mistaken, there’s a coherent P with the following assignments:</p>
<p>P(G) = 0.1</p>
<p>P(Agent()=A) = 0</p>
<p>P(Agent()=B) = 1</p>
<p>P(G | Agent()=B) = P(G) = 0.1</p>
<p>And P assigns 1 to each of the following:</p>
<p>P(“Agent()=A”) < epsilon</p>
<p>P(“Agent()=B”) > 1-epsilon</p>
<p>P(“G & Agent()=B”) / P(“Agent()=B”) = 0.1 +- epsilon</p>
<p>P(“G & Agent()=A”) / P(“Agent()=A”) > 0.5</p>
<p>The last inequality is consistent with the agent indeed choosing B, because the postulated conditional probability of G makes the expected payoff given A less than the payoff given B.</p>
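As a quick numeric sanity check of that last step (a sketch, not part of the original argument; the value 0.6 is an arbitrary stand-in for the postulated “greater than 0.5”):

```python
# Postulated conditional probabilities from the coherent P described above.
p_g_given_a = 0.6   # any value > 0.5 works; P("G" | "Agent()=A") > 0.5
p_g_given_b = 0.1   # P("G" | "Agent()=B") = 0.1 (up to epsilon)

eu_a = (1 - p_g_given_a) * 10   # choosing A pays 10 if ~G, 0 if G
eu_b = 5                        # choosing B pays 5 unconditionally

print(eu_a, eu_b)  # 4.0 5
```

Since the expected payoff of A comes out below 5, the agent's choice of B is self-consistent under the postulated P.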
<p>Is that P actually incoherent for reasons I’m overlooking? If not, then we’d need something beyond coherence to tell us which P a UDT agent should use, correct?</p>
<p>(edit: formatting)</p>

Gary_Drescher, Tue, 26 Mar 2013 20:17:57 +0000

Comment by Gary_Drescher on The Cognitive Science of Rationality
https://www.greaterwrong.com/posts/xLm9mgJRPvmPGpo7Q/the-cognitive-science-of-rationality#comment-tdR4kfbvETTqp3dNc
<p>If John’s physician prescribed a burdensome treatment because of a test whose false-positive rate is 99.9999%, John needs a lawyer rather than a statistician. :)</p>

Gary_Drescher, Sun, 11 Sep 2011 20:34:25 +0000

Comment by Gary_Drescher on Example decision theory problem: "Agent simulates predictor"
https://www.greaterwrong.com/posts/q9DbfYfFzkotno9hG/example-decision-theory-problem-agent-simulates-predictor#comment-M6GHaDyYnpFzPSFug
<blockquote>
<p>In April 2010 Gary Drescher proposed the “Agent simulates predictor” problem, or ASP, that shows how agents with lots of computational power sometimes fare worse than agents with limited resources.</p>
</blockquote>
<p>Just to give due credit: Wei Dai and others had already discussed Prisoner’s Dilemma scenarios that exhibit a similar problem, which I then distilled into the ASP problem.</p>

Gary_Drescher, Fri, 27 May 2011 13:17:23 +0000

Comment by Gary_Drescher on Discussion for Eliezer Yudkowsky's paper: Timeless Decision Theory
https://www.greaterwrong.com/posts/etq6bmu3sFjRfzLgt/discussion-for-eliezer-yudkowsky-s-paper-timeless-decision#comment-pMfvpKMyFaAh8W5Ch
<blockquote>
<p>and for an illuminating reason—the algorithm is only run with one set of information</p>
</blockquote>
<p>That’s not essential, though (see the dual-simulation variant in Good and Real).</p>

Gary_Drescher, Wed, 12 Jan 2011 14:30:39 +0000

Comment by Gary_Drescher on Discussion for Eliezer Yudkowsky's paper: Timeless Decision Theory
https://www.greaterwrong.com/posts/etq6bmu3sFjRfzLgt/discussion-for-eliezer-yudkowsky-s-paper-timeless-decision#comment-K94SiM8yTrhsamqgo
<p>Just to clarify, I think your analysis here doesn’t apply to the transparent-boxes version that I presented in Good and Real. There, the predictor’s task is not necessarily to predict what the agent does for real, but rather to predict what the agent would do in the event that the agent sees $1M in the box. (That is, the predictor simulates
what—according to physics—the agent’s configuration would do, if presented with the $1M environment; or equivalently, what the agent’s ‘source code’ returns if called with the $1M argument.)</p>
<p>If the agent would one-box if $1M is in the box, but the predictor leaves the box empty, then the predictor has not predicted correctly, even if the agent (correctly) two-boxes upon seeing the empty box.</p>

Gary_Drescher, Tue, 11 Jan 2011 21:38:52 +0000

Comment by Gary_Drescher on Another attempt to explain UDT
https://www.greaterwrong.com/posts/zztyZ4SKy7suZBpbk/another-attempt-to-explain-udt#comment-QX8k73S38ebP8y9i9
<blockquote>
<blockquote>
<p>2) “Agent simulates predictor”</p>
</blockquote>
<p>This basically says that the predictor is a rock, doesn’t depend on agent’s decision, </p>
</blockquote>
<p>True, it doesn’t “depend” on the agent’s decision in the specific sense of “dependency” defined by currently-formulated UDT. The question (as with any proposed DT) is whether that’s in fact the right sense of “dependency” (between action and utility) to use for making decisions. Maybe it is, but the fact that UDT itself says so is insufficient reason to agree.</p>
<p>[EDIT: fixed typo]</p>

Gary_Drescher, Thu, 18 Nov 2010 17:54:18 +0000

Comment by Gary_Drescher on A problem with Timeless Decision Theory (TDT)
https://www.greaterwrong.com/posts/NSX8RuD9tQ4uWzkk3/a-problem-with-timeless-decision-theory-tdt#comment-ntTpipdXsuizZPzrj
<p>I assume (please correct me if I’m mistaken) that you’re referring to the payout-value as the output of the world program. In that case, a P-style program and a P1-style program can certainly give different outputs for some hypothetical outputs of S (for the given inputs). However, both programs’ payout-outputs will be the same for whatever turns out to be the <em>actual</em> output of S (for the given inputs).</p>
<p>P and P1 have the same causal structure. And they have the same output with regard to (whatever is) the <em>actual</em> output of S (for the given inputs). But P and P1 differ <em>counterfactually</em> as to what the payout-output <em>would be</em> if the output of S (for the given inputs) were different than whatever it actually is.</p>
<p>So I guess you could say that what’s unspecified are the counterfactual consequences of a hypothetical decision, given the (fully specified) physical structure of the scenario. But figuring out the counterfactual consequences of a decision is the main thing that the decision theory itself is supposed to do for us; that’s what the whole Newcomb/Prisoner controversy boils down to. So I think it’s the solution that’s underspecified here, not the problem itself. We need a theory that takes the physical structure of the scenario as input, and generates counterfactual consequences (of hypothetical decisions) as outputs.</p>
<p>PS: To make P and P1 fully comparable, drop the “E*1e9” terms in P, so that both programs model the conventional transparent-boxes problem without an extraneous pi-preference payout.</p>

Gary_Drescher, Sun, 28 Feb 2010 20:40:19 +0000

Comment by Gary_Drescher on A problem with Timeless Decision Theory (TDT)
https://www.greaterwrong.com/posts/NSX8RuD9tQ4uWzkk3/a-problem-with-timeless-decision-theory-tdt#comment-zf7q3EdNL25AD2Wyz
<p>My concern is that there may be several world-programs that correspond faithfully to a given problem description, but that correspond to different analyses, yielding different decision prescriptions, as illustrated by the P1 example above. (Upon further consideration, I should probably modify P1 to include “S()=S1()” as an additional input to S and to Omega_Predict, duly reflecting that aspect of the problem description.)</p>

Gary_Drescher, Sun, 28 Feb 2010 18:22:20 +0000

Comment by Gary_Drescher on A problem with Timeless Decision Theory (TDT)
https://www.greaterwrong.com/posts/NSX8RuD9tQ4uWzkk3/a-problem-with-timeless-decision-theory-tdt#comment-yPk5XMrK2kX2WfTXd
<p>That’s very elegant! But the trick here, it seems to me, lies in the rules for setting up the world program in the first place. </p>
<p>First, the world-program’s calling tree should match the structure of TDT’s graph, or at least match the graph’s (physically-)causal links. The physically-causal part of the structure tends to be uncontroversial, so (for present purposes) I’m ok with just stipulating the physical structure for a given problem.</p>
<p>But then there’s the choice to use the same variable S in multiple places in the code. That corresponds to a choice (in TDT) to splice in a logical-dependency link from the Platonic decision-computation node to other Platonic nodes. In both theories, we need to be precise about the criteria for this dependency. Otherwise, the sense of dependency you’re invoking might turn out to be wrong (it makes the theory prescribe incorrect decisions) or question-begging (it implicitly presupposes an answer to the key question that the theory itself is supposed to figure out for us, namely what things are or are not counterfactual consequences of the decision-computation).</p>
<p>So the question, in UDT1, is: under what circumstances do you represent two real-world computations as being tied together via the same variable in a world-program?</p>
<p>That’s perhaps straightforward if S is implemented by literally the same physical state in multiple places. But as you acknowledge, you might instead have distinct Si’s that diverge from one another for some inputs (though not for the actual input in this case). And the different instances need not have the same physical substrate, or even use the same algorithm, as long as they give the same answers when the relevant inputs are the same, for some mapping between the inputs and between the outputs of the two Si’s. So there’s quite a bit of latitude as to whether to construe two computations as “logically equivalent”.</p>
<p>So, for example, for the conventional transparent-boxes problem, what principle tells us to formulate the world program as you proposed, rather than having:</p>
<pre><code>def P1(i):
    const S1;
    E = (Pi(i) == 0)
    D = Omega_Predict(S1, i, "box contains $1M")
    if D ^ E:
        C = S(i, "box contains $1M")
        payout = 1001000 - C * 1000
    else:
        C = S(i, "box is empty")
        payout = 1000 - C * 1000
</code></pre><p>(along with a similar program P2 that uses constant S2, yielding a different output from Omega_Predict)?</p>
<p>This alternative formulation ends up telling us to two-box. In this formulation, if S and S1 (or S and S2) are in fact the same, they would (counterfactually) differ if a different answer (than the actual one) were output from S—which is precisely what a causalist asserts. (A similar issue arises when deciding what facts to model as “inputs” to S—thus forbidding S to “know” those facts for purposes of figuring out the counterfactual dependencies—and what facts to build instead into the structure of the world-program, or to just leave as implicit background knowledge.)</p>
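To make the contrast concrete, here is a runnable toy sketch (mine, not from the thread): the predictor is modeled as simply running whatever source it is handed, the Pi/E term is dropped for simplicity, and C = 1 means one-boxing. A P-style analysis varies the predicted source and the actual source together; a P1-style analysis holds the predicted source fixed while the actual source varies.

```python
def omega_predict(source, msg):
    # Toy predictor: just run the source code it is handed.
    return source(msg)

def world(S_predicted, S_actual):
    # World program, parameterized by which source the predictor simulates.
    D = omega_predict(S_predicted, "box contains $1M")
    if D:  # predicted one-boxing: the box is filled
        C = S_actual("box contains $1M")
        return 1001000 - C * 1000
    else:  # predicted two-boxing: the box is empty
        C = S_actual("box is empty")
        return 1000 - C * 1000

one_box = lambda msg: 1
two_box = lambda msg: 0

# P-style counterfactual: the same S feeds both the predictor and the agent,
# so varying S varies the prediction too.
print(world(one_box, one_box))  # 1000000
print(world(two_box, two_box))  # 1000

# P1-style counterfactual: the predicted source is held constant (here a
# one-boxer) while the actual S varies, so two-boxing appears to gain $1000.
print(world(one_box, two_box))  # 1001000
```

The third call is exactly the causalist counterfactual: the prediction stays fixed while the action varies, which is what makes the P1 formulation recommend two-boxing.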
<p>So my concern is that UDT1 may covertly beg the question by selecting, among the possible formulations of the world-program, a version that turns out to presuppose an answer to the very question that UDT1 is intended to figure out for us (namely, what counterfactually depends on the decision-computation). And although I agree that the formulation you’ve selected in this example is correct and the above alternative formulation isn’t, I think it remains to explain why.</p>
<p>(As with my comments about TDT, my remarks about UDT1 are under the blanket caveat that my grasp of the intended content of the theories is still tentative, so my criticisms may just reflect a misunderstanding on my part.)</p>

Gary_Drescher, Sun, 28 Feb 2010 16:10:30 +0000

Comment by Gary_Drescher on A problem with Timeless Decision Theory (TDT)
https://www.greaterwrong.com/posts/NSX8RuD9tQ4uWzkk3/a-problem-with-timeless-decision-theory-tdt#comment-NuTsZ7vzbtAfbdYe6
<p>Ok. I think it would be very helpful to sketch, all in one place, what TDT2 (i.e., the envisioned avenue-2 version of TDT) looks like, taking care to pin down any needed sense of “dependency”. And similarly for TDT1, the avenue-1 version. (These suggestions may be premature, I realize.)</p>

Gary_Drescher, Sun, 07 Feb 2010 12:33:27 +0000

Comment by Gary_Drescher on A problem with Timeless Decision Theory (TDT)
https://www.greaterwrong.com/posts/NSX8RuD9tQ4uWzkk3/a-problem-with-timeless-decision-theory-tdt#comment-vuaQ5875JRfcKcntX
<blockquote>
<p>The link between the Platonic decision C and the physical decision D </p>
</blockquote>
<p>No, D was the Platonic simulator. That’s why the nature of the C->D dependency is crucial here.</p>

Gary_Drescher, Sun, 07 Feb 2010 00:43:23 +0000

Comment by Gary_Drescher on A problem with Timeless Decision Theory (TDT)
https://www.greaterwrong.com/posts/NSX8RuD9tQ4uWzkk3/a-problem-with-timeless-decision-theory-tdt#comment-uY9Z77HD8vj6JFAmT
<blockquote>
<p>No, but whenever we see a <em>physical</em> fact F that depends on a decision C/D we’re still in the process of making plus Something Else (E), </p>
</blockquote>
<p>Wait, F depends on decision computation C in what sense of “depends on”? It can’t quite be the originally defined sense (quoted from your email near the top of the OP), since that defines dependency between Platonic computations, not between a Platonic computation and a physical fact. Do you mean that D depends on C in the original sense, and F in turn depends on D (and on E) in a different sense?</p>
<blockquote>
<p>then we express our uncertainty in the form of a <em>causal</em> graph with directed arrows from C to D, D to F, and E to F. </p>
</blockquote>
<p>Ok, but these arrows can’t be used to define the relevant sense of dependency above, since the relevant sense of dependency is what tells us we need to draw the arrows that way, if I understand correctly.</p>
<p>Sorry to keep being pedantic about the meaning of “depends”; I know you’re in thinking-out-loud mode here. But the theory gives wildly different answers depending (heh) on how that gets pinned down.</p>

Gary_Drescher, Sun, 07 Feb 2010 00:02:10 +0000

Comment by Gary_Drescher on A problem with Timeless Decision Theory (TDT)
https://www.greaterwrong.com/posts/NSX8RuD9tQ4uWzkk3/a-problem-with-timeless-decision-theory-tdt#comment-tEj7hRWMX6fkdnETc
<blockquote>
<p>If we go down avenue (1), then we give primacy to our intuition that if-counterfactually you make a different decision, this logically controls the mathematical fact (D xor E) with E held constant, but does not logically control E with (D xor E) held constant. While this does sound intuitive in a sense, it isn’t quite nailed down—after all, D is ultimately just as constant as E and (D xor E), and to change any of them makes the model equally inconsistent. </p>
</blockquote>
<p>I agree this sounds intuitive. As I mentioned earlier, though, nailing this down is tantamount to circling back and solving the full-blown problem of (decision-supporting) counterfactual reasoning: the problem of how to distinguish which facts to “hold fixed”, and which to “let vary” for consistency with a counterfactual antecedent.</p>
<p>In any event, is the idea to try to build a separate graph for math facts, and use that to analyze “logical dependency” among the Platonic nodes in the original graph, in order to carry out TDT’s modified “surgical alteration” of the original graph? Or would you try to build one big graph that encompasses physical and logical facts alike, and then use Pearl’s decision procedure without further modification?</p>
<blockquote>
<p>If we view the physical observation of $1m as telling us the raw mathematical fact (D xor E), and then perform mathematical inference on D, we’ll find that we can affect E, which is not what we want. </p>
</blockquote>
<p>Wait, isn’t it decision-computation C—rather than simulation D—whose “effect” (in the sense of logical consequence) on E we’re concerned about here? It’s the logical dependents of C that get surgically altered in the graph when C gets surgically altered, right? (I know C and D are logically equivalent, but you’re talking about inserting a physical node after D, not C, so I’m a bit confused.)</p>
<p>I’m having trouble following the gist of avenue (2) at the moment. Even with the node structure you suggest, we can still infer E from C and from the physical node that matches (D xor E)—unless the new rule prohibits relying on that physical node, which I guess is the idea. But what exactly is the prohibition? Are we forbidden to infer any mathematical fact from any physical indicator of that fact? Or is there something in particular about node (D xor E) that makes it forbidden? (It would be circular to cite the node’s dependence on C in the very sense of “dependence” that the new rule is helping us to compute.)</p>

Gary_Drescher, Sat, 06 Feb 2010 16:27:33 +0000

Comment by Gary_Drescher on A problem with Timeless Decision Theory (TDT)
https://www.greaterwrong.com/posts/NSX8RuD9tQ4uWzkk3/a-problem-with-timeless-decision-theory-tdt#comment-uk5dN5oJ3iiYQG4fP
<blockquote>
<p> I already saw the $1M, so, by two-boxing, aren’t I just choosing to be one of those who see their E module output True?</p>
</blockquote>
<p>Not if a counterfactual consequence of two-boxing is that the large box (probably) would be empty (even though in fact it is not empty, as you can already see).</p>
<p>That’s the same question that comes up in the original transparent-boxes problem, of course. We probably shouldn’t try to recap that whole debate in the middle of this thread. :)</p>

Gary_Drescher, Fri, 05 Feb 2010 19:17:00 +0000

Comment by Gary_Drescher on A problem with Timeless Decision Theory (TDT)
https://www.greaterwrong.com/posts/NSX8RuD9tQ4uWzkk3/a-problem-with-timeless-decision-theory-tdt#comment-yudWDfgPPPwfmQzsv
<blockquote>
<p>2) Treat differently mathematical knowledge that we learn by genuinely mathematical reasoning and by physical observation. In this case we know (D xor E) not by mathematical reasoning, but by physically observing a box whose state we believe to be correlated with D xor E. This may justify constructing a causal DAG with a node descending from D and E, so a counterfactual setting of D won’t affect
the setting of E.</p>
</blockquote>
<p>Perhaps I’m misunderstanding you here, but D and E are Platonic computations. What does it mean to construct a causal DAG among Platonic computations? [EDIT: Ok, I may understand that a little better now; see my edit to my reply to (1).] Such a graph links together general mathematical facts, so the same issues arise as in (1), it seems to me: Do the links correspond to logical inference, or something else? What makes the graph acyclic? Is mathematical causality even coherent? And if you did have a module that can detect (presumably timeless) causal links among Platonic computations, then why not use that module directly to solve your decision problems?</p>
<p>Plus I’m not convinced that there’s a meaningful distinction between math knowledge that you gain by genuine math reasoning, and math knowledge that you gain by physical observation.</p>
<p>Let’s say, for instance, that I feed a particular conjecture to an automatic theorem prover, which tells me it’s true. Have I then learned that math fact by genuine mathematical reasoning (performed by the physical computer’s Platonic abstraction)? Or have I learned it by physical observation (of the physical computer’s output), and hence be barred from using that math fact for purposes of TDT’s logical-dependency-detection? Presumably the former, right? (Or else TDT will make even worse errors.)</p>
<p>But then suppose the predictor has simulated the universe sufficiently to establish that U (the universe’s algorithm, including physics and initial conditions) leads to there being $1M in the box in this situation. That’s a mathematical fact about U, obtained by (the simulator’s) mathematical reasoning. Let’s suppose that when the predictor briefs me, the briefing includes mention of this mathematical fact. So even if I keep my eyes closed and never physically see the $1M, I can rely instead on the corresponding mathematically derived fact.</p>
<p>(Or more straightforwardly, we can view the universe itself as a computer that’s performing mathematical reasoning about how U unfolds, in which case any physical observation is intrinsically obtained by mathematical reasoning.)</p>

Gary_Drescher, Fri, 05 Feb 2010 18:24:51 +0000

Comment by Gary_Drescher on A problem with Timeless Decision Theory (TDT)
https://www.greaterwrong.com/posts/NSX8RuD9tQ4uWzkk3/a-problem-with-timeless-decision-theory-tdt#comment-9WjXcWnkA2N94GgMg
<blockquote>
<p>1) Construct a full-blown DAG of math and Platonic facts, an account of which mathematical facts make other mathematical facts true, so that we can compute mathematical counterfactuals. </p>
</blockquote>
<p>“Makes true” means logically implies? Why would that graph be acyclic?
[EDIT: Wait, maybe I see what you mean. If you take a pdf of your beliefs about various mathematical facts, and run Pearl’s algorithm, you should be able to construct an acyclic graph.]</p>
<p>Although I know of no worked-out theory that I find convincing, I believe that counterfactual inference (of the sort that’s appropriate to use in the decision computation) makes sense with regard to events in universes characterized by certain kinds of physical laws. But when you speak of mathematical counterfactuals more generally, it’s not clear to me that that’s even coherent.</p>
<p>Plus, if you did have a general math-counterfactual-solving module, why would you relegate it to the logical-dependency-finding subproblem in TDT, and then return to the original factored causal graph? Instead, why not cast the whole problem as a mathematical abstraction, and then directly ask your math-counterfactual-solving module whether, say, (Platonic) C’s one-boxing counterfactually entails (Platonic) $1M? (Then do the argmax over the respective math-counterfactual consequences of C’s candidate outputs.)</p>

Gary_Drescher, Fri, 05 Feb 2010 17:03:45 +0000