This is one of those things that seems obvious in retrospect, but it made some things click for me that I hadn’t thought of before. Previously, my picture of AGI becoming uncontrollable was basically that somebody would make a superintelligent AGI in a box, that we would be able to unplug it anytime we wanted, and that the real danger would be the AGI tricking us into not unplugging it and letting it out of the box instead. What changed this view was this line: “Try to unplug Bitcoin.” Once you think of it that way, it does seem pretty obvious that the most powerful algorithms, the ones that would likely first become superintelligent, would be distributed and fault-tolerant, as you say, and therefore would not be in a box of any kind to begin with.
Sorry to necro this here, but I find this topic extremely interesting and I keep coming back to this page to stare at it and tie my brain in knots. Thanks for your notes on how it works in the logically uncertain case. I found a different objection based on the assumption of logical omniscience:
Regarding this you say:
Perhaps you think that the problem with the above version is that I assumed logical omniscience. It is unrealistic to suppose that agents have beliefs which perfectly respect logic. (Un)Fortunately, the argument doesn’t really depend on this; it only requires that the agent respects proofs which it can see, and eventually sees the Löbian proof referenced.
However, this assumes that the Löbian proof exists. We show that the Löbian proof of A=cross→U=−10 exists by showing that the agent can prove □(A=cross→U=−10)→(A=cross→U=−10), and the agent’s proof seems to assume logical omniscience:
Examining the agent, either crossing had higher expected utility, or P(cross)=0. But we assumed □(A=cross→U=−10), so it must be the latter. So the bridge gets blown up.
If □ here means “provable in PA”, the logic does not follow through if the agent is not logically omniscient: the agent might find crossing to have a higher expected utility regardless, because it may not have seen the proof. If □ here instead means “discoverable by the agent’s proof search” or something to that effect, then the logic seems to follow through (making the reasonable assumption that if the agent can discover a proof of A=cross→U=−10, then it will set its expected value for crossing to −10). However, that would mean we are talking about provability in a system which can only prove finitely many things, which in particular cannot contain PA, and so Löb’s theorem does not apply.
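To make that distinction concrete, here is a minimal toy sketch (my own illustration, not anything from the post): provable_in_pa stands for the unbounded provability the logically omniscient reading requires, while agent_proof_search stands for the agent’s own finite search, which can only ever endorse finitely many statements. Both are hypothetical stand-ins.

```python
# Toy contrast between the two readings of □ discussed above. Neither function
# is a real theorem prover; both are hypothetical stand-ins for illustration.

def provable_in_pa(statement: str) -> bool:
    # Reading 1: "provable in PA". Not decidable in general, so only a
    # logically omniscient agent can be assumed to respect every true instance.
    raise NotImplementedError("unbounded provability is not computable")

def agent_proof_search(statement: str, max_proof_length: int = 10**6) -> bool:
    # Reading 2: "discoverable by the agent's proof search". A real agent can
    # only enumerate proofs up to some finite length, so only finitely many
    # statements ever come back True -- a notion of provability too weak for
    # Löb's theorem to apply.
    return False  # stub: a real agent would enumerate proofs up to the bound
```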
I am still trying to wrap my head around exactly what this means, since your logic seems unassailable in the logically omniscient case. It is counterintuitive to me that the logically omniscient agent would be susceptible to trolling but the more limited one would not. Perhaps there is a clever way for the troll to get around this issue? I dunno. I certainly have no proof that such an agent cannot be trolled in such a way.
Suppose you learn about physics and find that you are a robot. You learn that your source code is “A”. You also believe that you have free will; in particular, you may decide to take either action X or action Y.
My motivation for talking about logical counterfactuals has little to do with free will, even if the philosophical analysis of logical counterfactuals does.
The reason I want to talk about logical counterfactuals is as follows: suppose, as above, that I learn that I am a robot, that my source code is “A” (which is presumed to be deterministic in this scenario), and that I have a decision to make between action X and action Y. In order to make that decision, I want to know which decision has better expected utility. The problem is that, in fact, I will either choose X or Y. Suppose without loss of generality that I will end up choosing action X. Then worlds in which I choose Y are logically incoherent, so how am I supposed to reason about the expected utility of choosing Y?
Funny you mention AlphaGo, since the first time AlphaGo (or indeed any computer) beat a professional Go player (Fan Hui), it was distributed across multiple computers. Only later did it become strong enough to beat top players while running on a single computer.
Seems to me that if an agent with a reasonable heuristic for logical uncertainty came upon this problem, and was confident but not certain of its own consistency, it would simply cross because the expected utility of crossing would be above zero, which is a reason that doesn’t betray an inconsistency. (Besides, if it survived, it would have good third-party validation of its own consistency, which would probably be pretty useful.)
It’s hard to tell, since while common sense is sometimes wrong, it’s right more often than not. An idea being common sense shouldn’t count against it, even though, as the article said, it’s not conclusive.
The impact of an event on you is the difference between the expected value of your utility function given certainty that the event will happen, and the current expected value of your utility function.
More formally, we say that the expected value of your utility function is the sum, over all possible worldstates X, of P(X)*U(X), while the expected value of your utility function given certainty that a statement E about the world is true is the sum over all possible worldstates X of P(X|E)*U(X). The impact of E being true, then, is the absolute value of the difference of those two quantities.
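A minimal sketch of that calculation; the world-states, probabilities, and utilities below are made-up placeholders purely for illustration:

```python
# Impact of an event E, computed as described above: the absolute difference
# between expected utility under P(X|E) and expected utility under P(X).
# All numbers here are arbitrary examples.

prior     = {"sunny": 0.6, "rainy": 0.3, "storm": 0.1}     # P(X)
posterior = {"sunny": 0.2, "rainy": 0.3, "storm": 0.5}     # P(X | E)
utility   = {"sunny": 10.0, "rainy": 4.0, "storm": -20.0}  # U(X)

def expected_utility(prob):
    return sum(prob[x] * utility[x] for x in prob)

impact_of_E = abs(expected_utility(posterior) - expected_utility(prior))
print(impact_of_E)  # |(-6.8) - 5.2| = 12.0
```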
Non-Archimedean utility functions seem kind of useless to me. Since no action is going to avoid moving the probability of every outcome by at least 1/3^^^3 or so, absolutely any action is important only insofar as it impacts the highest lexical level of utility. So you might as well just call that your utility function.
What are the rules about program runtime?
When “pure thought” tells you that 1 + 1 = 2, “independently of any experience or observation”, you are, in effect, observing your own brain as evidence.
I mean, yeah? You can still do that in your armchair, without looking at anything outside of yourself. Mathematical facts are indeed “discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe,” if you modify the statement a little to say “anywhere else existent” in order to acknowledge that the operation of thought itself exists in the universe. Do mathematical facts exist independently of the universe? Maybe, maybe not; it probably depends on what you mean by “exist,” and it doesn’t really matter either way, since you can’t discover any mathematical facts without using your brain, which is in the universe. So there’s no observable difference between whether Platonic math exists or not.
“Free will” is a useful concept which should be kept, even though it has been used to refer to nonsensical things. Just because you can’t will what you will doesn’t mean we shouldn’t be able to talk about willing what you do. Similarly, just because you can’t get knowledge without thinking doesn’t mean we shouldn’t be able to use “a priori knowledge” to talk about getting knowledge without looking.
I think an important consideration is the degree of catastrophe. Even the asteroid strike, which is catastrophic to many agents on many metrics, is not catastrophic on every metric, not even every metric humans actually care about. An easy example of this is prevention of torture, which the asteroid impact accomplishes quite smoothly, along with almost every other negative goal. The asteroid strike is still very bad for most agents affected, but it could be much, much worse, as with the “evil” utility function you alluded to, which is very bad for humans on every metric, not just positive ones. Calling both of these things a “catastrophe” seems to sweep that difference under the rug.
With this in mind, “catastrophe” as defined here seems to be less about negative impact on utility and more about wresting control of the utility function away from humans. That seems bound to happen even in the best case, where an FAI takes over. It seems a useful concept if that is what you are getting at, but “catastrophe” has confusing connotations, as if a “catastrophe” is necessarily the worst thing possible and should be avoided at all costs. If an antialigned “evil” AI were about to be released with high probability, and you had a paperclip maximizer in a box, releasing the paperclip maximizer would be the best option, even though that moves the chance of catastrophe from high probability to indistinguishable from certainty.
We’re talking about the impact of an event though. The very question is only asking about worlds where the event actually happens.
If I don’t know whether an event is going to happen and I want to know the impact it will have on me, I compare futures where the event happens to my current idea of the future, based on observation (which also includes some probability mass for the event in question, but not certainty).
In summary, I’m not updating to “X happened with certainty”; rather, I am estimating the utility in that counterfactual case.
The proof doesn’t work on a logically uncertain agent. The logic fails here:
Examining the source code of the agent, because we’re assuming the agent crosses, either PA proved that crossing implies U=+10, or it proved that crossing implies U=0.
A logically uncertain agent does not need a proof of either of those things in order to cross, it simply needs a positive expectation of utility, for example a heuristic which says that there’s a 99% chance crossing implies U=+10.
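For concreteness, here is that arithmetic with the payoffs from the problem (+10 for crossing safely, −10 if the bridge blows up, 0 for not crossing) and the 99% heuristic just mentioned; the exact numbers are only an example:

```python
# Expected utility of crossing for a logically uncertain agent that assigns a
# heuristic 99% probability to "crossing implies U = +10" and 1% to U = -10.
p_good = 0.99
eu_cross = p_good * 10 + (1 - p_good) * (-10)  # = 9.8
eu_stay = 0.0                                  # not crossing gives U = 0
print(eu_cross > eu_stay)                      # True: it crosses without needing any proof
```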
Though you did say there’s a version which still works for logical induction. Do you have a link to where I can see that version of the argument?
Edit: Now I think I see the logic. On the assumption that the agent crosses but also proves that crossing implies U=−10, the agent must have a contradiction somewhere. And the logical-uncertainty agents I’m aware of do end up with a contradiction upon proving crossing implies U=−10, because they then prove that they will not cross, and then immediately cross, in a maneuver meant to prevent exactly this kind of problem.
Wait, but proving that crossing implies U=−10 does not mean they prove that they will not cross, exactly because they might still cross if they have a contradiction.
God this stuff is confusing. I still don’t think the logic holds though.
That’s the funniest thing I’ve seen all day.
Is everybody’s code going to be in Python?
Because assuming Provable(C)->C as a hypothesis doesn’t allow you to prove C. Rather, the fact that a proof exists of Provable(C)->C allows you to construct a proof of C.
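For reference, the statement of Löb’s theorem being used here: if PA ⊢ (□C → C), then PA ⊢ C; in its formalized form, PA ⊢ □(□C → C) → □C. The hypothesis is the existence of a proof of Provable(C)→C, not permission to assume Provable(C)→C as a premise.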
That’s beside the point. In the first case you’d take 1A in the first game and 2A in the second game (a 34% chance of living is better than 33%). In the second case, if you bothered to play at all, you’d probably take 1B/2B. What doesn’t make sense is taking 1A and 2B. That policy is inconsistent no matter how you value different amounts of money (unless you don’t care about money at all, in which case do whatever; the paradox is better illustrated with something you do care about), so things like risk, capital cost, diminishing returns, etc. are beside the point.
In this case the only reason the money pumping doesn’t work is that Omega is unable to choose its policy based on its prediction of your second decision: if it could, you would want to switch back to b, because if you chose a, Omega would know that and you’d get 0 payoff. This makes the situation after the coin flip different from the original problem, where Omega is able to see your decision and make its decision based on that.
In the Allais problem as stated, there’s no particular reason why the situation where you get to choose between $24,000 for sure, or $27,000 with a 33/34 chance, differs depending on whether someone just offered it to you, or offered it to you only after you got ≤34 on a d100.
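To spell out why the 1A/2B policy can’t be expected-utility maximization (my own sketch, using the standard numbers: 1A = $24,000 for sure, 1B = 33/34 chance of $27,000; 2A = 34% chance of $24,000, 2B = 33% chance of $27,000): for any utility assignment, the preference gap between 2A and 2B is exactly 0.34 times the gap between 1A and 1B, so the two choices must point the same way.

```python
# For any utilities u0, u24, u27 of $0, $24,000, $27,000:
#   EU(2A) - EU(2B) = 0.34 * (EU(1A) - EU(1B)),
# so an expected-utility maximizer ranks 1A vs 1B and 2A vs 2B the same way.
# The example utilities below are arbitrary placeholders.

def preference_gaps(u0, u24, u27):
    eu_1a = u24
    eu_1b = (33 / 34) * u27 + (1 / 34) * u0
    eu_2a = 0.34 * u24 + 0.66 * u0
    eu_2b = 0.33 * u27 + 0.67 * u0
    return eu_1a - eu_1b, eu_2a - eu_2b

gap1, gap2 = preference_gaps(u0=0.0, u24=1.0, u27=1.02)
print(gap1, gap2, abs(gap2 - 0.34 * gap1) < 1e-9)  # True: second gap is 0.34 times the first
```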
Thanks for the link, I will check it out!
My worry with automation isn’t that it will destroy the intrinsic value of human endeavors, but rather that it will destroy the economic value of the average person’s endeavors. I agree that human art is still valuable even if AI can make better art. My concern is that under the current system of production, where people must contribute to society in a competitive way in order to secure an income and a living for themselves, full automation will be materially harmful to everyone who doesn’t own the automated systems.