lol. Not a problem if you’re Jewish ;)
xamdam
No! Not for a second! I immediately began to think how this could have happened. And I realized that the clock was old and was always breaking. That the clock probably stopped some time before, and the nurse coming into the room to record the time of death would have looked at the clock and jotted down the time from that. I never made any supernatural connection, not even for a second. I just wanted to figure out how it happened.
-- Richard P. Feynman, on being asked if he thought that the fact that his wife’s favorite clock had stopped the moment she died was a supernatural occurrence; quoted from Al Seckel, “The Supernatural Clock”
This might be stupid (I am pretty new to the site and this possibly has come up before), but I had a related thought.
Assuming boxing is possible, here is a recipe for producing an FAI:
Step 1: Box an AGI
Step 2: Tell it to produce a provable FAI (with the proof) if it wants to be unboxed. It will be allowed to carve off a part of the universe for itself in the bargain.
Step 3: Examine FAI the best you can.
Step 4: Pray
Correct. I do assume that to maximize whatever, it wants to be unboxed. (If it does not care to be unboxed, it’s at worst a UselessAI.)
True about the possibility of the AGI trying to trick you. But from what I understand, the goal of SI is to come up with a verifiable FAI. You can specify whatever high standard of verifiability you want as the unboxing condition.
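The unboxing condition from Step 2 can be sketched as a gate: nothing leaves the box unless an independent checker accepts the accompanying proof. A toy Python sketch, with all names hypothetical and a trivial stand-in for the checker (a real one would be a formal proof verifier):

```python
def unbox_if_verified(candidate, proof, checker):
    """Release the candidate only if the checker accepts its proof."""
    return candidate if checker(candidate, proof) else None

# Trivial stand-in checker: accepts only one specific token as a "proof".
# A real unboxing condition would be as strict a verifier as you like.
demo_checker = lambda c, p: p == "valid-proof"

print(unbox_if_verified("FAI-design", "valid-proof", demo_checker))  # released
print(unbox_if_verified("FAI-design", "handwaving", demo_checker))   # stays boxed (None)
```

The point is only that the human side chooses the checker, i.e., the standard of verifiability, before anything is released.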
Sounds like good rules of thumb, though one would think DARPA should be using something a little more formal, such as Decision Analysis methodology.
http://decision.stanford.edu/library/the-principles-and-applications-of-decision-analysis-1
For one, value of acquiring information did not make the list. Maybe this was a dumbed-down version.
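The value of acquiring information does have a standard quantitative form in Decision Analysis: the expected value of perfect information (EVPI). A toy illustration with made-up numbers (not from the linked material):

```python
# Made-up numbers: a project succeeds with p = 0.3, paying 100 on
# success and -40 on failure; doing nothing pays 0.
p_success = 0.3
payoff_success, payoff_failure = 100, -40

# Best action without information: invest only if expected value > 0.
ev_invest = p_success * payoff_success + (1 - p_success) * payoff_failure
ev_no_info = max(ev_invest, 0.0)

# With perfect information, you invest only in the cases that succeed.
ev_perfect_info = p_success * payoff_success + (1 - p_success) * 0.0

# EVPI: the most you should rationally pay for the information.
evpi = ev_perfect_info - ev_no_info
print(ev_no_info, ev_perfect_info, evpi)
```

Here the project is barely worth doing blind (EV of 2), but learning the outcome in advance is worth 28, which is exactly the kind of consideration a "rules of thumb" list can miss.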
Nice recap of psychological biases from the Charlie Munger school (of hard knocks and making a billion dollars).
http://www.capitalideasonline.com/articles/index.php?id=3251
Not countersignaling as such, but an interesting related question. If you do a favor for somebody without them knowing, should you tell? On one hand you might feel it’s bragging (so NOT telling might be seen as, or confused with, countersignaling). On the other, there is an element of social reciprocity that is a communal benefit, and the information contributes to it. Cialdini (in ‘Yes!’) and the Talmud (I do not remember where) specifically say yes, even though the Talmud generally places a high value on humility.
Hey, a religious cult can be a great vehicle for rationality! (Stranger in a Strange Land)
I read the original article and some of the other PJE material. I think he’s really onto something. This is how far I got:
Identify the ’10% controlling part’
Everything else is not under direct control (which is where most self-help methods fail)
It is under indirect control
So far makes sense from personal experience/general knowledge.
Here are my methods for indirect control.
This is the part that I remain skeptical about. Not PJE’s fault, but I do need more data/experience to confirm.
I said what I did mostly for the entertainment value, so essentially I am not going to defend it. I will say that some religions are more rational than others, open to science, and “believe” that their faith is logical. Rational (Maimonidean) Judaism has a lot of these ideas, as did Islam at some points. Christianity had its share of rational people, but if you start with nonsense like the Trinity you cannot get too far, IMO.
Can someone please link to the posts in question for the latecomers?
An alternative explanation? You put your energy into solving a practical problem with a large downside (minimizing the loss function in nerdese). Yes, to be perfectly rational you should have said: “the guy is probably lying, but if he is not then...”.
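The “the guy is probably lying, but if he is not then...” reasoning is just expected-loss minimization. A minimal sketch with hypothetical numbers, showing how a small probability times a large downside can dominate the decision:

```python
# Hypothetical numbers: even if the claim is probably false
# (p_true = 0.05), a large enough downside makes acting on it
# the loss-minimizing choice.
p_true = 0.05
loss_if_ignored_and_true = 1000   # large downside if the claim was real
loss_of_acting = 10               # small fixed cost of taking it seriously

expected_loss_ignore = p_true * loss_if_ignored_and_true
expected_loss_act = loss_of_acting

best = "act" if expected_loss_act < expected_loss_ignore else "ignore"
print(best)
```

With these numbers, ignoring costs 50 in expectation versus 10 for acting, so the "perfectly rational" move is to take the unlikely claim seriously.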
“Are You a Bayesian or a Frequentist?”—video lecture by Michael Jordan
Interesting Peter Norvig interview
Transcription is a very good candidate for being crowdsourced: everyone can just do a little bit, especially if you start from speech recognition output.
Just pointing out
I did not manage the interview, just made the introduction, but I will convey your input to the reddit guys.
Despite the lack of a transcript, the linked reddit thread has some metadata: you can read the question and jump right to the answer. That is not very agonizing.
‘Stop having interviews’ - are you serious? Seriously? I mean, the reddit guys put a lot of work into this; just ask yourself whether you’d rather have the information available or not. It’s like telling any free lecturer not to speak unless they give you the lecture notes first.
Pretty much, with the addition that they are working on the prerequisites for perhaps other reasons, but I suspect with the AGI potential in mind. I do not remember which of the 3 CEOs said this, but they know that the ultimate search output is NOT pages of ranked links; it is the answer to your search query. That smacks of an AGI-hard problem.
Somewhat related:
“The reasonable man adapts himself to the world; the unreasonable man persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man. (George Bernard Shaw)”
This somehow reminds me of the stories about Tom Schelling trying to quit smoking by using game theory against himself (or his other self). The other self in question was not the unconscious, but the conscious “decision-making” self in different circumstances, so that discussion is somewhat orthogonal to this one. I think he did things like promising to donate to the American Nazi Party if he smoked. Not sure how that round ended, but he did finally quit.