I, too, want to save the world.
Handle: dvasya (from Darth Vasya)
Name: Vasilii Artyukhov
Location: Houston, TX (USA)
Age: 26
Occupation: physicist doing computational nanotechnology/materials science/chemistry, currently in a postdoctoral position at Rice University. I'm also remotely connected to the anti-aging field, as well as to cryopreservation. I'm not personally interested in AI because I don't understand it very well (though I do appreciate its importance), but who knows—maybe that could change with prolonged exposure to LW :)
This seems sort of similar to the famous “Quantum suicide” concept, although less elegant :-)
We can view Time 2 as the result of an experiment where 8 copies of you start the game and are assigned to two groups, W and L (4 copies in each); then a coin is tossed, and if it comes up heads, we kill 75% of the L group. After such an experiment, given that you're alive, you should expect to be in the W group with 80% probability. Time 3 (not shown in your figure) corresponds to a second round of the experiment, when no coin is tossed but simply 75% of the W group are exterminated. So if you're doing a single suicide, expect to be in W with 80% probability, but if you're doing a double-suicide experiment, then expect a 50% probability of being in W. If before exterminating the W group you dump all their memories into the one surviving copy, then expect a 50% chance of being in L and a 50% chance of being in W, but with four times as many memories of having been in W after the first stage of the experiment. Finally, you can also easily calculate all the probabilities if you find yourself in group W after the first suicide but are unsure which version of the game you're playing. I think that's what Bostrom's answer corresponds to.
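The two figures above (80% after one round, 50% after two) can be checked by just counting surviving copies. A minimal sketch, assuming the setup described (8 copies, 4 per group, heads kills 75% of L, the second round kills 75% of W):

```python
from fractions import Fraction

# Round 1, conditioning on heads: 75% of the 4 L-copies are killed,
# leaving 4 survivors in W and 1 survivor in L.
w, l = 4, 1
p_w_after_one = Fraction(w, w + l)
print(p_w_after_one)  # 4/5 -> the 80% figure

# Round 2 (no coin): 75% of the 4 W-copies are also killed,
# leaving 1 survivor in each group.
w = 1
p_w_after_two = Fraction(w, w + l)
print(p_w_after_two)  # 1/2 -> the 50% figure
```

Self-sampling over surviving copies is doing all the work here: the probability of being in a group is just that group's share of the survivors.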
The suicide experiment is designed more clearly, and it helps here. What would change if you had an exactly 100% probability of winning (the original Quantum suicide)? What if it were the reverse? And if you have a non-zero probability of either outcome, what if you just view it as the appropriately weighted sum of the two extremes?
Hilarious and 100% true! Thank you! The only thing I might add to this is that in Huve’s theory, information is created out of nowhere.
Sorry for the length, but this is a nice sketch of the role of rationality in science :)
A glossary for research reports
Scientific term (Actual meaning)
It has long been known that. . . . (I haven’t bothered to look up the original reference)
. . . of great theoretical and practical importance (. . . interesting to me)
While it has not been possible to provide definite answers to these questions . . . (The experiments didn’t work out, but I figured I could at least get a publication out of it)
The W-Pb system was chosen as especially suitable to show the predicted behaviour. . . . (The fellow in the next lab had some already made up)
High-purity || Very high purity || Extremely high purity || Super-purity || Spectroscopically pure . . . (Composition unknown except for the exaggerated claims of the supplier)
A fiducial reference line . . . (A scratch)
Three of the samples were chosen for detailed study . . . (The results on the others didn’t make sense and were ignored)
. . . accidentally strained during mounting (. . . dropped on the floor)
. . . handled with extreme care throughout the experiments (. . . not dropped on the floor)
Typical results are shown . . . (The best results are shown)
Although some detail has been lost in reproduction, it is clear from the original micrograph that . . . (It is impossible to tell from the micrograph)
Presumably at longer times . . . (I didn’t take time to find out)
The agreement with the predicted curve is excellent (fair) || good (poor) || satisfactory (doubtful) || fair (imaginary) || . . . as good as could be expected (non-existent)
These results will be reported at a later date (I might possibly get around to this sometime)
The most reliable values are those of Jones (He was a student of mine)
It is suggested that || It is believed that || It may be that . . . (I think)
It is generally believed that . . . (A couple of other guys think so too)
It might be argued that . . . (I have such a good answer to this objection that I shall now raise it)
It is clear that much additional work will be required before a complete understanding . . . (I don’t understand it)
Unfortunately, a quantitative theory to account for these effects has not been formulated (Neither does anybody else)
Correct within an order of magnitude (Wrong)
It is to be hoped that this work will stimulate further work in the field (This paper isn’t very good, but neither are any of the others in this miserable subject)
Thanks are due to Joe Glotz for assistance with the experiments and to John Doe for valuable discussions (Glotz did the work and Doe explained what it meant)
C. D. Graham, Jr., Metal. Progress 71, 75 (1957) (actual source)
Future possibilities will often resemble today’s fiction, just as robots, spaceships, and computers resemble yesterday’s fiction. How could it be otherwise? Dramatic new technologies sound like science fiction because science fiction authors, despite their frequent fantasies, aren’t blind and have a professional interest in the area.
...
This may seem too good to be true, but nature (as usual) has not set her limits based on human feelings.
K. Eric Drexler, Engines of Creation, Chapter 6
Arthur C. Clarke?
I don’t think this was intended as a criticism of science… ;)
Actually, +1 for William Gibson!
‘Dyson sphere’ is a very broad term encompassing several distinct types of design, including very light ones.
Space elevator is awesome, but there exist much more clever alternative designs that have substantially lower requirements for material strength, as well as geographical positioning—this is also a huge issue with the original space elevator design. It is a beautiful idea, but that doesn’t mean we should cling to it and ignore all other proposals :)
I started assembling links but then realized that Wikipedia is a good starting point: it provides a nice summary of all the most notable designs: tethers, bolas, orbital rings, pneumatic towers, the Lofstrom Loop… Each has its own drawbacks, but the important thing is that they do not require nonexistent (even if theoretically possible) materials.
Clever ways to get to space are often covered at Next Big Future, including the author’s own nuclear cannon proposal—this one quite literally follows Jules Verne :-)
Nice. I saw the presentation of the project this May at the Suspended Animation conference and have been wondering where they’re building it.
The project apparently aims at three goals: 1) centralized storage of patients, 2) centralized research labs, 3) showing off.
Goal 3 seems dominant: those guys think that “cryonics with class” can attract more wealthy people, whose resources can then go into the development of revival technology. But they do want to build some labs there, too. Oh yes, and the main person behind this project is Saul Kent.
So… how did it go?
minds are behavior-executors and not utility-maximizers
May I request that you put this in boldface?
tied so intimately to … (the easy problem, not the hard one), that one would hope that a new approach to one...
The diverse use of the word “one” in this sentence makes it amusingly perplexing on the first read or two (at least, for a non-native speaker such as Y.T.) :)
A funny example of how two different complex adaptations can stably coexist within the same sex is Sepia apama, although this does involve cross-dressing. (I learned this from the BBC Life TV series, which is just amazing. I strongly recommend it to everybody.)
I’m pointing out that guilt as a signal you won’t (can’t) defect is made useless by having a system to remove guilt.
It’s not like John is removing his own guilt for breaking Lisa’s neck and taking the antidote by making himself believe that what really happened is that she actually gave it to him in his coffee and then died of poisoning. Here, guilt is removed sort of from the outside, by society, which actually seems to make sense from the point of view of removing false positives (false from society’s point of view) while keeping all the social benefits of guilt. But, true, this mechanism can be exploited by making up imaginary friends.
Looks like it’s going to be in the season finale: http://en.wikipedia.org/wiki/Futurama_(season_6)#Episodes (“Overclockwise”)
Is something wrong with the RSS feed? The last posts I can see there are dated early June.
Thank you very much, Eneasz, it’s really awesome!