Are there any good nonfiction books on cryonics? All I could find was this one: http://www.amazon.com/Freezing-People-Not-Easy-Adventures/dp/0762792957/ref=sr_1_1?ie=UTF8&qid=1441303378&sr=8-1&keywords=cryonics . I started to read it, but it is more historical and autobiographical. Also, do you think there would be demand for a well-researched book on cryonics for general audiences?
In the same vein, I would highly recommend John Maynard Smith's "Evolution and the Theory of Games". It has many well-motivated examples of game theory in biology, written by a real biologist. The later chapters get dense, but the first half is readable with a basic knowledge of calculus (which was in fact my background when I first picked up the book).
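For a taste of the kind of example the book is built around, here is my paraphrase of its signature Hawk-Dove game (a contest over a resource of value V, where an escalated fight costs C):

```latex
\[
\begin{array}{c|cc}
 & H & D \\
\hline
H & \tfrac{V-C}{2} & V \\
D & 0 & \tfrac{V}{2}
\end{array}
\]
% For C > V neither pure strategy is evolutionarily stable. The ESS is the
% mixed strategy "play Hawk with probability p", where indifference between
% Hawk and Dove against the population mix gives
\[
p\,\tfrac{V-C}{2} + (1-p)\,V \;=\; (1-p)\,\tfrac{V}{2}
\quad\Longrightarrow\quad p = \tfrac{V}{C}.
\]
```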
Elon Musk often advocates approaching problems with a first-principles calculation rather than by analogy. My question is what this kind of thinking implies for cryonics. Currently, the cost of full-body preservation is around $80k. What could be done in principle with scale?
Ralph Merkle put out a plan (although lacking in details) for cryopreservation at around $4k. This doesn't seem to account for paying the staff or for transportation. The basic idea is that one can reduce the marginal cost by preserving a huge number of people in one vat. There is some discussion of this going on at Longecity, but the details are still lacking.
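As a toy first-principles model (all numbers below are hypothetical placeholders I made up, not Merkle's actual figures), the marginal-cost intuition looks like this:

```python
# Toy fixed-vs-marginal cost model for shared cryopreservation vats.
# All numbers are hypothetical placeholders, not from Merkle's plan.
fixed_cost_per_vat = 500_000   # assumed: vat construction, facility, insurance
marginal_per_person = 1_500    # assumed: liquid-nitrogen share, perfusion supplies
staff_and_transport = 2_000    # assumed: the per-person costs the $4k figure may omit

for n in (1, 10, 100, 1000):
    per_person = fixed_cost_per_vat / n + marginal_per_person + staff_and_transport
    print(f"{n:>4} people per vat: ${per_person:,.0f} each")
```

The point is just that the fixed costs amortize away quickly, so the real floor is set by the per-person terms, which is exactly where the plan is vague.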
This is a disturbing talk from Schmidhuber (who worked at the Swiss AI lab IDSIA with Hutter and with one of the founders of DeepMind).
I say disturbing because of the last minute, where he basically says we should be thankful for being the stepping stone to the next step in an evolution toward a world run by AIs.
This is the nonsense we see repeated almost everywhere (outside LessWrong): that we should be happy to have humanity supplanted by the more intelligent AI. And here it is coming from a pretty well-known AI researcher... https://www.youtube.com/watch?v=KQ35zNlyG-o
What is the best way to handle police interactions in countries you don't live in? In the US, it is generally considered pretty wise to exercise your right to remain silent extensively. Obviously, in some really corrupt places you're just going to have to go along with whatever they want. But what about the different countries in Europe? My instinct would be to respectfully tell the officer I would like to call my embassy (and to have that number with me!).
What is the current status of formalizing timeless decision theory? I am new to LW, have a mathematics background, and would like to work on decision theory (in the spirit of LW). However, all I can find are some old posts (2011) of Eliezer saying that write-ups are in process, as well as a 120-page MIRI report by Eliezer which mostly discusses TDT in words, along with the related philosophical problems. Is there a formal, self-contained definition of TDT out there?
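For concreteness, here is the one-line schematic I have pieced together from the informal description in the report (the notation is entirely mine, so treat it as a guess at what a formalization would look like rather than an official definition):

```latex
\[
\mathrm{TDT}(\mathcal{M}) \;=\; \operatorname*{arg\,max}_{a \in \mathcal{A}}
\sum_{o \in \mathcal{O}} U(o)\,
\Pr\!\big(o \;\big|\; \operatorname{do}(\mathsf{Alg} = a)\big)
\]
% where \mathcal{M} is a causal graph augmented with a logical node \mathsf{Alg}
% standing for "the output of this very decision algorithm", so the surgery
% \operatorname{do}(\mathsf{Alg} = a) propagates to every instantiation of the
% algorithm (e.g., Omega's prediction in Newcomb's problem), not just this body.
```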
This is a really fascinating idea, particularly the aspect that we can influence the likelihood that we are in a simulation by making simulations more likely to happen.
To boil it down to a simple thought experiment: suppose I am in a future where we have a ton of computing power, and I know something bad will happen tomorrow (say, I'll be fired) barring some 1/1000-likelihood quantum event. No problem: I'll just make millions of simulations of the world with me in my current state, scripted so that tomorrow the 1/1000 event happens. Since I'm almost certainly in one of these simulations I'm about to make, I'm saved!
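A minimal sketch of the self-locating arithmetic behind this (toy numbers, and assuming a uniform prior over indistinguishable copies of me):

```python
# Toy self-location arithmetic for the "simulate my way out of being fired" trick.
n_sims = 1_000_000         # simulations I commit to running of my current state
p_quantum_save = 1 / 1000  # chance the saving event happens in the base world

# Uniform prior over the n_sims + 1 indistinguishable copies of "me right now":
p_in_sim = n_sims / (n_sims + 1)

# In the simulations I script the saving event to happen with certainty.
p_saved = p_in_sim * 1.0 + (1 - p_in_sim) * p_quantum_save
print(f"P(in a simulation) = {p_in_sim:.6f}")
print(f"P(saved tomorrow)  = {p_saved:.6f}")
```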
What is a computation? Intuitively, some (say binary) states of the physical world are changed: voltage gates switched, rocks moved around (https://xkcd.com/505/), whatever.
Now, in general, if these physical changes were made with some intention, as in my CPU or by the guy moving rocks in the xkcd comic, then I think of them as a computation, and consequently I would care, for example, about whether the computation I performed simulated a conscious entity. However, surely my or my computer's intention can't be what makes the physical state changes count as a computation. But then how do we get around the slippery slope where everything is computing everything imaginable? There are billions of states I can interpret as 1s and 0s which get transformed in countless different ways every time I stir my coffee. Even worse, in quantum mechanics the state of a point is given by a potentially infinitely wiggly function. What stops me from interpreting all of this as computation which, under some encoding, gives rise to countless Boltzmann-brain-type conscious entities and simulated worlds?
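Here is a toy version of the worry (a construction of my own, in the spirit of Putnam-style triviality arguments): given any physical trajectory and any target computation, I can always cook up a "decoder" that makes the former implement the latter.

```python
# Toy triviality construction: map an arbitrary physical trajectory onto an
# arbitrary target computation via a cooked-up decoder.
physical_trajectory = [0.31, 0.77, 0.12, 0.98]  # stand-in for coffee states over time
target_computation = ["00", "01", "10", "11"]   # states of a 2-bit counter

# The "decoder" is just a lookup table keyed on (time step, physical state).
decoder = {(t, s): target_computation[t]
           for t, s in enumerate(physical_trajectory)}

decoded = [decoder[(t, s)] for t, s in enumerate(physical_trajectory)]
assert decoded == target_computation  # the coffee "implements" the counter
print(decoded)
```

Of course, all the computational structure here lives in the decoder rather than in the coffee, which suggests the fix has to be some restriction on admissible encodings; I just don't know a principled one.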
Can we use the stock market itself as a useful prediction market in any way? For example, can we get useful information about how long Moore's-law-type growth in microprocessors is likely to continue based on how much the market values certain companies? Or are there too many auxiliary factors, so that reverse engineering anything interesting from price information is hopeless?
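As a sketch of why I suspect the reverse engineering is hard: even the simplest valuation model (the textbook Gordon growth model, with numbers I made up) shows that the implied growth rate is entangled with assumptions you can't read off the price.

```python
# Back out the growth rate implied by a (hypothetical) chipmaker's valuation,
# using the textbook Gordon growth model: P = D / (r - g)  =>  g = r - D / P.
price_per_share = 100.0  # hypothetical
dividend = 2.0           # hypothetical annual dividend per share
discount_rate = 0.08     # assumed required return -- not observable from the price

implied_growth = discount_rate - dividend / price_per_share
print(f"implied perpetual growth: {implied_growth:.1%}")
```

The answer moves one-for-one with the assumed discount rate, so the "prediction" you extract is mostly your own auxiliary assumptions.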
(Warning: brain dump, most of which is probably not new to the thinking on LW.) I think most people who take the Tegmark Level 4 universe seriously (or any of the preexisting similar ideas) get there by something like the following argument: suppose we had a complete mathematical description of the universe; then what more, exactly, could there be to make the thing real (Hawking's question of what breathes fire into the equations)?
Here is the line of thinking that got me to buy into it. If we ran a computer simulation, watched the results on a monitor, and saw a person behaving just like us, then it would be easy for me to interpret their world, their mind, etc. as real (even if I could never experience it viscerally, living outside the simulation). However, if we are willing to call one simulation real, then we get into a slippery-slope problem, which I have no idea how to avoid, whereby any physical phenomenon implementing any program, from the perspective of any universal Turing machine, must really exist. So it seems to me that if we believe some simulation is real, there is no obvious barrier to believing every (computable) universe exists. As for whether we stop at computable universes or include more of mathematics, I am not sure anything we would call conscious could tell the difference, so perhaps it makes no difference.
(Resulting beliefs, plus an aside on decision theory.) I believe in a Tegmark Level 4 universe with no reality-fluid measure (as I have yet to see a convincing argument for one), a la http://lesswrong.com/r/discussion/lw/jn2/preferences_without_existence/ . Moreover, I don't think there is any "correct" decision theory that captures what we should be doing. All we can do is pick the one that feels right with regard to our biological programming. Which future entities are us, how many copies of us there will be, whom I should care about, etc. are all flaky concepts at best. Of course, my brain won't buy the idea that I should jump off a bridge or touch a hot stove, but I think it is implausible that this will follow from any objective optimization principle. Nature didn't need a decision theory to decide whether it is a good idea to walk into a teleporter machine if two of us walk out the other side. We have our built-in, shabby biological decision theory, and we can innovate on it theoretically, but there is no objective sense in which some particular decision theory will be the right one for us.
CellBioGuy, all your astrobiology posts are great; I'd be happy to read all of those. This may be off the astrobiology topic, but I would love to see a post with your opinion on the FOOM question. For example, do you agree with Gwern's post about there not being complexity limitations preventing runaway self-improving agents?
Where can I find the most coherent anti-FOOM argument (outside of the FOOM debate)? [That is, I'm looking for arguments for the possibility of not having an intelligence explosion if we reach near-human-level AI; the other side is pretty well covered on LW.]
Is it useful to think about the difference between 'physically possible' (i.e., obeying the laws of physics) and 'possible to engineer'? In computer science there is something like this: you have things which can't be done on a Turing machine (e.g., the halting problem), but then you also have things which we may never be able to arrange the atoms in the universe to do, such as solving large instances of NP-hard problems.
So what about in physics? I have seen the argument that if we set loose a paperclip maximizer on Earth, then we might doom the rest of the observable universe. But maybe there is simply no sequence of steps that even a super-brilliant AI could take to arrange matter in such a way as to, say, move 1000 kg at 98% of the speed of light. Anyway, I am curious whether this kind of thinking is developed somewhere.
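As a back-of-envelope check on that particular example (rough constants, and the "world annual energy use" comparison is an approximate figure):

```python
import math

# Kinetic energy needed to move 1000 kg at 0.98c (special relativity).
c = 2.998e8   # speed of light, m/s
m = 1000.0    # mass, kg
v = 0.98 * c

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
kinetic_energy = (gamma - 1) * m * c**2
print(f"gamma = {gamma:.2f}, KE = {kinetic_energy:.2e} J")

# For scale: world annual primary energy use is very roughly 6e20 J.
print(f"~{kinetic_energy / 6e20:.1f} x world annual energy consumption")
```

So the energy alone is merely civilization-scale rather than physically absurd; the open question is whether any arrangement of matter gets you the engineering (propulsion, heat dissipation, and so on), which is exactly the gap I mean.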
The problem is the mental construct of "I". Yes, we can't help but believe that there is feeling, thinking, subjective experience, etc. The problem is that our brain seems to naturally construct a concept of "I" as a sort of owner of these subjective experiences that persists over time. This construct, while deeply ingrained and probably useful, is not consistent with physical reality. This can be seen either with teleporter-type thought experiments or, to some extent, with real-life cases of brain trauma (for example, in Oliver Sacks's or Ramachandran's books). Our brains care about protecting some potential future entities, which, barring crazy technology or anthropic scenarios, are easy to specify; but there is not going to be a coherent general principle to decide when we should count potential future entities as being us.
I believe Dyson is saying there could indeed be an infinite amount. Here is the Wikipedia article about it, https://en.wikipedia.org/wiki/Dyson%27s_eternal_intelligence , and the paper itself: http://www.aleph.se/Trans/Global/Omega/dyson.txt
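As I understand the core trick of the paper (my compressed paraphrase, not Dyson's notation): run each successive phase of thought at a lower temperature, so a fixed amount of subjective processing per phase costs geometrically less energy.

```latex
\[
T_k = T_0\, 2^{-k}, \qquad
E_{\text{total}} \;\sim\; \sum_{k=0}^{\infty} E_0\, 2^{-k} \;=\; 2E_0 \;<\; \infty,
\]
% while the total subjective time \sum_k \Delta t_k diverges: infinitely many
% thoughts from a finite energy budget, provided waste heat can always be
% radiated away at ever-lower temperatures.
```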
This is a very interesting part of an interview with Freeman Dyson, where he talks about how computation could go on forever even if the universe faces a heat-death scenario. https://www.youtube.com/watch?v=3qo4n2ZYP7Y
The idea of a persistent personal identity has no physical basis. I am not questioning consciousness, only saying that the mental construct that there is an ownership of some particular sequence of conscious feelings over time is inconsistent with reality (as I would argue all the teleporter-type thought experiments show). So in my view, all that matters is how much a certain entity X decides (or instinctually feels) it should care about some similar-seeming later entity Y.
How do Bostrom-type simulation arguments normally handle nested simulations? If our world spins off simulations A and B, and B spins off C and D, then how do we assign the probabilities of finding ourselves in each of those? Also troubling to me is what happens if you have a world that simulates itself, or simulations A and B that simulate each other. Is there a good way to think about this?
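One concrete way to pose the first half of the question (a toy measure of my own devising, weighting each world by the compute its parent devotes to it):

```python
# Toy self-location measure over a nested simulation tree.
# Each simulation is assumed to get a fixed share of its parent's compute.
sims = {
    "base": ["A", "B"],  # the base world runs simulations A and B
    "A": [],
    "B": ["C", "D"],     # B runs C and D
    "C": [],
    "D": [],
}
CHILD_SHARE = 0.1  # hypothetical: total fraction of compute spent on children

def weights(world, w=1.0, acc=None):
    acc = {} if acc is None else acc
    acc[world] = w
    for child in sims[world]:
        weights(child, w * CHILD_SHARE / len(sims[world]), acc)
    return acc

ws = weights("base")
total = sum(ws.values())
for world in sorted(ws):
    print(f"P(I am in {world}) = {ws[world] / total:.4f}")
```

The self-simulating and mutually-simulating cases break this immediately: the "tree" becomes a cyclic graph, the recursion never terminates, and the naive measure is undefined, which is why those cases feel troubling to me too.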
One part is writing down whatever dreams I can remember right upon waking. This has led to me occasionally experiencing lucid dreams without really trying.
Also, since I am writing down dreams anyway, this makes it easy to do the other writing which I find beneficial: namely, the major plan for the day and gratitude stuff.
Are there things we should be doing now to take advantage of future technology? What I mean is something like banking umbilical cord blood for potential future stem-cell uses. Another example: if we had taken a lot of pictures of a historical building which is now gone, we could use modern-day photogrammetry to make a 3D model of it. A potential current example: suppose we recorded a ton of our day-to-day vocal communication; then, some day in the future, a new machine-learning algorithm could make use of the data. So what I am looking for is whether there are any potential 'missed opportunities' of this type we should be considering. (I posted a similar question on the Futurology subreddit.)