Suppose we believe that stock market prices are very good aggregators of information about companies’ future returns. What would be the signs that the “big money” is predicting (a) a positive post-scarcity-type singularity event or (b) an apocalypse scenario, AI-induced or otherwise?
SodaPopinski
This is a disturbing talk from Schmidhuber (who worked with Hutter and with one of the founders of DeepMind at the Swiss AI lab).
I say disturbing because of the last minute, where he basically says we should be thankful for being the stepping stone to the next step in an evolution towards a world run by AIs.
This is the nonsense we see repeated almost everywhere (outside LessWrong): that we should be happy to have humanity supplanted by a more intelligent AI. And here it is coming from a pretty well-known AI researcher… https://www.youtube.com/watch?v=KQ35zNlyG-o
Elon Musk often advocates looking at problems from a first-principles calculation rather than by analogy. My question is: what does this kind of thinking imply for cryonics? Currently, the cost of full-body preservation is around $80k. What could be done in principle with scale?
Ralph Merkle put out a plan (though lacking in details) for cryopreservation at around $4k. This doesn’t seem to account for paying the staff or for transportation. The basic idea is that one can reduce the marginal cost by preserving a huge number of people in one vat. There is some discussion of this going on at Longecity, but the details are still lacking.
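The scale argument is easy to make concrete with a toy amortization model. All the dollar figures below are made up for illustration (they are not Merkle’s numbers): a large shared vat spreads a fixed capital cost over many patients, while the truly per-person costs stay constant.

```python
def cost_per_person(n_patients, fixed_vat_cost, per_person_cost):
    """Toy model: per-person price when one large vat's fixed cost
    is split across n_patients (illustrative numbers only)."""
    return fixed_vat_cost / n_patients + per_person_cost

# Hypothetical figures: a $10M vat plus $2k per patient for
# perfusion, transport, and staff time.
small_scale = cost_per_person(100, 10_000_000, 2_000)     # $102,000 each
large_scale = cost_per_person(10_000, 10_000_000, 2_000)  # $3,000 each
```

The point of the sketch is just that the per-person price is dominated by the fixed cost at small scale and by the marginal cost at large scale, which is where a $4k-type figure would have to come from.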
(Warning: brain dump, most of which is probably not new to the thinking on LW.) I think most people who take the Tegmark Level 4 universe seriously (or any of the preexisting similar ideas) get there by something like the following argument: suppose we had a complete mathematical description of the universe; then what exactly more could there be to make the thing real (Hawking’s “breathes fire into the equations”)?
Here is the line of thinking that got me to buy into it. If we ran a computer simulation, watched the results on a monitor, and saw a person behaving just like us, then it would be easy for me to interpret their world, their mind, etc. as real (even if I could never experience it viscerally, living outside the simulation). However, if we are willing to call one simulation real, then we get into a slippery slope problem, which I have no idea how to avoid, whereby any physical phenomenon implementing any program, from the perspective of any universal Turing machine, must really exist. So it seems to me that if we believe some simulation is real, there is no obvious barrier to believing every (computable) universe exists. As for whether we stop at computable universes or include more of mathematics, I am not sure anything we would call conscious could tell the difference, so perhaps it makes no difference.
(Resulting beliefs, plus an aside on decision theory.) I believe in a Tegmark Level 4 universe with no reality-fluid measure (as I have yet to see a convincing argument for one), along the lines of http://lesswrong.com/r/discussion/lw/jn2/preferences_without_existence/ . Moreover, I don’t think there is any “correct” decision theory that captures what we should be doing. All we can do is pick the one that feels right with regard to our biological programming. Which future entities are us, how many copies of us there will be, whom I should care about, etc. are all flaky concepts at best. Of course, my brain won’t buy into the idea that I should jump off a bridge or touch a hot stove, but I think it is implausible that this will follow from any objective optimization principle. Nature didn’t need a decision theory to decide whether it is a good idea to walk into a teleporter machine if two of us walk out the other side. We have our built-in shabby biological decision theory, and we can innovate on it theoretically, but there is no objective sense in which some particular decision theory will be the right one for us.
Totally agree, and I wish this opinion were voiced more on LW, rather than the emphasis on trying to make a friendly self-improving AI. For this to make sense, though, I think the human race needs to become a singleton, although perhaps that is what Google’s acquisitions and massive government surveillance are already doing.
On one hand, I think the world is already somewhat close to a singleton with regard to AI (obviously it is nowhere near a singleton with regard to most other things). Google has a huge fraction of the AI talent; the US government has a huge fraction of the mathematics talent; then there are Microsoft, FB, Baidu, and a few other big tech companies. Every time an independent AI company gains some traction, it seems to be bought out by the big guys. I think this is a good thing, as I believe the big guys will act in their own best interest, including their interest in preserving their own lives (i.e., not ending the world). Of course, if it is easy to make an AGI, then there is no hope anyway. But if it requires companies of Google’s scale, then there is hope they will choose to avoid it.
One part is writing down whatever dreams I can remember right upon waking. This has led to me occasionally experiencing lucid dreams without really trying.
Also, since I am writing down dreams anyway, this makes it easy to do the other writing which I find beneficial: namely, writing the major plan of the day and gratitude stuff.
How do Bostrom-type simulation arguments normally handle nested simulations? If our world spins off simulations A and B, and B spins off C and D, then how do we assign the probability of finding ourselves in each of those? Also troubling to me is what happens if a world simulates itself, or if simulations A and B simulate each other. Is there a good way to think about this?
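One naive convention, purely illustrative and not something Bostrom specifies, is to put uniform weight on every world reachable in the simulation tree. The sketch below shows why the self-simulation cases are troubling: the traversal still terminates on a cyclic graph, but a uniform weight over a graph with cycles no longer reflects who simulates whom, so the normalization loses its justification.

```python
def uniform_measure(sim_graph, root):
    """Assign equal probability to every world reachable from root.
    sim_graph maps a world to the list of simulations it spins off.
    The seen-set keeps cycles from looping forever, but on a cyclic
    graph the uniform weighting is no longer obviously meaningful."""
    seen, stack = set(), [root]
    while stack:
        world = stack.pop()
        if world in seen:
            continue
        seen.add(world)
        stack.extend(sim_graph.get(world, []))
    return {world: 1 / len(seen) for world in seen}

# Our world spins off A and B; B spins off C and D.
tree = {"ours": ["A", "B"], "B": ["C", "D"]}
probs = uniform_measure(tree, "ours")  # each of the 5 worlds gets 1/5
```

Other conventions (e.g., splitting a parent’s weight among its children) give different answers, which is exactly the question being asked.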
Are there things we should be doing now to take advantage of future technology? What I mean is something like people banking umbilical cord fluid for potential future stem cell uses. Another example: if we had taken a lot of pictures of a historical building which is now gone, then we could use modern-day photogrammetry to make a 3D model of it. A potential current example: suppose we recorded a ton of our day-to-day vocal communication; then, some day in the future, a new machine learning algorithm could make use of the data. So what I am looking for is whether there are any potential “missed opportunities” of this type we should be considering. (I posted a similar question on the Futurology subreddit.)
The idea of a persistent personal identity has no physical basis. I am not questioning consciousness; I am only saying that the mental construct that there is an ownership of some particular sequence of conscious feelings over time is inconsistent with reality (as I would argue all the teleporter-type thought experiments show). So in my view, all that matters is how much a certain entity X decides (or instinctually feels) it should care about some similar-seeming later entity Y.
The problem is the mental construct of “I”. Yes, we can’t help but believe that there is feeling, thinking, subjective experience, etc. The problem is that our brain seems to naturally construct a concept of “I” which is a sort of owner of these subjective experiences, one that persists over time. This construct, while deeply ingrained and probably useful, is not consistent with physical reality. This can be seen either with teleporter-type thought experiments or, to some extent, with real-life cases of brain trauma (for example, in Oliver Sacks’s or Ramachandran’s books). Our brains care about protecting some potential future entities, which, barring crazy technology or anthropic scenarios, are easy to specify, but there is not going to be a coherent general principle for deciding when we should count potential future entities as being us.
What do we really understand about the perception of time speeding up as we get older? Every time I have seen it brought up, one of two explanations is given. The first is that time seems to speed up because we have fewer novel experiences, which in turn leads to fewer new memories being created. Supposedly, our feeling of time passing depends on how many new memories we form in a given time frame, and so we feel time is speeding up.
The other explanation I have seen is that time speeds up because each new year is a smaller percentage of your life up to that point. For example, it is easier to distinguish a 2 kg weight from a 4 kg weight than a 50 kg weight from a 52 kg weight; the argument is that a similar thing holds for our perception of time passing.
These arguments both feel sketchy to me. Is there a more rigorous investigation into this question?
Can we use the stock market itself as a useful prediction market in any way? For example, can we get useful information about how long Moore’s-law-type growth in microprocessors will likely continue based on how much the market values certain companies? Or are there too many auxiliary factors, so that reverse-engineering anything interesting from price information is hopeless?
What is the best way to handle police interactions in countries you don’t live in? In the US, it is generally considered pretty wise to exercise your right to remain silent extensively. Obviously, in some really corrupt places you’re just going to have to go along with whatever they want. But what about the various countries in Europe? My instinct would be to respectfully tell the officer I would like to call my embassy (and to have that number with me!).
Is it useful to think about the difference between “physically possible” (i.e., obeying the laws of physics) and “possible to engineer”? In computer science there is something like this distinction. There are things which can’t be done on a Turing machine at all (e.g., the halting problem). But then there are things which we may never be able to arrange the atoms of the universe to do, such as solving large instances of NP-hard problems.
So what about in physics? I have seen the argument that if we set loose a paperclip maximizer on Earth, then we might doom the rest of the observable universe. But maybe there is simply no sequence of steps that even a super-brilliant AI could take to arrange matter in such a way as to, say, move 1000 kg at 98% of the speed of light. Anyway, I am curious whether this kind of thinking is developed somewhere.
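The 1000 kg example can at least be bounded from below with special relativity, which puts a hard floor on the energy required regardless of the propulsion scheme. A back-of-envelope sketch (the comparison to world energy use is my own rough estimate, not a claim from the original comment):

```python
import math

C = 2.998e8  # speed of light, m/s

def kinetic_energy(mass_kg, beta):
    """Relativistic kinetic energy (gamma - 1) * m * c^2
    for a mass moving at beta * c."""
    gamma = 1 / math.sqrt(1 - beta**2)
    return (gamma - 1) * mass_kg * C**2

energy = kinetic_energy(1000, 0.98)
# Roughly 3.6e20 J -- comparable in order of magnitude to a year
# of world primary energy consumption.
```

So the question is not whether the energy exists in principle, but whether any engineerable arrangement of matter could deliver it to one object.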
Are there any good nonfiction books on cryonics? All I could find is this one: http://www.amazon.com/Freezing-People-Not-Easy-Adventures/dp/0762792957/ref=sr_1_1?ie=UTF8&qid=1441303378&sr=8-1&keywords=cryonics . I started to read it, but it is more historical and autobiographical. Also, do you think there would be demand for a well-researched book on cryonics for general audiences?
If we obtained a good understanding of the beginning of life and found that the odds of life occurring at some point in our universe were one in a million, what exactly would follow from that? Sure, the Fermi paradox would be settled, but would this give credence to multiverse/big-world theories, or does the fact that the information is anthropically biased tell us nothing at all? Finally, if we don’t have to suppose a multiverse to account for a vanishingly small probability of life, wouldn’t it be surprising if there were not a lot of hugely improbable jumps in the formation of intelligent life?
Where can I find the most coherent anti-FOOM argument (outside of the FOOM debate)? [That is, I’m looking for arguments for the possibility of not having an intelligence explosion if we reach near human level AI, the other side is pretty well covered on LW.]
Do we know whether quantum mechanics could rule out acausal trade between partners outside each other’s light cones? Perhaps it is impossible to model someone so far away precisely enough to get a utility gain out of an acausal trade? I started thinking about this after reading the wiki article on the free will theorem: https://en.wikipedia.org/wiki/Free_will_theorem .
What is the current status of formalizing timeless decision theory? I am new to LW, have a mathematics background, and would like to work on decision theory (in the spirit of LW). However, all I can find are some old posts (2011) of Eliezer’s saying that write-ups are in progress, as well as a 120-page report by Eliezer from MIRI which mostly discusses TDT in words, along with the related philosophical problems. Is there a formal, self-contained definition of TDT out there?