I don’t mind agreeing not to metagame, but I don’t see why metagaming is terrible. You may prefer an absence of metagaming, but that’s a subjective preference, not a fact. Personally, I would have thought that metagaming would make for a more interesting game, unless players who have just backstabbed everyone start refusing to play further games, though that would presumably be frowned upon.
Why is that discussion on Google Groups rather than LW?
Especially in light of the recent thread which seemed to conclude that Alcor is superior to CI, I’ve been thinking about the discrepancy between Alcor membership fees and the cost of life insurance. Membership fees are a fixed rate independent of age and probability of death, while life insurance premiums vary with both. This means that the ratio of cost to likelihood of death is far higher for younger prospective cryonauts, and this triggers my sense of economic unfairness/inefficiency.
For instance, with data from Alcor, assuming neurosuspension and the extra charge, as I live in the UK:

Membership: $1.70 per day

29-year-old female
Insurance: $0.46 per day; membership-to-insurance ratio 3.7:1

62-year-old male, universal life (premiums cannot go up as you get older)
Insurance: $5.10 per day; membership-to-insurance ratio 0.33:1
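As a quick sanity check on those ratios, a minimal sketch in Python (only the per-day figures quoted above go in; everything else is arithmetic):

```python
# Sanity check of the membership-to-insurance ratios quoted above.
MEMBERSHIP_PER_DAY = 1.70  # Alcor membership, USD per day

cases = {
    "29-year-old female": 0.46,               # insurance, USD per day
    "62-year-old male, universal life": 5.10,
}

for person, insurance_per_day in cases.items():
    ratio = MEMBERSHIP_PER_DAY / insurance_per_day
    print(f"{person}: ratio {ratio:.2g}:1")
# -> roughly 3.7:1 and 0.33:1, matching the figures above
```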
Of course, Alcor can be regarded as a good cause, so membership fees are not a waste of money; however, they are probably not the optimal use of that money.
This leads to two questions:
Why is it structured like this, so that younger members are effectively subsidizing older ones?
Are there sensible third options in terms of provider? Plastination could be an alternative, but does anyone actually offer plastination?
Hello everyone! Like many people, I come to this site via an interest in transhumanism, although it seems unlikely to me that an FAI implementing CEV can actually be designed before the singularity (I can explain why, and possibly even what could be done instead, but it suddenly occurred to me that it seems presumptuous to criticize a theory put forward by very smart people when I only have 1 karma...).
Oddly enough, I am not interested in improving epistemic rationality right now, partially because I am already quite good at it. But more than that, I am trying to switch it off when talking to other people, for the simple reason (and I’m sure this has already been pointed out before) that if you compare three people, one who estimates the probability of an event at 110%, one who estimates it at 90%, and one who compensates for overconfidence bias and estimates it at 65%, the first two will win friends and influence people, while the third will seem indecisive (unless they are talking to other rationalists). I think I am borderline Asperger’s (again, like many people here), and optimizing social skills probably takes precedence over most other things.
I am currently doing a PhD in “absurdly simplistic computational modeling of the blatantly obvious”, which had better damn well have some signaling value. In my spare time, to stop my brain turning to mush, I am, among other things, writing a story which is sort of rationalist, in that some of the characters keep using science effectively even when the world is going crazy and the laws of physics seem to change depending on whether you believe in them. On the other hand, some of the characters are (a) heroes/heroines, (b) awesomely successful, and (c) hippies on acid who do not believe in objective reality (not that I am implying that all hippies, or all people who use LSD, are irrational). Maybe the point of the story is that you need more than just rationality? Or that some people are powerful because of rationality, while others have imagination, and friendship combines their powers in a My Little Pony-like fashion? Or maybe it’s all just an excuse for pretentious philosophy and psychic battles?
I agree that a 10% chance of success is better than near zero, and furthermore I agree that expected utility maximization means that putting in a great deal of effort to achieve a positive outcome is wiser than saying “oh well, we’re doomed anyway, might as well party hard and make the most of the time we have left”. However, the question is whether, if FAI has a low probability of success, other possibilities, e.g. tool AI, are a better option to pursue.
Excellent points, and of course it is situation-dependent: if one makes erroneous predictions in archived forms of communication, e.g. these posts, then yes, those predictions can come back to haunt you. But often, especially in non-archived communication, people will remember the correct predictions and forget the false ones. It should go without saying that I do not intend to be overconfident on LW; if I were going to be, the last thing I would do is announce the intention! In a strange way, I seem to want to hold three different beliefs:
1) An accurate assessment of what will happen, for planning my own actions.
2) A confident, stopping just short of arrogant, belief in my predictions, for impressing non-rationalists.
3) An unshakeable belief in my own invincibility, so that psychosomatic effects keep me healthy.
Unfortunately, this kinda sounds like “I want to have multiple personality disorder”.
In certain situations, such as sporting events which do not involve betting, my confidence that (0.65·C3 - 0.35·C1) < (0.65·C4 - 0.35·C2) is at most 10%. In these situations confidence is valued far more than epistemic rationality.
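For concreteness, a toy version of that comparison (the labels are my guesses, since the original does not define the C’s: C3/C1 as the social reward/penalty when a confident prediction turns out right/wrong, C4/C2 as the same for a hedged one, and the numbers purely illustrative):

```python
# Toy expected-value comparison of confident vs. hedged prediction styles.
# Assumed meanings (my labels, not defined in the comment above):
#   C3/C1 = social reward/penalty when a confident prediction is right/wrong
#   C4/C2 = social reward/penalty when a hedged prediction is right/wrong
P_RIGHT = 0.65  # probability the prediction turns out correct

def expected_value(reward, penalty):
    return P_RIGHT * reward - (1 - P_RIGHT) * penalty

ev_confident = expected_value(reward=10, penalty=8)  # 0.65*C3 - 0.35*C1 = 3.7
ev_hedged = expected_value(reward=4, penalty=1)      # 0.65*C4 - 0.35*C2 = 2.25

print(ev_confident > ev_hedged)  # True: here, confidence wins in expectation
```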
But remember that it’s not just your own rationality that benefits you.
Are you saying that improving epistemic rationality is important because it benefits others as well as myself? This is true, but there are many other forms of self-improvement that would also have knock-on effects that benefit others.
I have actually read most of the relevant sequences, epistemic rationality really isn’t low-hanging fruit anymore for me, although I wish I had known about cognitive biases years ago.
Ok, I see what you mean now. Yes, this is often true, but again, I am trying to be less preachy (at least IRL) about rationality. If someone believes in astrology, or faith healing, or reincarnation, then: (a) their beliefs probably bring them comfort; (b) trying to persuade them is often like banging my head against a brick wall; and (c) even the notion that there can be such a thing as a correct fact, independent of subjective mental states, is very threatening to some people, and I don’t want to start pointless arguments.
So unless they are acting irrationally in a way which harms other people, or they seem capable of having a sensible discussion, or I am drunk, I tend to leave them be.
Ok—although maybe I should stick it in its own thread?
I realize much of this has been said before.
Part 1: AGI will come before FAI, because:
Complexity of algorithm design:
Intuitively, FAI seems orders of magnitude more complex than AGI. If I decided to start trying to program an AGI tomorrow, I would have ideas on how to start, and maybe even make a minuscule amount of progress. Ben Goertzel even has a (somewhat optimistic) roadmap for AGI in a decade. Meanwhile, afaik, FAI is still stuck at the stage of Löb’s theorem.
The fact that EY seems to be focusing on promoting rationality and writing (admittedly awesome) Harry Potter fanfiction seems to indicate that he doesn’t currently know how to write FAI (and nor does anyone else), otherwise he would be focusing on that now; instead, he is planning for the long term.

Computational complexity:

CEV requires modelling (and extrapolating) every human mind on the planet, while avoiding the creation of sentient entities. While modelling might be cheaper than ~10^17 flops per human due to shortcuts, I doubt it’s going to come cheap; see the rough figures below. Randomly sampling a subset of humanity to extrapolate from, at least initially, could make this problem less severe. Furthermore, the problem can be partially circumvented by saying that the AI follows a specific utility function while bootstrapping to enough computing power to implement CEV, but then you have the problem of allowing it to bootstrap safely. Having to prove the friendliness of each step in self-improvement strikes me as something that could also be costly. Finally, I get the impression that people are considering using Solomonoff induction. It’s uncomputable, and while I realize that approximations exist, I would imagine that these would be extremely expensive for calculating anything non-trivial. Is there any reason for using SI for FAI more than for AGI, e.g. something to do with provability about the program’s actions?
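On that modelling cost, a back-of-the-envelope sketch (the population and sample-size numbers are round figures of my own; only the ~10^17 flops-per-human estimate comes from the text above):

```python
# Back-of-the-envelope cost of modelling every human mind, using the
# ~1e17 flops-per-human figure mentioned above (all numbers are rough).
FLOPS_PER_HUMAN = 1e17
WORLD_POPULATION = 7e9

print(f"everyone: {FLOPS_PER_HUMAN * WORLD_POPULATION:.0e} flops")  # 7e+26

# Random sampling, as suggested above, scales the cost down linearly:
SAMPLE_SIZE = 1e6  # hypothetical sample of a million people
print(f"sample:   {FLOPS_PER_HUMAN * SAMPLE_SIZE:.0e} flops")       # 1e+23
```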
Infeasibility of relinquishment:

If you can’t convince Ben Goertzel that FAI is needed, even though he is familiar with the arguments and is an advisor to SIAI, you’re not going to get anywhere near a universal consensus on the matter. Furthermore, AI is increasingly being used in financial, and possibly soon military, applications, so there are strong incentives to speed up the development of AI. While these uses are unlikely to be full AGI, they could provide building blocks: I can imagine a plausible situation where an advanced AI that predicts the stock market could easily be modified into a universal predictor.
The most powerful incentive to speed up AI development is the sheer number of people who die every day, and the amount of negentropy lost if the 2nd law of thermodynamics cannot be circumvented. Even if there were a worldwide ban on non-provably-safe AGI, work would probably still continue in secret, carried out by people who thought the benefits of an earlier singularity outweighed the risks, and/or who were worried about ideologically opposed groups getting there first.

Financial bootstrapping:

If you are ok with running a non-provably-friendly AGI, then even in the early stages, when, for example, your AI can write simple code or make reasonably accurate predictions but cannot yet speak English or make plans, you can use these abilities to earn money and buy more hardware/programmers. This seems to be part of the approach Ben is taking.
Coming in Part II: is there any alternative? (And doing nothing is not an alternative! Even if FAI is unlikely to work, it’s better than giving up!)
Thanks for the advice, but I don’t actually want to have multiple personality disorder—I was just drawing an analogy.
I wasn’t aware of any actual practical implementations of SI. That link isn’t talking about SI, but it’s similar, and really impressive. Something similar to Optimal Ordered Problem Solver induction sounds like a sensible approach to formalizing induction.
I was talking about provably CEV-implementing AI because there seems to be a consensus on LW that this is the correct approach to take.
P(provably CEV-implementing AI ∨ other FAI ∨ AGI turns out to be friendly anyway ∨ safe singularity for any other reason) is quite a lot higher than P(provably CEV-implementing AI).
I assume most of what I said has already been said before because I’m sure intelligent people will have thought of it, but I cannot recall actually reading discussions of most of these points (except for relinquishment), and I have read a lot of LW and related sites. Because of this, I don’t have links to previous discussions of these points.
I’ll try to add more citations to future posts.
I’m already aware of that paper, but it seems to me that MC-AIXI is more similar to MC tree search than to SI. I’m quite impressed with the effectiveness of MC tree search for Go.
Ok, they both do tree search over a space, whether it is the space of strategies or the space of programs. That does make sense.
I think my initial reaction to SI was very negative: even without the halting problem, simply testing every program of length < n is crazy. By comparison, I could imagine some kind of tree search, possibly weighted by heuristics, being efficient.
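To make the contrast concrete, a toy sketch (purely my own illustration, not a real SI approximation): brute-force enumeration of every bit-string “program” of length < n, versus best-first expansion of the program tree under the 2^-length prior, which visits short, high-prior programs first.

```python
import heapq
from itertools import product

# Toy contrast (illustration only): "programs" are bit strings, and
# SI-style weighting gives a program of length L the prior 2**-L.

def brute_force(n):
    """Enumerate every program of length < n: 2**n - 2 of them."""
    for length in range(1, n):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def weighted_search(n, budget):
    """Best-first expansion of the program tree, highest 2**-L prior first."""
    frontier = [(-1.0, "")]  # (negated prior, program prefix)
    visited = []
    while frontier and len(visited) < budget:
        _, prog = heapq.heappop(frontier)
        visited.append(prog)
        if len(prog) < n - 1:
            for bit in "01":
                child = prog + bit
                heapq.heappush(frontier, (-(2.0 ** -len(child)), child))
    return visited

print(sum(1 for _ in brute_force(20)))  # 1048574 programs to test exhaustively
print(weighted_search(20, 8))           # just the 8 highest-prior prefixes
```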
We could be in a simulation where the programmer is in fact testing it on every program of length < n! Doesn’t seem so bad.
Isn’t Kurzweil the one arguing that the singularity is going to happen in 2045, and who cares about confidence intervals?
Some of my friends actually leverage this into justification for buying sugar in pharmacies at a ridiculous markup. I confess to being aghast whenever this happens in my presence.
If one believes that the placebo effect is real, it still doesn’t justify this sort of activity when it’s possible to buy sugar from a supermarket, put it into pills, and take those, because even if you know it’s a placebo, it still works.
Furthermore, if placebos had no effect, how would you explain the fact that “a placebo can reduce pain by both opioid and non-opioid mechanisms… In the first case, placebo analgesia is typically blocked by the opioid antagonist naloxone, whereas in the second case it is not”? This shows that there are objectively measurable chemical changes taking place, and, as has been said elsewhere, the brain does affect the body, the deleterious effects of chronic stress being the most obvious example.
Hey, I’ve been lurking on LW for ages, and I thought this would be a good time to make my first post.
So, while the game is anonymous now, are we going to compare notes afterwards? I just want to know if I am playing the prisoner’s dilemma or the iterated prisoner’s dilemma.
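For anyone wondering why the distinction matters, a minimal simulation (standard textbook payoffs; the strategies are my own toy examples): in the one-shot game, defection dominates, but once moves are remembered across rounds, a retaliating strategy makes cooperation pay.

```python
# One-shot vs. iterated prisoner's dilemma: why the distinction matters.
# Standard payoff convention: temptation > reward > punishment > sucker.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        total_a += pay_a
        total_b += pay_b
    return total_a, total_b

print(play(always_defect, tit_for_tat))  # (14, 9): one exploit, then mutual loss
print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation is sustained
```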