I don’t think decreasing existential risk falls into it, because the probability of an existential catastrophe isn’t extremely small. One survey taken at Oxford predicted a ~19% chance of human extinction before 2100. Determining the probability of existential catastrophe is very challenging and the aforementioned statistic should be viewed skeptically, but a probability anywhere near 19% would still (as far as I can tell) prevent it from falling prey to Pascal’s mugging.
I see what you mean. I don’t really know enough about Pascal’s mugging to determine whether decreasing existential risk by one millionth of a percent is worth it, but it’s a moot point, as it seems reasonable that existential risk could be reduced by far more than one millionth of a percent.
Are there any decent arguments saying that working on trying to develop safe AGI would increase existential risk? I’ve found none, but I’d like to know because I’m considering developing AGI as a career.
Edit: What about AI that’s not AGI?
Thanks. That really helps. Do you know of any decent arguments suggesting that working on trying to develop safe tool AI (or some other non-AGI AI) would increase existential risk?
Is there any justification for the leverage penalty? I understand that it would apply if there were a finite number of agents, but if there are infinitely many agents, couldn’t every agent affect an arbitrarily large number of other agents? Shouldn’t the prior probability instead be P(event A | n agents will be affected) = (1 / n) + P(there being infinite entities)? If this is the case, then it seems the leverage penalty won’t stop one from being mugged.
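To make the worry concrete with a toy calculation (just my rough sketch, writing e for P(there being infinite entities) and supposing the mugger claims to affect n agents):

Expected payoff with the leverage penalty alone: n * (1 / n) = 1
Expected payoff with the modified prior: n * ((1 / n) + e) = 1 + n*e

Since 1 + n*e grows without bound as n grows (for any e > 0), the mugger can still name an n large enough to dominate the calculation, which is why the penalty no longer seems to block the mugging.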
How important is trying to personally live longer for decreasing existential risk? IMO, it seems that most of the risk of existential catastrophe occurs sooner rather than later, so I doubt living much longer is extremely important. For example, Wikipedia says that a study at the Singularity Summit found that the median predicted date for the singularity is 2040, and one person gave an 80% confidence interval of 5–100 years. Nanotechnology also seems to be predicted to come sooner rather than later. What does everyone else think?
For one, Yudkowsky, in Artificial Intelligence as a Positive and Negative Factor in Global Risk, says that an artificial general intelligence could potentially use its super-intelligence to decrease existential risk in ways we haven’t thought of. Additionally, I suspect (though I am rather uninformed on the topic) that Earth-originating life will be much less vulnerable once it spreads away from Earth, as I think many catastrophes would be local to a single planet. I suspect catastrophes from nanotechnology would be one such example.
Then this solution just assumes the probability of there being infinite people is 0. If this solution is based on premises that are probably false, then how is it a solution at all? I understand that infinity creates even bigger problems, so we should instead just call your solution a pseudo-solution-that’s-probably-false-but-is-still-the-best-one-we-have, and dedicate more effort to finding a real solution.
What evidence is there for floating beliefs being uniquely human? As far as I know, neuroscience hasn’t advanced far enough to be able to tell if other species have floating beliefs or not.
Edit: Then again, the question of whether floating beliefs are uniquely human is practically a floating belief itself.
What evidence is there for mice being unable to think about thinking? Due to communication issues, mice can’t tell us whether they can think about thinking or not.
What makes you think that the argument you just gave was generated by you for a reason, instead of for no reason at all?
Should one really be so certain about there being no higher-level entities? You said that simulating higher-level entities takes fewer computational resources, so perhaps our universe is a simulation and that the creators, in an effort to save computational resources, made the universe do computations on higher-level entities when no-one was looking at the “base” entities. Far-fetched, maybe, but not completely implausible.
Perhaps if we start observing too many lower-level entities, the world will run out of memory. What would that look like?
“Mystery, and the joy of finding out, is either a personal thing, or it doesn’t exist at all—and I prefer to say it’s personal.” I don’t see why this is the case. Couldn’t one take joy only in finding out what no one in the Solar System knows? That way, one can still have joy, but it still isn’t personal.
If I understand Solomonoff Induction correctly, for all n and p the sum of the probabilities of all the hypotheses of length n equals the sum of the probabilities of all the hypotheses of length p. If this is the case, what normalization constant could you possibly use to make all the probabilities sum to one? It seems there is none.
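To spell out the arithmetic I have in mind (a rough sketch, assuming there are about 2^n hypotheses of length n, each with prior weight 2^(-n)):

Sum over hypotheses of length n: 2^n * 2^(-n) = 1
Sum over all lengths: 1 + 1 + 1 + …, which diverges

Dividing a divergent total by any finite constant still leaves it divergent, so no normalization constant seems to do the job.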
Is there any justification that Solomonoff Induction is accurate, other than intuition?
If I understand correctly, Yudkowsky finds philosophical zombies implausible because they would require consciousness to have no causal influence on reality. He seems to believe this entails that if philosophical zombies existed, it would be purely coincidental that accurate discussions of consciousness are produced by those who are conscious, which is very improbable, and thus philosophical zombies are very implausible. This reasoning seems flawed, as discussing and thinking about consciousness could cause consciousness to exist, while that consciousness has no effect on anything else. For philosophical zombies to exist, thinking about consciousness would only need to bring about consciousness in certain substrates.
Unfortunately, Chaitin’s Omega is incomputable, but even if it weren’t, I don’t see how it would work as a normalizing constant. Chaitin’s Omega is a real number, there is an infinite number of hypotheses, and (IIRC) there is no real number r such that r multiplied by infinity equals one, so I don’t see how Chaitin’s Omega could possibly work as a normalizing constant.
“Of course” implies that the answer is obvious. Why is it obvious?
For many utility functions, I think donating to an organisation working on decreasing existential risk would be incredibly efficient, as: