Sort of a follow-up post here: http://lesswrong.com/r/discussion/lw/nqp/notes_on_the_safety_in_artificial_intelligence/
ignoranceprior
Archive.org copy (takes a few seconds to load)
[Link] NYU conference: Ethics of Artificial Intelligence (October 14-15)
It might be that downvote troll everyone keeps talking about. Eugine?
You need at least 10 karma points to vote (you currently have 2 points, according to your profile). Once you have 10 points you should be able to see the voting buttons. Incidentally, after a troll downvoted me from 12 to 4, I lost the ability to vote, and now I can no longer see the buttons.
UC Berkeley launches Center for Human-Compatible Artificial Intelligence
Has anyone here had success with the method of loci (memory palace)? I’ve seen it mentioned a few times on LW but I’m not sure where to start, or whether it’s worth investing time into.
Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority
A similar question is whether happiness and suffering are equally energy-efficient.
You can watch the archived videos here: http://livestream.com/nyu-tv/ethicsofAI
The Leverhulme Centre for the Future of Intelligence officially launches.
Yes, for cases of Gish gallop it would be impractical to refute every single point.
You could advertise this on /r/ControlProblem too.
David Chalmers on LessWrong and the rationalist community (from his reddit AMA)
What are good introductory books on chemistry and biology that do not require any background knowledge? I’m ashamed to say it, but I don’t really even have a high-school level knowledge of either subject, and what little I knew is now forgotten. My background in basic (classical) physics is much better, but I have forgotten some of that too.
I don’t know specifically. Where would be the best place to start?
Thank you very much!
Want to improve the wiki page on s-risk? I started it a few months ago but it could use some work.
The flip side of this idea is “cosmic rescue missions” (term coined by David Pearce), which refers to the hypothetical scenario in which human civilization helps reduce the suffering of sentient extraterrestrials (in the original context, it referred to the use of technology to abolish suffering). Of course, this is more relevant for simple animal-like aliens and less so for advanced civilizations, which would presumably have already either implemented a similar technology or decided to reject such technology. Brian Tomasik argues that cosmic rescue missions are unlikely.
Also, there’s an argument that humanity conquering alien civilizations would only be considered bad if you assume that either (1) we have non-universalist-consequentialist reasons to believe that preventing alien civilizations from existing is bad, or (2) the alien civilization would produce greater universalist-consequentialist value than human civilization would with the same resources. If (2) is the case, then humanity should actually be willing to sacrifice itself to let the aliens take over (as in the “utility monster” thought experiment), assuming that universalist consequentialism is true. If neither (1) nor (2) holds, then human civilization would have greater value than the ET civilization. Seth Baum’s paper on universalist ethics and alien encounters goes into greater detail.
Source, since you didn’t link it.