Sort of a follow-up post here: http://lesswrong.com/r/discussion/lw/nqp/notes_on_the_safety_in_artificial_intelligence/
Archive.org copy (takes a few seconds to load)
It might be that downvote troll everyone keeps talking about. Eugine?
You need at least 10 karma points to vote (you currently have 2 points, according to your profile). Once you have 10 points you should be able to see the voting buttons. Incidentally, after a troll downvoted me from 12 to 4, I lost the ability to vote, and now I can no longer see the buttons.
Has anyone here had success with the method of loci (memory palace)? I’ve seen it mentioned a few times on LW but I’m not sure where to start, or whether it’s worth investing time into.
A similar question is whether happiness and suffering are equally energy-efficient.
You can watch the archived videos here: http://livestream.com/nyu-tv/ethicsofAI
Yes, for cases of Gish gallop it would be impractical to refute every single point.
You could advertise this on /r/ControlProblem too.
What are good introductory books on chemistry and biology that do not require any background knowledge? I’m ashamed to say it, but I don’t really even have a high-school level knowledge of either subject, and what little I knew is now forgotten. My background in basic (classical) physics is much better, but I have forgotten some of that too.
I don’t know specifically. Where would be the best place to start?
Thank you very much!
Want to improve the wiki page on s-risk? I started it a few months ago but it could use some work.
The flip side of this idea is “cosmic rescue missions” (a term coined by David Pearce), which refers to the hypothetical scenario in which human civilization helps to reduce the suffering of sentient extraterrestrials (in the original context, it referred to the use of technology to abolish suffering). Of course, this is more relevant for simple animal-like aliens and less so for advanced civilizations, which would presumably have either already implemented such technology or decided to reject it. Brian Tomasik argues that cosmic rescue missions are unlikely.
Also, there’s an argument that humanity conquering alien civs would only be considered bad if you assume that either (1) we have non-universalist-consequentialist reasons to believe that preventing alien civilizations from existing is bad, or (2) the alien civilization would produce greater universalist-consequentialist value than human civilizations with the same resources. If (2) is the case, then humanity should actually be willing to sacrifice itself to let the aliens take over (as in the “utility monster” thought experiment), assuming that universalist consequentialism is true. If neither (1) nor (2) holds, then human civilization would have greater value than ET civilization. Seth Baum’s paper on universalist ethics and alien encounters goes into greater detail.
You might like this better:
Oh, in those cases, the considerations I mentioned don’t apply. But I still thought they were worth mentioning.
In Star Trek, the Federation has a “Prime Directive” against interfering with the development of alien civilizations.
And the concept is much older than that. The 2011 Felicifia post “A few dystopic future scenarios” by Brian Tomasik outlined many of the same considerations that FRI works on today (suffering simulations, etc.), and of course Brian has been blogging about risks of astronomical suffering since then. FRI itself was founded in 2013.
What would count as “LessWrong-esque”?
Some people in the EA community have already written a bit about this.
I think this is the kind of thing Mike Johnson (/user/johnsonmx) and Andres Gomez Emilsson (/user/algekalipso) of the Qualia Research Institute are interested in, though they probably take a different approach. See:
Effective Altruism, and building a better QALY
The Foundational Research Institute also takes an interest in the issue, but they tend to advocate an eliminativist, subjectivist view according to which there is no way to objectively determine which beings are conscious, because consciousness itself is an essentially contested concept. (I don’t know if everyone at FRI agrees with that, but at least a few, including Brian Tomasik, do.) FRI has also done some work on measuring happiness and suffering.
Animal Charity Evaluators announced in 2016 that they were starting a deep investigation of animal sentience. I don’t know if they have done anything since then.
Luke Muehlhauser (/u/lukeprog) wrote an extensive report on consciousness for the Open Philanthropy Project. He has also indicated an interest in further exploring the area of sentience and moral weight. Since phenomenal consciousness is necessary to experience either happiness or suffering, this may fall under the same umbrella as the above research. Lukeprog’s LW posts on affective neuroscience are relevant as well (as are a couple by Yvain).
Source, since you didn’t link it.