Downvoting is temporarily disabled! I’m very excited about this change because in the last few weeks I’ve seen some good conversations deleted by someone exploiting a sockpuppet glitch. Besides, I have always preferred commenting to downvoting.
scarcegreengrass
DeepMind article: AI Safety Gridworlds
I agree. In addition to the numerous good ideas suggested in this tree, we could also try the short term solution of turning off all downvoting for the next 3 months. This might well increase population.
(Or similar variants like turning off ‘comment score below threshold’ hiding, etc)
Epistemics: Yes, it is sound. Not because of its claims (they read more like opinions to me), but because it is appropriately charitable to those who disagree with Paul, and tries hard to open up avenues of mutual understanding.
Valuable: Yes. It offers new third-option paradigms that bring clarity to people with different views. Very creative, with good suggestions.
Should it be in the Best list?: No. It is from the middle of a conversation, and would be difficult to understand if you haven’t read a lot about the ‘Foom debate’.
Improved: The same concepts rewritten for a less-familiar audience would be valuable. Or at least with links to some of the background (definitions of AGI, detailed examples of what fast takeoff might look like and arguments for its plausibility).
Followup: More posts thoughtfully describing positions for and against, etc. Presumably these exist, but i personally have not read much of this discussion in the 2018-2019 era.
I found this uncomfortable and unpleasant to read, but i’m nevertheless glad i read it. Thanks for posting.
I took the survey. It’s probably my favorite survey of each year :) Thanks.
Mysterious Go Master Blitzes Competition, Rattles Game Community
FYI: Here is the RSS link
Completion Estimates
Barack Obama’s opinions on near-future AI [Fixed]
I have similar uncertainty about the large-scale benefits of lesswrong.com, but on smaller scales i do think the site was very valuable. I’ve never seen a discussion forum as polite, detailed, charitable, & rigorous as the old Less Wrong.
I’m a left-libertarian and i mostly disagree with this comment, but i upvoted it because it’s very clear and respectful.
I agree that politics discussions are better suited for other rationality-sphere sites, not LW.
Excellent point. We essentially have 4 quadrants of computational systems:
Looks nonhuman, internally nonhuman—All traditional software is in this category
Looks nonhuman, internally humanoid—Future minds that are at risk for abuse (IMO)
Looks humanoid, internally nonhuman—Not an ethical concern, but people are likely to make wrong judgments about such programs.
Looks humanoid, internally humanoid—Humans. The blogger claims LaMDA also falls into this category.
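The 2x2 taxonomy above could be sketched as a small lookup table. This is just an illustration of the four quadrants; the labels and example strings are my own paraphrases, not anything from the original comment.

```python
from itertools import product

# Hypothetical labels paraphrasing the four quadrants described above.
examples = {
    ("looks nonhuman", "internally nonhuman"): "traditional software",
    ("looks nonhuman", "internally humanoid"): "future minds at risk of abuse",
    ("looks humanoid", "internally nonhuman"): "programs people misjudge",
    ("looks humanoid", "internally humanoid"): "humans (and, per the blogger, LaMDA)",
}

# Enumerate every combination of appearance and internals.
for quadrant in product(["looks nonhuman", "looks humanoid"],
                        ["internally nonhuman", "internally humanoid"]):
    print(quadrant, "->", examples[quadrant])
```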
Great post. I encountered many new ideas here.
One point confuses me. Maybe I’m missing something. Once the consequentialists in a simulation are contemplating the possibility of simulation, how would they arrive at any useful strategy? They can manipulate the locations that are likely to be the output/measurement of the simulation, but manipulate to what values? They know basically nothing about how the input will be interpreted, what question the simulator is asking, or what universe is doing the simulation. Since their universe is very simple, presumably many simulators are running identical copies of them, with different manipulation strategies being appropriate for each. My understanding of this sounds less like malign and more like blindly mischievous.
TLDR How do the consequentialists guess which direction to bias the output towards?
This is a little nitpicky, but i feel compelled to point out that the brain in the ‘human safety’ example doesn’t have to run for a billion years consecutively. If the goal is to provide consistent moral guidance, the brain can set things up so that it stores a canonical copy of itself in long-term storage, runs for 30 days, then hands off control to another version of itself, loaded from the canonical copy. Every 30 days, control passes to a fresh instance of that canonical version. The same scheme is possible for a group of people.
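The rotation scheme can be sketched in a few lines. This is a toy model under my own assumptions: the `Advisor` class and its `drift` counter are hypothetical stand-ins for the simulated person and whatever value drift accumulates while running; nothing here comes from the original post.

```python
import copy

class Advisor:
    """Hypothetical stand-in for the simulated brain providing guidance."""
    def __init__(self):
        self.drift = 0  # value drift accumulated while running

    def run_for_days(self, days):
        self.drift += days  # running for longer accumulates more drift
        return f"guidance (drift={self.drift})"

def advise(total_days, term_days=30):
    """Every term, load a fresh instance from the canonical copy."""
    canonical = Advisor()  # canonical copy kept in long-term storage
    outputs = []
    for _ in range(total_days // term_days):
        instance = copy.deepcopy(canonical)  # fresh copy each term
        outputs.append(instance.run_for_days(term_days))
        # the instance is discarded; the canonical copy never runs,
        # so no single copy ever exceeds term_days of drift
    return outputs
```

Each hand-off starts from the pristine canonical copy, so drift is bounded by the term length rather than by the total runtime.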
But this is a nitpick, because i agree that there are probably weird situations in the universe where even the wisest human groups would choose bad outcomes given absolute power for a short time.
Favorite highlight:
‘Likewise, great literature is typically an integrated, multi-dimensional depiction. While there is a great deal of compression, the author is still trying to report how things might really have happened, to satisfy their own sense of artistic taste for plausibility or verisimilitude. Thus, we should expect that great literature is often an honest, highly informative account of everything except what the author meant to put into it.’
Fascinating paper!
I found Sandberg’s ‘popular summary’ of this paper useful too: http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/
This is not news, but i would use this site a lot more if there was a little less downvoting. Is the bottleneck here programmers or coordinators?
Well, it sounds like their dietary requirements would prevent that. Of course, if it’s possible for someone to design a symbiotic system that outputs those four amino acids, then there could be trouble. Hopefully that’s not feasible.
((past-tense take) i survey)