Just for what it’s worth as a very belated reply—I was raised in a family of gentle and convenient religion, and would strongly support such a law, as well as outlawing advertisement targeted at children.
FeepingCreature
The second reason is invalid unless the actor is self-deluding: a smart actor who faces being put out of work would silently adopt an SPR as his decision-making system without admitting to it. Since the superiority of SPRs persists in many fields, either the relevant actors are consistently not smart, performance is not a significant criterion for their success, or they're self-deluding, i.e. overrating their own judgment, as the poster stated. [edit] I'd guess a combination of the last two.
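For concreteness, a statistical prediction rule is just a fixed, transparent formula applied uniformly to every case, as opposed to case-by-case expert judgment. A minimal sketch (the features, weights, and threshold here are hypothetical illustrations, not any particular published SPR):

```python
def spr_score(features, weights):
    """Score a case as a fixed weighted sum of numeric features."""
    return sum(w * f for w, f in zip(weights, features))

def spr_decide(features, weights, threshold):
    """Accept/reject decision: compare the score to a fixed cutoff."""
    return spr_score(features, weights) >= threshold

# Hypothetical admissions-style example with unit weights:
weights = [1.0, 1.0, 1.0]
print(spr_decide([0.8, 0.9, 0.7], weights, 2.0))  # True  (score 2.4)
print(spr_decide([0.5, 0.4, 0.6], weights, 2.0))  # False (score 1.5)
```

An actor could apply such a rule privately and present the output as his own judgment, which is exactly why the "put out of work" objection doesn't bind a smart actor.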
Similar thing happens on reddit. I think it’s widespread across vote-based sites. Any counterexamples?
The eminent philosophers of Monty Python said it best of all:
[Embedded video no longer available; the uploader closed their YouTube account.]
Deep, man.
You should try googling '"evolutionary psychology" homosexuality'.
Very true.
However, my point was more that this doesn’t exactly conflict with evopsych either.
Of course, the question of whether evopsych actually makes any useful predictions is still open. :)
Voldemort is the assumed name of the main antagonist of the popular fantasy book series Harry Potter.
Eliezer Yudkowsky, one of the founders and main writers of lesswrong.com, also writes a Harry Potter fanfiction called Harry Potter and the Methods of Rationality (HPMOR).
Because of this, several accounts on this forum are references to Harry Potter characters.
[edit] Vol de mort is also French for "flight of death".
The interesting question is: "Do universes exist with a higher computational capacity than ours? How much higher? Orders of magnitude higher? Degrees of infinity higher? Arbitrarily higher?"
I’d guess because pain has to be immediate to be of value, so the more processing you heap on it the less useful it becomes; and species tend to evolve pain before they evolve utility-judging systems.
Please define “a lot”; it’s subjective.
The Three Laws are most decidedly not safe, and in fact should be discarded and discredited. The First Law in particular, "do not allow through inaction a human to come to harm", can be trivially interpreted in various bad-end ways. Read The Metamorphosis of Prime Intellect for a fictional sample.
We have dealt with TotC (the tragedy of the commons) by imposing costs larger than the benefits that could be derived from abusing the commons.
The benefits an AI could derive from abusing the commons are possibly unlimited.
Talk about Streisand Effect.
Regardless, a million is a constant factor. Sufficient self-reinforcing development (as is kind of the point of seed AI) can outstrip any such factor. And the more self-reinforced the development of our AI pool becomes, the less relevant are “mere” constant factors.
I’m not saying it won’t work, but I wouldn’t like to bet on it.
Semi-rare poster. I was almost two hundred years off. I think it might be the Latin title that throws people.
Belatedly.
“For one thing, it has to make those objectives clear to itself. If its objectives are only implicit in the structure of a complex circuit or program, then future modifications are unlikely to preserve them. Systems will therefore be motivated”
Hold on. Motivated by what? If its objectives are only implicit in the structure, then why would those objectives include their own preservation?
I already exist. I prefer to adopt a ruleset that will favor me continuing to exist. Adopting a theory that does not put disutility on me being replaced with a different human would be very disingenuous of me. Advocating the creation of an authority that does not put disutility on me being replaced with a different human would also be disingenuous.
For spreading your moral theory, you need the support of people who live, not people who may live. Thus, your moral theory must favor their interests.
[edit] Is this metautilitarianism?
Which is not necessarily a bad choice for you!
Very few people are trying to genuinely choose the most good for the most people; they're trying to improve their group status by signalling social supportiveness. There's no point to that if your group will be replaced; even suicide bombers require the promise of life after death or rewards for their family.
True in the immediate sense, but I disagree in the global sense that we should encourage face-saving on LW, since doing so will IMO penalize truth-seeking in general. Scoring points for winning the debate is a valid and important mechanism for reinforcing behaviors that lead to debate-winning, and should be allowed in situations where debate-winning correlates to truth-establishment in general, not just for the arguing parties.
What if you construct more than one cake, then arrange distribution so that everybody gets a bigger piece than somebody else on at least one cake? Then, because of the human tendency to emphasize what makes them feel good, people notice their privileged cake(s) and disregard their loss cake(s).
A real-world equivalent would be the religious concept of poorness as a virtue.