I don’t actually know that separate agree/disagree and low/high quality buttons will be all that helpful. I don’t know that I personally can tell the difference very well.
Hardly any potential catastrophes actually occur. If you only plan for the ones that do occur (say, by waiting until they happen, or by flawlessly predicting the future), then you save a lot of mental effort.
Also, consider how the difference between a potential and an actual catastrophe affects how willing you will be to make a desperate effort to find the best solution.
I don’t know about that, denis. The first part at least is a cute take on the “shut up and multiply” principle.
By my math it should be impossible to faithfully serve your overt purpose while making any moves to further your ulterior goal. It has been said that you can only maximize one variable; if you consider factor A when making your choices, you will not fully optimize for factor B.
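A toy sketch of that "one variable" point, with entirely made-up numbers: the moment your choice rule gives factor A any weight at all, it can be pulled toward an option that is strictly worse on factor B.

```python
# Hypothetical scores for three options on two factors:
# A is the ulterior goal, B is the overt purpose.
options = {
    "option1": {"A": 0.9, "B": 0.7},
    "option2": {"A": 0.1, "B": 1.0},
    "option3": {"A": 0.5, "B": 0.9},
}

# A pure B-maximizer picks the option with the highest B, full stop.
pure = max(options, key=lambda o: options[o]["B"])

# Give A any weight at all, and the choice rule can be dragged
# to an option that is strictly worse on B.
blended = max(options, key=lambda o: 0.3 * options[o]["A"] + 0.7 * options[o]["B"])

print(pure)     # option2 (B = 1.0)
print(blended)  # option3 (blended score 0.78 beats option2's 0.73, but B = 0.9 < 1.0)
```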
So I guess Lord Administrator Akon remains anesthetized until the sun roasts him to death? I can’t decide if that’s tragic or merciful, that he never found out how the story ended.
Anonymous: The blog is shutting down anyway, or at least receding to a diminished state. The threat of death holds no power over a suicidal man...
Personally, I side with the Hamburgereaters. It’s just that the Babyeaters are at the very least sympathetic; I can see viewing them as people. As they’ve said, the Babyeaters even make art!
I agree with the President of Huygens; the Babyeaters seem much nicer than the Lotuseaters. Maybe that’s just because they don’t physically have the ability to impose their values on us, though.
“Normal” End? I don’t know what sort of visual novels you’ve been reading, but it’s rare to see a Bad End worse than the death of humanity.
Why do you consider a possible AI person’s feelings morally relevant? It seems like you’re making an unjustified leap of faith from “is sentient” to “matters”. I would be a bit surprised to learn, for example, that pigs do not have subjective experience, but I go ahead and eat pork anyway, because I don’t care about slaughtering pigs and I don’t think it’s right to care about slaughtering pigs. I would be a little put off by the prospect of slaughtering humans for their meat, though. What makes you instinctively put your AI in the “human” category rather than the “pig” category?
“It’s not like we’re born seeing little ‘human’ tags hovering over objects, with high priority attached.”
Aren’t we though? I am not a cognitive scientist, but I was under the impression that recognizing people specifically was basically hardwired into the human brain.
Putting randomness in your algorithms is only useful when there are second-order effects: when reality responds to the content of your algorithm in some way other than your executing it. We see this in Rock-Paper-Scissors, where you use randomness to keep your opponent from predicting your moves by learning your algorithm.
Barring these second-order effects, it should be plain that randomness can’t be the best strategy, or at least that there’s a non-random strategy that’s just as good. Adding randomness to your algorithm spreads its behavior out over a distribution, and at least one point in that distribution must have an expected value at least as high as the distribution’s average.
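A minimal sketch of that averaging argument, with hypothetical payoff numbers: a randomized algorithm’s expected value is just a probability-weighted average over the deterministic strategies it mixes, so at least one strategy in the mix must do at least as well as the mixture.

```python
# Hypothetical expected values for three deterministic strategies
# against a fixed, non-adversarial environment.
expected_value = {"s1": 3.0, "s2": 5.0, "s3": 4.0}

# A randomized algorithm is a probability mixture over those strategies.
mixture = {"s1": 0.5, "s2": 0.2, "s3": 0.3}

# The mixture's expected value is the probability-weighted average...
mixture_value = sum(p * expected_value[s] for s, p in mixture.items())

# ...so it can never exceed the best deterministic strategy it mixes over.
best = max(expected_value[s] for s in mixture)

print(mixture_value)  # 0.5*3 + 0.2*5 + 0.3*4 = 3.7
print(best)           # 5.0
assert best >= mixture_value  # holds for any mixture, by the averaging argument
```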
I don’t know that it’s that impressive. If we launch a pinball in a pinball machine, we may have a devil of a time calculating its path off all the bumpers, but we know that the pinball is going to wind up falling into the hole in the middle. Is gravity really such a genius?
So… do you not actually believe in your injunction to “shut up and multiply”? Because for some time now you seem to have been arguing that we should do what feels right rather than trying to figure out what is right.
If we see that adhering to ethics in the past has wound up providing us with utility, the correct course of action is not to throw out the idea of maximizing our utility, but rather to use adherence to ethics as an integral part of our utility maximization strategy.
Isn’t the scientific method a servant of the Light Side, even if it is occasionally a little misguided?
Ian C: Where on earth do you live that people keep what they earn and there’s no public charity?
Richard: Humans are pretty cool, I’m down.
It is in any case a good general heuristic to never do anything that people would still be upset about twenty years later.
It’s amazing how many lies go undetected because people simply don’t care. I can’t tell a lie to fool God, but I can certainly achieve my aims by telling even blatant, obvious lies to human beings, who rarely bother trying to sort out the lies and, when they do, aren’t very good at it.
It sounds to me like you’re overreaching for a pragmatic reason not to lie, when you either need to admit that honesty is an end in itself or admit that lies are useful.
The thing is, an AI doesn’t have to use mental tricks to compensate for known errors in its reasoning, it can just correct those errors. An AI never winds up in the position of having to strive to defeat its own purposes.