The weirdest part about “an optimization demon” is the claim “this is our measure of good (outcomes), but don’t push too hard towards it or you’ll get something bad”, when intuitively something that is optimizing at our expense would have a harder time meeting stricter constraints.
The reasoning behind it is that a) we ourselves and b) everything we call brains are the result of “pushing too hard”. It’s not immediately clear how a “semi-optimization demon” would come to be, or what that would even mean.
It’s also not clear when and how you’d run into the issue, aside from running a genetic algorithm for ages.
The title and the question seem fairly different.
Only 20% of LessWrong participants active enough to fill out a survey have ever written a post.
If you accidentally post a draft, which is very easy to do, you lose a lot of karma.
OpenAI’s “safety” move (not releasing the model) reduces the scrutiny it can receive, which makes its impact on forecasts conditional on how good you think a model you haven’t seen is.
There’s a “non-identity problem”-type question about whether we can harm future agents by setting up the memetic environment such that they end up having less easily satisfiable goals, compared to an alternative where they’d find themselves in larger agreement and therefore with more easily satisfiable goals.
I hadn’t heard of that before; I’m glad you mentioned it. Your comment (as a whole) was both interesting/insightful/etc. and long, and I’d be interested in reading any future posts you make.
I think the least repugnant aspect of a perfect moral theory* to sacrifice might be simplicity, the way you mean it. (Though intuitively, a lot of conditions would have to be met for that to seem a reasonable move to make, personally.)
I’m not clear on how “moral undefinability” would look different from “defining morality is hard”.
*General moral theory.
(Btw, the title a) made me think of the claim that gratitude journaling works and b) made me wonder whether CBT is associated with that practice.)
There is no limit to how many different posts and comments one can do this to. In this sense there is an unlimited supply of karma to be handed out.
So infinite posts * 1 sock puppet = infinite karma.
One cannot get high karma by producing a small amount of content that a small number of users like a lot.
Aside from the fact that both posts and comments can be upvoted, there’s double upvoting (though I’m not sure how its weight is calculated from one’s karma), so:
One can get high karma from a small amount of content that a small number of sufficiently high-karma users double-upvote. (Though sequence length may be rewarded more than brevity, and while there may be a loose correlation (a longer sequence requires more time), we might suppose there is a correlation going the other way: more time is required to make what would otherwise be longer posts shorter, and the same may be said of sequences.)
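For illustration, here is a minimal sketch of how karma-weighted strong upvotes might make that possible; the threshold table, weights, and function names are my own assumptions, not LessWrong’s actual formula.

```python
# Hypothetical sketch of karma-weighted voting -- NOT LessWrong's actual formula.
# Assumption: a normal upvote is worth 1, and a strong ("double") upvote is
# worth more the more karma the voter has.

STRONG_VOTE_THRESHOLDS = [  # (minimum voter karma, strong-vote weight) -- assumed values
    (0, 2),
    (100, 3),
    (1000, 5),
    (10000, 8),
]

def strong_vote_weight(voter_karma: int) -> int:
    """Return the assumed weight of a strong upvote for a voter with this much karma."""
    weight = 1
    for min_karma, w in STRONG_VOTE_THRESHOLDS:
        if voter_karma >= min_karma:
            weight = w
    return weight

def post_karma(votes: list[tuple[int, bool]]) -> int:
    """Sum vote weights for one post; votes are (voter_karma, is_strong) pairs."""
    return sum(strong_vote_weight(k) if strong else 1 for k, strong in votes)

# Two strong upvotes from high-karma users can outscore ten ordinary upvotes:
print(post_karma([(12000, True), (5000, True)]))  # 8 + 5 = 13
print(post_karma([(10, False)] * 10))             # 10
```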
How was this posted in 2019, with comments from 2018?
OTTOMH—Off the top of my head
I think there’s a lot more to be gained from using the necessary technology and resources in other ways. Sure, you could try to prevent people from robbing banks by nuking banks if someone tries to rob them—but it’s a serious waste of resources.
A more important question is: what is the rate of progress? How fast is the world getting better? (With the answer being a negative number if it is getting worse.)
If identity shifts are good, can an identity shift to an unchanging state be bad?
Suppose we are open to ideas for a reason.* Then we would need a greater reason still to not be so.
*This practice is associated with an idea about ideas, and might be applied only to lesser ideas. (Or applied to a degree inversely proportional to idea level. For instance, to prove that all actions are equally useful requires much more evidence than to prove that one action is more than/less than/equal in value to another.)
This comment seems identical to one here (by the same person).
No, both were comments: one was on a question, and the one I linked to was on an answer. As the author retracted the one above, it seemed reasonable they might wish to do the same with the duplicate.
As they say, an ounce of prevention is worth a pound of cure.
This comment seems identical to another one here. (I am also curious about what caused this change in belief.)
What makes “conversion” different from “deconversion”? (Aside from a Life of Pi scenario where someone is converted to 3 religions.)