I guess that makes sense, but very rarely is there a post that appeals to EVERYONE. A better system would be for people to be able to seek out the content that interests them. If something doesn’t interest you, then you move on.
Double
Those are interesting questions! Perhaps you should make your own post instead of using mine to get more of an audience.
Expressing disapproval of both candidates by, e.g., voting for Harambe makes sense, but I think voting for bad policies is a bad move: “obvious” things aren’t obvious to many people, and voting for bad candidates (as opposed to joke candidates) makes their policies more mainstream and more likely to be adopted by candidates who actually have a chance of winning.
Why do you think my post is being shot down?
Florida Elections
AI safety research has been groping in the dark, and half-baked suggestions for new research directions are valuable. It isn’t as though we’ve made half of a safe AI. We haven’t started, and all we have are ideas.
I think a problem with my solution is how the AI could “understand” the behaviors and thought-processes of a “more powerful agent.” If you know what someone smarter than you would think, then you are simply that smart. If we abstract the specific more-powerful-agent’s thoughts away, then we are left with Kantian ethics, and we are back where we started, trying to put ethics/morals in the AI.
It’s a bit rude to call my idea so stupid that I must not have thought about it for more than five minutes, but thanks for your advice anyways. It is good advice.
We Need a Consolidated List of Bad AI Alignment Solutions
The AI Box:
A common idea is for the AI to be in a “box” where it can only interact with the world by talking to a human. This doesn’t work for a few reasons:
The AI would be able to convince the human to let it out.
The human wouldn’t know the consequences of their actions as well as the AI.
Removing capabilities from the AI is not a good plan because the point is to create a useful AI. Importantly, the AI should be able to stop all dangerous AI from being created.
Thanks. That fits the first three criteria well, but there is still controversy about many of the results, so maybe not the fourth one yet.
This sentence is a HUGE RED FLAG: “it shattered my illusion that I mostly avoid thinking about class signals, and instead convinced me that pretty much everything I do from waking up in the morning to going to bed at night is a class signal.”
If signaling can explain everything, then it is in the same category as Freudian psychoanalysis—unfalsifiable and therefore useless.
The idea that signaling explains everything leads to the idea that “people who say that they don’t bother with signaling and don’t use the symbols available to them are REALLY just signaling that they are the kind of person who can afford to not care about signaling.”
This is not the conclusion of a respectable theory; this is mental gymnastics. Having a theory that can explain anything is identical to having no clue.
I’ll admit that this post is the extent of my knowledge of signaling, so others might have fleshed out the theory to the point that it can make predictions, but this essay was too much representativeness heuristic and not enough evidence.
Come back! I don’t know what you are referencing!
Thanks for the response. Those are fair reasons. I should have contributed more.
The LessWrong community is big, and some of its members are in Florida. If anyone had interesting things to share about the election, I wanted to encourage them to do so.