DM me anything
(I apologize for being, or skirting too close to the edges of being, too political. I accept downvotes as the fair price and promise not to begrudge them.)
I have an observation that I want more widely appreciated by low-contextualizers (who may be high or low in decoupling as well; they are independent axes): insisting that conversations happen purely in terms of the bet-resolvable portion of reality, without an omniscient being to help out as bet arbiter, can be frame control.
Status quos contain self-validating reductions. People looking to score Pragmatic Paternalist status points can frame predictable bet outcomes as vindication of complacence with arbitrary, unreasonably and bullyishly exercised, often violent, vastly intrinsic-value-sacrificial power, on the basis of the weirdness, and the demonstrably inconvenient political ambitiousness, of fixing the situation.
They seem to think, out of entitlement to epistemic propriety, that there must be some amount of non-[philosophical-arguments]-based evidence that should discourage a person from trying to resolve vastly objectively evil situations that neither the laws of physics, nor any other [human-will]-independent laws of nature, require or forbid. They are mistaken.
If that sounds too much like an argument for communism, get over it; I love free markets and making Warren Buffett the Chairman of America is no priority of mine.
If it sounds too much like an argument for denying biological realities, get over it; I’m not asking for total equality, I’m just asking for moral competence on the part of institutions and individuals with respect to biological realities, and I detest censorship of all the typical victims, though I make exception for genuine infohazards.
If you think my standards are too high for humanity, were Benjamin Lay’s also too high? I think his efforts paid off even if our world is still not perfect; I would like to have a comparable effect, were I not occupied with learning statistics so that I can help align AI for this guilty species.
If you think factory farmed animals have things worse than children… Yes. But I am alienated by EA’s relative quietude; you may not see it this way, but so-called lip service is an invitation for privately conducted accountability negotiation, and I value that immensely as a foundation for change.
Engineering and gaming are just other words for understanding the constraints deeply enough to find the paths to desired (by the engineer) results.
The words you choose are political, with embedded intentional beliefs, not definitional and objective about the actions themselves.
Well, now that was out of left field! People don’t normally say that without having a broader disagreement at play. I suppose you have a more-objective reform-to-my-words prepared to offer me? My point about the letter of the law being more superficial than the spirit seems like a robust observation, and I think my choice of words accurately, impartially, and non-misleadingly preserves that observation;
until you have a specific argument against the objectivity, your response amounts to an ambiguously adversarially-worded request to imagine I was systematically wrong and report back my change of mind. I would like you to point my imagination in a promising direction; a direction that seems promising for producing a shift in belief.
Funny that you think gameability is closer to engineering; I had it in mind that exceptioncraft was closer. To my mind, gameability is more like rules-lawyering the letter of the law, whereas exceptioncraft relies on the spirit of the law. Syntactic vs semantic kinda situation.
Arbitrary incompleteness invites gameability, and arbitrary specificity invites exceptioncraft.
You can quote text using a greater-than sign (>) and a space.
Surely to be truthful is to be non-misleading...?
Read the linked post; this is not so. You can mislead with the truth: you can speak a wholly true collection of facts that nevertheless misleads people. If someone misleads using a fully true collection of facts, saying they spoke untruthfully is confusing. Truth does not always lead to good inferences; truth does not have to be convenient, as you say in OP. Truth can make you infer falsehoods.
Saying you put the value of truth above your value of morality on your list of values is analogous to saying you put your moral of truth above your moral of values; it’s like saying bananas are more fruity to you than fruits.
Where does non-misleadingness fall on your list of supposedly amoral values such as truth and morality? Is non-misleadingness higher than truth or lower?
The existence of natural abstractions is entirely compatible with the existence of language games. There are correct and incorrect ways to play language games.
Dialogue trees are the substrate of language games, and broader reality is the substrate of dialogue trees. Dialogue trees afford taking dialogical moves that are more or less arbitrary. A guy who goes around saying “claiming land for yourself and enforcing your claim is justice; Nozick is intelligent and his entitlement theory of justice vindicates my claim” will leave exact impressions on exact types of people, who will in turn respond in ways that are characteristic of themselves. Every branch of the dialogue tree will leave an audience with an impression of who is right, and some audiences have measurably better calibration.
Just because no one can draw perfect triangles doesn’t mean it’s nonsense to talk about such things.
In the Sequences, Yudkowsky has remarked over and over that it is futile to protest that you acted with propriety if you do not achieve the correct answer; read the twelfth virtue.
No; pointless for me to complain, to be clear.
The Principle of Nameless Heartsmarts: It is pointless to complain that I acted with propriety if in the end I was too dense to register any relevant consideration.
You can’t say values “aren’t objective” without some semantic sense of objectivity that they are failing to fulfill.
If you can communicate such a sense to me, I can give you values to match. That doesn’t mean your sense of objectivity will have been perfect and unarbitrary; perhaps I will want to reconcile with you about our different notions of objectivity.
Still, I’m damn going to try to be objectively good.
It just so happens that my values connote all of your values, minus the part about being culturally local; funny how that works.
If you explicitly tell me that your terminal values require culturally local connotations then I can infer you would have been equally happy with different values had you been born in a different time or place. I would like to think that my conscience is like that of Sejong the Great and Benjamin Lay: relatively less dependent on my culture’s sticks and carrots.
The dictionary defines arbitrary as:
based on random choice or personal whim, rather than any reason or system
The more considerate and reasoned your choice, the less random it is. If the truth is that your way of being considerate and systematic isn’t as good as it could have been, that truth is systematic and not magical. The reason for the non-maximal goodness of your policy is a reason you did not consider. The less considerate, the more arbitrary.
There is no real reason to choose either the left or right side of the road for driving, but it’s very useful to choose one of them.
Actually, there are real reasons to choose left or right when designing your policy: you can appeal to human psychology, which does not treat left and right exactly the same.
If one person says, “I don’t really need that many error codes; I don’t want to follow arbitrary choices, so I’ll send 44 instead of 404,” this creates a mess for everyone who expects the standard to be followed.
If the mess created for everyone else truly outweighs the goodness of choosing 44, then it is arbitrary to prefer 44. You cannot make true arbitrariness truly strategic just by calling it so; there are facts of the matter besides your stereotypes. People using the word “arbitrary” to refer to something that is based on greater consideration quality are wrong by your dictionary definition and the true definition as well.
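To make the error-code analogy concrete, here is a minimal sketch (the status code 44 and the handler function are hypothetical, purely for illustration) of why deviating from a shared standard creates a mess: a client written against standard HTTP status codes loses the intended meaning of a nonstandard code, even if the server’s choice was “simpler.”

```python
# Hypothetical illustration: a client written against the HTTP standard
# mishandles a server that "simplifies" 404 into a nonstandard 44.

def handle_response(status: int) -> str:
    """Client logic that assumes standard HTTP status codes."""
    if status == 200:
        return "ok"
    if status == 404:
        return "not found; show friendly error page"
    # Nonstandard codes fall through to a generic failure path,
    # losing the "not found" semantics the server intended.
    return f"unexpected status {status}; retry or alert on-call"

print(handle_response(404))  # standard code: handled gracefully
print(handle_response(44))   # nonstandard code: semantics are lost
```

The point of the sketch is only that the cost of deviation is borne by everyone downstream who coordinated on the standard, which is exactly the cost-benefit comparison the comment above appeals to.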
You are wrong in your conception of arbitrariness as being all-or-nothing; there are varying degrees, just as there are varying degrees of efficiency between chess players. A chess player, Bob, half as efficient as Kasparov, makes a lower-quality sum of considerations; not following Kasparov’s advice is arbitrary unless Bob can know somehow that he made better considerations in this case;
maybe Bob studied Kasparov’s biases carefully by attending to the common themes of his blunders, and the advice he’s receiving for this exact move looks a lot like a case where Kasparov would blunder. Perhaps in such a case Bob will be wrong and his disobedience will be arbitrary on net, but the disobedience in that case will be a lot less arbitrary than all his other opportunities to disobey Kasparov.
A policy that could be better — could be more good — is arbitrarily bad. In fact the phrase “arbitrarily bad” is redundant; you can just say “arbitrary.”
It is better to be predictably good than surprisingly bad, and it is better to be surprisingly good than predictably bad; that much will be obvious to everyone.
I think it is better to be surprisingly good than predictably good, and it is better to be predictably bad than surprisingly bad. EDIT: wait, I’m not sure that’s right even by deontology’s standards; as a general categorical imperative, if you can predict something will be bad, you should do something surprisingly good instead, even if the predictability of the badness supposedly makes it easier for others to handle. No amount of predictable badness is easier for others to handle than surprising goodness.
EDIT EDIT: I find the implication that we can only choose between predictable badness and surprising badness to be very rarely true, but when it is true then perhaps we should choose to be predictable. Inevitably, people with more intelligence will keep conflicting with people with less intelligence about this; less intelligent people will keep seeing situations as choices between predictable badness and surprising badness, and more intelligent people will keep seeing situations as choices between predictable badness and surprising goodness.
Focusing on predictability is a strategy for people who are trying to minimize their expectedly inevitable badness. Focusing on goodness is a strategy for people who are trying to secure their expectedly inevitable weirdness.
I don’t yet have any opinions about the arbitrariness of those rules. It is possible that I would disagree with you about the arbitrariness if I was more familiar.
Still, you claim that those rules are arbitrary and then defend them; what on Earth is the point of that? If you know they are arbitrary then you must know there are, in principle, less arbitrary policies available. Either you have a specific policy that you know is less arbitrary, in which case people should coordinate around that policy instead as a matter of objective fact, or you don’t know a specific less arbitrary policy, and in that case maybe you want people with better Strategic Goodness about those topics to come up with a better policy for you that people should coordinate around instead.
You can complain about the inconvenience of improving, sure. But the improvement will be highly convenient for some other people. There’s only so long you can complain about the inconvenience of improving before you’re a cost-benefit-dishonest asshole and also people start noticing that fact about you.
Either ‘fallacious’ is not the true problem or it is the true problem but the stereotypes about what is fallacious do not align with reality: A Unifying Theory in Defense of Logical Fallacies
People defend normal rules by saying they’re “not arbitrary.” But if they were arbitrariness minimizers the rules would certainly be different. Why should I tolerate an arbitrary level of arbitrariness when I can have minimal instead?
Your policy’s non-maximal arbitrariness is not an excuse for its remaining arbitrariness.
I do not suggest the absence of a policy if such an absence would be more arbitrary than the existing policy. All I want is a minimally arbitrary policy; that often implies replacing existing rules rather than simply doing away with them. Sometimes it does mean doing away with them.
If someone said “you’ll never persuade people like that” to me I’d probably just ask them what’s arbitrary about my position. If it’s arbitrary then they may have a point. If it’s not arbitrary then people will in fact be persuaded.
When I try to do virtue ethics, I find that all my virtues turn to Swiss cheese after a day’s worth of exception handling.
“Put simply: inconsistency between words and actions is no big deal. Why should your best estimate about good strategies be anchored to what you’re already doing? The anti-hypocrisy norm seems to implicitly assume we’re already perfect; it leaves no room for people who are in the process of trying to improve.”
— Abram Demski, Hufflepuff Cynicism on Hypocrisy
“With ‘unlimited power’ you have no need to crush your enemies. You have no moral defense if you treat your enemies with less than the utmost consideration.
With ‘unlimited power’ you cannot plead the necessity of monitoring or restraining others so that they do not rebel against you. If you do such a thing, you are simply a tyrant who enjoys power, and not a defender of the people.
Unlimited power removes a lot of moral defenses, really. You can’t say ‘But I had to.’ You can’t say ‘Well, I wanted to help, but I couldn’t.’ The only excuse for not helping is if you shouldn’t, which is harder to establish.
You cannot take refuge in the necessity of anything—that is the meaning of unlimited power.”
— Eliezer Yudkowsky, Not Taking Over the World
If AI copied all human body layouts down to the subatomic level, then re-engineered all human bodies so they were no longer recognizably human but rather something human-objectively superior, then gave all former humans the option to change back to their original forms, would this have been a good thing to do?
I think so!
It has been warned in ominous tones that “nothing human survives into the far future.”
I’m not sure human-objectivity permits humanity to remain mostly-recognizably human, but it does require that former humans have the freedom to change back if they wish, and I’m sure that many would, and that would satisfy the criterion of something human surviving into the far future.