AIS student, self-proclaimed aspiring rationalist, very fond of game theory.
“The only good description is a self-referential description, just like this one.”
momom2
I strongly relate to that, although I have the opposite issue.
As someone with a very easy aural imagination, I can very easily imagine tunes, and occasionally harmonies with a bit of effort.
There is a very clear distinction between what I constantly imagine and the perfectly clear music that I sometimes hear in my head. I used to try doing that a lot, but whenever I noticed the imagination shifting to actual music, I’d lose my focus, which is a shame because I’ve tried reproducing this experience purposefully several times to no avail.
Hum, in this analogy, it seems likely that she could answer:
“Your analogy is correct in that these groups are no more similar to mine than a monkey is similar to a human.
Your analogy is incorrect in that my group is fundamentally different in origin from the other groups. The word/intent [depending on the confession] of God does not change.”
By “the world is moral” I mean: “sometimes, morality is applicable”. Some things are good (though perhaps questionably so) and some are evil (likewise). Perhaps not all things are categorizable into good/evil categories (for example, ice cream is delicious), and these categories might shift over time, space and culture (for example, slavery is bad), but there is some sort of measure.
Most people agree that murder is evil. When saying “Murder is evil.” and trying to dereference my pointers, I come up with something like “I should not murder.” “Someone should not murder.” “Murder is something that shouldn’t be.” “Murder is something I should ensure does not happen.” “Murder is something I don’t want to know about.”
I try to keep all my pointers towards testable possibilities. None of these correspond to the essence of evil as I feel it. One could argue in this case that my concept of morality is pointless/flawed, but I have trouble coming to terms with that.
I don’t know why murder is bad. I can’t judge whether murder is bad by myself because I’m not knowledgeable enough in morality (which is the main reason I wrote this post), but even if I were an expert it’s doubtful I would be able to tell, seeing how experts disagree on the specifics/border cases of the evilness of murder. I’m not even sure the reasons are accessible to the human mind!
I can know that murder is bad pretty clearly however, because of feelings, rewards/punishments, education, reason, etc...
There are compelling arguments for other actions that don’t involve God’s edicts. For example, I have compelling reasons to study hard, although I don’t think it’s particularly moral. But when things are moral, resolving them correctly immediately becomes that much more important: preventing murder is typically more important than studying hard. There is probably some degree of morality involved in studying (I must do my best? I must give myself the ability to be better?) but not as much as in a matter of life and death.
Indeed, that is what I meant. I think the amorality of the world is even less likely than the nonexistence of God, which is why I figured I should look into that first, but feel free to explain how the world could be amoral.
Thanks for the greeting! Since theism is by far the most obvious discrepancy between my opinions and the community’s, I figured I should clear that up as soon as possible.
When I don’t think specifically about it, I just don’t have opinions. I usually feel that morality is a thing, but most of the time I don’t think about what morality is.
Likewise, I’m no expert on Christian dogma. I weakly feel that I must not take Genesis literally, and I strongly feel that talking snakes don’t exist. In general, I weakly feel [whatever the Church says about it], just as, if you ask me about AI, I’ll answer [whatever Yudkowsky wrote]. All in all, the discussion so far has made it pretty clear that I should taboo the word “morality” in my upcoming post...
I’d say morality is something the world has? In the context where I used it above, that’s what I meant by “a moral world”: morality is taken as a property of the world that pervades its components, be they actions (murdering is bad), objects (murder is bad) or people (murderers are bad). These three sentences make sense to me, but they don’t designate the same kind of bad.
Although I have no right to claim that someone is irredeemably evil (Hitler might have done something right), I could condemn a specific action (Chauvin shouldn’t have killed Floyd). I’m not sure about groups of people. I guess you could judge an ideology and then judge the group of people who follow it? That does not sound very helpful, because it’s a weak judgement of every individual, which raises the question of their individual morality.
Although it could probably be used as a useful judging heuristic (this group of people is good, so its members are likely to be good), I don’t see how to reach this conclusion without evaluating many members.
When I said that about feelings, I meant that they are my everyday tool to distinguish good from bad, just like my everyday tool to evaluate the correctness of a mathematical proof is “is the result coherent and interesting?”. They are merely an indicator, and not what I would use if presented with a specific, important case.
They’re correlated with morality (which is why I use them), but not perfectly. I also know murder is bad because I was taught so, because many people think murder is bad, etc. It’s all evidence, strong or weak.
In no way does it tell me why something is moral, although if I try to go up the reasoning chain I might find something interesting.
In this case, I found that my reasoning stopped at “God said so.” and I was unsatisfied, which is why I sought help.
Sorry, I thought it was obvious.
In the latter case, morality is what God says it is.
Of course, there is no arguing that morality is not what God says it is, because then it just becomes a matter of semantics and correctly tabooing our words, which is why I insisted on the gut level:
I cannot imagine anything that makes me feel the world is as it is if God does not exist. Philosophical arguments, explanations that the world is not moral, better definitions of morality: they’re all nice, but in the end, you won’t convince me that a monkey birthed a human.
If I am making a mistake in believing that you believe that the monkey birthed a human, I want to know what that mistake is in order to learn about evolution.
Oops, my wording was misleading.
I know about evolution, and I know how the human species came to be according to evolution. Since evolution argues that monkeys birth humans about as much as Catholicism argues that Amalekites should die, I meant that I believed I was potentially making a huge mistake about morality, the same way I would be making a huge mistake by thinking evolution claims monkeys birth humans.
If I had made such a blatant mistake, I hoped someone could point it out for me. (Please don’t argue about whether monkeys birth humans. I am aware that in some sense they do and in some sense they don’t. That’s really not the point.)
It’s regrettable that so little of the conversation in the comments was about the post itself, because 1- I have found such discussions under other posts to be very insightful and 2- I was disappointed that so few comments were useful.
As someone else mentioned, friends arguing about where to have dinner often eats up a significant amount of time. Based on my personal experience, people will often feel grateful if you take responsibility, because they care less about the meal than about not infringing on others’ meal preferences. Thankfully, if they are respectful enough to do that, they will also probably be open to fixing the issue.
More generally, to supplement Yudkowsky’s argument that people should be more willing to stand up, here are a couple of ways to help you do it:
- Make a quick estimate of expected utility: little to no ill consequences if you fail, and a big payoff if you succeed.
- The payoff is not only in actually achieving something or in acquiring social status, but also in knowing your reasoning was stronger than your instinct.
- It will benefit other people greatly. Do it for them!
- It will make it easier to take this kind of decision in the future.
Remember how, in another post, you argued a rationalist should be able to recover his knowledge if it were taken away? I believe this is a similar approach to the one taken by these hypothetical Jesuits. In fact, I see two possible ways to explain such behavior: one could ask a physics student whether Newtonian physics isn’t the absolute best, if one expected the student to discover relativity by themselves. Likewise, I guess the hypothetical Jesuits could want two separate benefits out of this:
- Ensuring the student is savant/fanatic enough to join the tribe.
- Teaching the student to discover core beliefs of their faith by themselves, both reinforcing these beliefs and assuring their correctness.
Consider RYY: your best probabilistic guess of your next move. Assuming you know yourself perfectly (or at least well enough to predict your own moves reliably), it will turn out that RYY is very similar to you (RYY is not deterministic if you are not, but then it is still an opponent as skilled as you for all chess-related purposes).
Then, since you have shown that you win against RYK, I can guess that RYY will reliably win against RYK, which I find very surprising.
Why is emulating a stronger player less efficient than emulating yourself? (Formulated like that, it sounds more surprising to me than the way you put it.)
The explanation I see is that you don’t know Kasparov well enough to emulate him correctly (which you already pointed out), whereas you know yourself very well. Then the question that comes to my mind is: how can you use this knowledge to improve your play?
I have received the advice “what would X do in your stead?” from a number of people in a number of circumstances, including here from rationalists. How can it be useful? If it is helpful, then it means that your cognitive algorithm can be optimized and you know a specific way to improve it. If you frequently find that wondering what someone else would do helps, then there is additional computation that could be saved by understanding how you know how that someone behaves (and doing it directly instead of imitating). So it’s purely a matter of knowing yourself: the advice I had received was no better than “think about yourself”.
The other question I wonder about is how that applies to artificial intelligence. I don’t know much about it. Is the “know yourself” part important in that case? How does an AI see its own source code? I guess the first step toward making this question meaningful would be to specify precisely how the emulator works. A naive approach (that doesn’t account for style and psychology) would be to use statistics for every position, and play randomly if an unprecedented position occurs. Then it becomes clear that there is no “default player” to emulate. There is no such thing as the emulator emulating itself, because the emulator is not a player. Bummer.
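For concreteness, here is a minimal sketch of that naive statistics-based emulator (all names are hypothetical; this is just my reading of the approach, not any actual engine). It tallies how often the emulated player chose each move in each observed position, samples from those frequencies, and falls back to a uniform random choice in unprecedented positions:

```python
import random
from collections import defaultdict

class NaiveEmulator:
    """Emulate a player by replaying their observed move frequencies."""

    def __init__(self):
        # position (any hashable key) -> {move: number of times chosen}
        self.stats = defaultdict(lambda: defaultdict(int))

    def observe(self, position, move):
        """Record one (position, move) pair from the emulated player's games."""
        self.stats[position][move] += 1

    def choose(self, position, legal_moves):
        """Sample a move with the emulated player's observed frequencies,
        falling back to uniform randomness in unseen positions."""
        counts = self.stats.get(position)
        if not counts:
            # Unprecedented position: no statistics, so play randomly.
            return random.choice(legal_moves)
        moves, weights = zip(*counts.items())
        return random.choices(moves, weights=weights, k=1)[0]
```

Note that the emulator has no move-selection policy of its own beyond the recorded statistics, which is the sense in which there is no “default player” for it to emulate: feed it no observations and all you get back is noise.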
Update : nowadays, top chess engines (AlphaZero, Stockfish 13) rely on neural networks which are basically black boxes. It doesn’t undermine your point though. NL’s objection is indeed invalid.
I don’t think the two closed answers to “Have you stopped beating your wife?” have such a well-defined meaning. Since this is natural language, and I understand a no as meaning “I’m still beating her.”, and I expect most people to interpret a no the same way as I do, it’s far from obvious why this interpretation is incorrect. (Ignore that the sentence is typically used as an example that has no good answer; use “Will you stop smoking soon?”, which is less standard, for the sake of the argument.)
Funnily enough, it seems the less meta your beliefs, the less distortedly you can transmit them into the chronophone.
If you believe in God not because society said so, or because you were taught it as an infant, or because it’s proper, but truly believe it for itself as an uncaused truth (not because it’s your most fundamental belief), then it might just come out exactly the same.
Ask deluded patients in psychiatric hospitals to talk into the chronophone and Archimedes might learn about Jesus and Napoleon.
This reminds me a lot of existentialcomics.
I think it’s noteworthy that absolute laws are more easily respected when what you actually want are laws with exceptions.
The correct law may be “don’t kill unless it’s right”, but just saying “don’t kill” will actually make people think twice before killing.
I think you mean lightspeed travel?
That doesn’t rule out infinite computation, though, since in an infinite universe we have a perpetually increasing amount of resources (as we explore further and further at lightspeed).
That’s interesting… Did you actually count sheep and rocks when writing this article? Did the character you give voice to count sheep and rocks?
Usually, when I make this kind of argument, what I really say is “If I counted 2 sheep and 3 sheep, I would find 5 sheep”, which means that it actually is what I expect, but that’s not evidence if my cognitive process is called into question.
Yet I don’t think it is necessary to actually count sheep and rocks when making this argument... But if I were discussing with someone who thought that 2 + 3 = 6 (or someone who thinks that either answer is meaningless), then it would be necessary to run the experiment, because we would expect different results.
He means that in the counterfactual world where he didn’t find this book, he would have become normal. In that case, he would have wished that his parents had not let him read it (which is precisely what would have happened).
I don’t understand how you are supposed to know that someone is not very well informed about the source of their beliefs, or that their reasoning is not what they claim it is.
I can see why someone would rationalize a wrong conclusion they have reached, but symmetrically, I would be rather upset (and rightly so) if someone accused me of rationalizing when I believe I’m not, because (I know from experience that) I am acutely aware of my own beliefs, most of the time more so than others.
Surely, some people are not as reliable as me. But I don’t think it would be prideful to judge that the average person is a priori less aware of their own beliefs than I am.
Therefore I should think that, absent evidence, people are as good as me at being unbiased.
Specifically, my main issue is with this statement:
How can I reliably know that someone’s reasoning is biased?
From your post, it is suggested (I have not read all the articles here, and I will be glad if someone can link to one that solves the problem) that you can decide that the laptop buyer is biased because his conclusion (buying the shiny laptop) is wrong.
Although it sounds like a handy method, it is not helpful when the core of the issue stems from a debate over whether or not the conclusion is correct.
Even if, assured by very sound reasoning, you know that you are right, and we know that you are right because you can explain it, I am not sure it would be enough to convince the laptop buyer to change his mind.
(I do think it is important to change the laptop buyer’s mind, even though it’s a different topic.)
So, this post is very useful if I am a potential laptop buyer; it could be useful if I meet a laptop buyer, but I don’t know how (except of course if he’s a laptop buyer who easily changes his mind).