And then you move on to meta-epiphanies…
Incorrect
It is absurd to divide people into good and bad. People are either charming or tedious.
-- Oscar Wilde
I think it is intended to mean “If you want to accomplish impractical things, work on practical subtasks.”
I don’t see what’s wrong with that.
Are you supposing that oxygenating a human’s blood without the use of lungs would result in the loss of their soul?
I think you will find that the only way to exclude such hypothetical possibilities is to define death as sufficient brain damage (although I suppose you could define it as the cessation of neural activity, if you don’t mind the possibility of dead people coming back to life; that would still result in a very large proportion of souls being damaged).
Taboo death
Are you intending to raise the traditional question of whether verificationism is itself verifiable?
I’m just trying to understand the statement Eliezer is making in this post.
What would you expect to experience differently if the axiom of choice were true or false?
I don’t think the axiom of choice is a first-order tautology, so you wouldn’t call it true or false. It could be inconsistent with certain popular theories, in which case, for each inconsistent theory, I would expect the negation of the conjunction of the axiom of choice and the theory to eventually appear in an enumeration of first-order validities.
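A sketch of the reasoning behind that expectation (my formalization, not part of the original exchange), writing AC for the axiom of choice and T for the theory in question, and using compactness plus the completeness theorem:

\[
T \cup \{\mathrm{AC}\} \vdash \bot
\;\Longrightarrow\;
\exists\, T_0 \subseteq T \ \text{finite such that} \ \models \neg\Bigl(\mathrm{AC} \wedge \bigwedge T_0\Bigr),
\]

and since the first-order validities are recursively enumerable (by the completeness theorem), \(\neg(\mathrm{AC} \wedge \bigwedge T_0)\) eventually appears in any such enumeration.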
I don’t understand what it would mean for logical positivism to be true/false. What should I expect to experience differently in each case?
“I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads—can I have $1000?”
Obviously, the only reflectively consistent answer in this case is “Yes—here’s the $1000”, because if you’re an agent who expects to encounter many problems like this in the future, you will self-modify to be the sort of agent who answers “Yes” to this sort of question—just like with Newcomb’s Problem or Parfit’s Hitchhiker.
- Timeless Decision Theory: Problems I Can’t Solve—Eliezer_Yudkowsky
I don’t understand why “Yes” is the right answer. It seems to me that an agent that self-modified to answer “Yes” to this sort of question in the future but said “No” this time would generate more utility than an agent that already implemented the policy of saying yes.
If I was going to insert an agent into the universe at the moment the question was posed after the coin flip had occurred, I would place one that answered “No” this time, but answered “Yes” in the future. (Assuming I have no information other than the information provided in the problem description.)
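To make the comparison concrete, here is a minimal sketch of the payoff on this particular branch, where the coin is already known to have come up heads (my own illustration; the function name is made up):

```python
# Counterfactual mugging, ex post: the coin has already come up heads,
# so the only money at stake on this branch is the $1000 being asked for.

def payoff_after_heads(answers_yes_now):
    """Payoff on the heads branch, given the agent's answer right now."""
    return -1000 if answers_yes_now else 0

# An agent that already implements the policy of always answering "Yes":
print(payoff_after_heads(True))   # -1000

# An agent that answers "No" this time and only then self-modifies
# to answer "Yes" in future encounters of this sort:
print(payoff_after_heads(False))  # 0
```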
The world is very, very much more complex than my mind.
But the smallest description of your mind might implicitly describe the universe. Anyway, Solomonoff induction is about predictions, it doesn’t concern itself with untestable statements like solipsism.
Solomonoff induction isn’t true/false. It can be useful/not useful but not true/false.
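As a toy illustration of that point (a sketch only; real Solomonoff induction is uncomputable, and the hypothesis class and description lengths here are invented for the example): the machinery outputs predictive probabilities for the next observation, not verdicts on untestable propositions.

```python
# Toy Solomonoff-style predictor over a tiny, hand-picked hypothesis class.
# Each hypothesis assigns a probability to the next bit being 1; its prior
# weight is 2**(-description_length), loosely mimicking the universal prior.

hypotheses = [
    # (description_length, probability that the next bit is 1)
    (1, 0.5),  # "fair coin"
    (3, 0.9),  # "mostly ones"
    (3, 0.1),  # "mostly zeros"
]

def predict_next_bit(observed_bits):
    """Posterior-weighted probability that the next observed bit is 1."""
    weighted = []
    for length, p_one in hypotheses:
        prior = 2.0 ** (-length)
        likelihood = 1.0
        for bit in observed_bits:
            likelihood *= p_one if bit == 1 else 1.0 - p_one
        weighted.append((prior * likelihood, p_one))
    total = sum(w for w, _ in weighted)
    return sum(w * p for w, p in weighted) / total

print(predict_next_bit([1, 1, 1, 1]))  # drifts toward the "mostly ones" hypothesis
```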
If that first agent (the one that answers no, then self-modifies to answer yes) had been in the situation where the coin had fallen tails, then it would not have got the million dollars; whereas an agent that can “retroactively precommit” to answer yes would have got the million dollars.
But we know that didn’t happen. Why do we care about utility we know we can’t obtain?
So having a “retroactively precommit” algorithm seems like a better choice than having an “answer what gets the biggest reward, and then self-modify for future cases” algorithm.
For what goal is this a better choice? Utility generation?
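For what it’s worth, here is the expected-value arithmetic I take the quoted claim to rest on (a sketch under the assumption that the predictor responds to whatever policy the agent actually has at prediction time):

```python
# Counterfactual mugging, ex ante: average over both coin outcomes,
# assuming the predictor rewards the tails branch only when the agent's
# policy is to pay up on the heads branch.

def expected_value(pays_on_heads):
    heads_payoff = -1000 if pays_on_heads else 0
    tails_payoff = 1000000 if pays_on_heads else 0
    return 0.5 * heads_payoff + 0.5 * tails_payoff

print(expected_value(True))   # 499500.0
print(expected_value(False))  # 0.0
```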
Nah, if I don’t waste time on the internet I very easily find other ways to waste time instead.
Although techniques for gathering information on students’ CTS can come in a variety of forms, the most objective, standardized measures, however, are multiple-choice tests (e.g. Watson-Glaser Critical Thinking Appraisal, Cornell Critical Thinking Test, California Critical Thinking Skills Test, and the College Assessment of Academic Proficiency tests).
I think the method of testing deserves emphasis. I’m uncertain how CTS multiple-choice test performance would impact instrumental rationality. Relevant evidence would be appreciated.
Why do modern-day liberals (for example) generally consider it okay to say “I think everyone should be happy” without offering an explanation, but not okay to say “I think I should be free to keep slaves”, regardless of the explanation offered?
“I think everyone should be happy” is an expression of a terminal value. Slavery is not typically a positive terminal value, so if you terminally value slavery you would have to say something like “I like the idea of slavery itself”; if you just say “I like slavery”, people will think you have some justification in terms of other terminal values (e.g. slavery → economics → happiness).
So, to say you like slavery implies that you have some justification for it as an instrumental value. Such justifications are generally considered to be incorrect for typical terminal values, and so the “liberals” could legitimately consider you to be factually incorrect.
Understanding the opposite sex is hard. Not as hard as understanding an AI, but it’s still attempting empathy across a brainware gap: trying to use your brain to understand something that is not like your brain.
As Eliezer so often asks, could you be more specific?
I’m having trouble thinking of specific examples of the opposite sex being harder to understand than my own and thus I don’t really understand EY’s statement.
Maybe that’s just my incompetence… but I am skeptical that any man fully understands women, or vice versa.
Does this include people who change gender?
So is it impossible to guess and be lucky? Usually in this context the word “magic” would imply impossibility.
How did you come to know this quantum state?
We guessed and got really lucky?
Let’s not create people who don’t want to exist in the first place! Infinite free utility!
But is it necessary to divide people into good and bad? What if you were only to apply goodness and badness to consequences and to your own actions?