I have a few thoughts on startups. Let me know what you guys think:
1) People have commented that they are bad businessmen. But "business" isn't mysterious; business is like chess. People tend to (at least implicitly) treat it as a barrier rather than an obstacle that can be overcome. (Paul Graham said he made this mistake of "being afraid of business.") If you're a rational person, you could read a few books, apply your rationality skills, and probably become a very competent businessman.
2) Startups are hard, but doable. Very doable. There are tons of things people want, and to generalize, success really just comes down to making something that people want. I admit I don't really know how to support this belief rigorously; it comes down to my sense that there are a great many opportunities to start multi-million-dollar businesses.
3) Startups are by far the best way to make a lot of money and contribute to the world (assuming the value of each additional dollar stays roughly constant). Way better than things like jobs in finance. This follows from what I said in 2): they're doable, and you have a high chance of success if you're rational. And even if you apply conventional estimates of the chance of success (rather than my much higher proposed estimates), startups still come out well ahead of jobs in finance. See this. People choose safe jobs because they are risk-averse, but if you are mostly interested in doing good for the world, you shouldn't be so risk-averse: you'd choose startups because they have the highest expected value.
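To make the risk-aversion point concrete, here is a toy expected-value comparison in Python. Every number below (success probability, payoffs) is made up purely for illustration; this is a sketch of the reasoning, not a claim about actual startup or finance outcomes.

```python
# Toy expected-value comparison of a startup vs. a safe finance job.
# All numbers are assumptions chosen for illustration only.

p_success = 0.10             # assumed probability the startup succeeds
startup_payoff = 20_000_000  # assumed payoff (dollars) if it succeeds
startup_ev = p_success * startup_payoff  # failure is assumed to pay ~0

finance_ev = 2_000_000       # assumed total career earnings in finance

print(f"startup EV: ${startup_ev:,.0f}")  # startup EV: $2,000,000
print(f"finance EV: ${finance_ev:,.0f}")  # finance EV: $2,000,000

# A risk-averse person prefers the sure $2M. Someone maximizing expected
# good done is indifferent here, and prefers the startup as soon as the
# assumed probability or payoff ticks up at all.
```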
You point out a potential flaw in the reasoning behind concluding 'scope insensitivity'. But then you seem to go further and claim that 'scope insensitivity is incorrect', and I don't think you supported that claim enough. Remember, reversed stupidity is not intelligence.
Free will is basically asking about the cause of our actions and thoughts. The cause of our neurons firing. The cause of how the atoms and quarks in our brains move around.
To know that X causes the atoms in our brain to move a certain way, we'd have to know that every time X happens, the atoms in our brain move in that specific way. The problem is that we would have to see into the future: we'd have to see what results from X in every future instance of X. We don't have that information. All we have are our past and current experiences, which we use to induce what will happen in the future. (This is nothing new; it's just the problem of induction.)
So, it seems that we can't determine causes. Perhaps if our understanding of physics someday lets us deconstruct time and see the future, we'll be able to; but right now we can't, so it seems that we can't determine causes.
If we can’t determine causes, we can’t know whether or not we have free will.
Let's consider two possibilities:
1) our "consciousness" causes the atoms in our brain to move in certain ways
2) "physics" causes the atoms to move the way they do
Regardless of whether (1) or (2) is correct, it wouldn't lead to any different experiences for us. We'd still act and think the way we do, and we'd still psychologically feel like we're in control of our thoughts and actions. I think this is what Eliezer is saying: that the question of free will is pointless because, regardless of the answer, it won't lead us to different experiences.
My objection: just because we don't know the true cause doesn't mean we never can. Knowing the true cause would (at the very least) be interesting. For that reason, I don't think the question of free will is "meaningless". I know it doesn't seem like we could ever know the true cause, but it's tough to predict what we might know, say, a million years from now.
Objection to myself: I’m not sure exactly what I mean by consciousness. If “consciousness” doesn’t “mean something”, then the question is basically a matter of physics and what laws of physics govern the movement of the atoms in our brains, which isn’t as interesting, at least to me.
Unfortunately, I'm not sure what it would mean for "consciousness" to be the cause of the atoms in our brains moving. As far as our experiences and ability to measure things go, it probably doesn't "mean" anything. I guess that's the point Eliezer is making.
I’m still notably confused, but I’m definitely getting closer. I would very much appreciate it if anyone could help me understand why it doesn’t mean anything for “consciousness” to cause the atoms in our brains to move.
1) I think it's important to keep in mind that consciousness might not be determined at the neuronal level. It might be determined at the atomic or subatomic level. It's encouraging that there are cases of people who lose consciousness for various reasons (http://www.alcor.org/sciencefaq.htm) and regain it, but we don't know how cryogenic freezing or death changes the equation.
2) We don't know what happens when we die, or how to value it. For example, I personally rarely remember my dreams. If I didn't know better, I'd think that I just go unconscious every night and wake up conscious. But it turns out that every night I experience dreams, which means that, in a way, I have multiple conscious experiences every night; I just don't remember them. How do we know that dying isn't like this? On a very basic and fundamental level, we don't understand what happens to make us conscious. For this reason, dying seems to me like a big question mark in the equation for calculating expected utility.
My opinion is that death-as-question-mark gets an expected utility of zero (it could be good, bad, nothingness... I don't know), and that cryonics gets a slight-to-moderate positive expected utility. But I still feel very uneasy about all of this. I feel like I'm making a decision about something of tremendous importance (eternity) with frighteningly little information about it (death, the efficacy of cryonics). Other people seem relatively comfortable just choosing cryonics without thinking twice, and I don't know why.
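Here is a minimal sketch of the calculation I have in mind, in Python. The probability, utilities, and cost below are placeholders I made up, not estimates I'd defend; they are exactly the unknowns that make me uneasy.

```python
# Sketch of the cryonics decision under my stated assumptions.
# All numbers are placeholders chosen for illustration.

p_revival = 0.05    # assumed chance cryonics works and you are revived
u_revival = 1000.0  # assumed utility of revival (arbitrary units)
u_death = 0.0       # my stance: death-as-question-mark is valued at zero
cost = 10.0         # assumed cost of cryonics, in the same arbitrary units

ev_sign_up = p_revival * u_revival + (1 - p_revival) * u_death - cost
ev_decline = u_death

print(ev_sign_up, ev_decline)  # 40.0 0.0
# Under these made-up inputs, signing up comes out slightly/moderately
# positive, matching my intuition; change the inputs and the sign can flip.
```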
I've never lucid dreamt, but it seems utterly amazing. If you could do almost anything you could imagine, wouldn't you prefer that to real life? Is anyone here, or does anyone know of someone, who can lucid dream and would rather spend their time lucid dreaming than being awake?
The analysis and recommendations seemed very “high-level mappy” to me.
The question is really about how we control our behavior.
“Habits” are really just decisions to do certain things. Maybe they involve less thought, but they certainly aren’t unconscious actions.
Saying to just keep the cue and reward but change the routine seems very "high-level mappy" to me. Surely you could come up with a better algorithm for changing undesirable behavior than that: one that involves identifying the root causes of the behavior and addressing them. For example, I think the root cause of a lot of bad habits is poor rationality. People would be able to resist bad habits more if they had better decision-making and analysis skills.
I think that the real determinant of conversation quality is the people participating (not the techniques). They have to want to analyze rationally, and they have to know how to. Most people don’t want to, and don’t know how to. I think the techniques you talked about might help a little bit though.
With this hypothesis of mine in mind, I think a better question is, "How do you find the right people to have conversations with?"
Something about being up late at night leads to good conversations, in my experience. So does revealing something personal about yourself (it pressures others to reciprocate).
I find that the signaling aspect maybe accounts for 20-30% of the phenomenon.
I'd estimate (off the top of my head) that about 20-30% of the time, my good conversations happen late at night because the good conversation itself keeps us up late. But more often, we stay up late simply because we don't want to go to sleep, and then it's 3 in the morning, and something about it being 3 in the morning triggers good conversation.
It would be interesting to test what it actually is about "it being late" that triggers the good conversation. For example, you could test whether it's tiredness, the time of day, or how many hours it's been since you woke up.
I suspect that this is very simple. As with the tree-in-the-forest problem that Eliezer wrote about, if you ask about concrete variations of this question, the right choice is obvious.
One question is what to do when the boxes are in front of you.
If it is the case that you know with 100% certainty that the contents of box B will not change, then you should two-box.
If it is the case that Omega could change the contents of the box after he presents them to you, then you should one-box.
If it is the case that your present decision impacts the past, then you should one-box, because by one-boxing you'd change your past mind-state, which would change Omega's decision. However, I don't think that physics works like this. I'm assuming that what you thought in the past is fixed, that those thoughts are what Omega based his decision on, and that what you think and decide after Omega has made his decision doesn't influence your past mind-states, and thus doesn't influence the decision Omega made. But this is really a question about physics, not decision theory. Once you ask the question with the condition that physics works a certain way, the decision-theory part is easy (see the sketch below).
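To make that concrete, here is a minimal payoff sketch in Python. I'm assuming the standard amounts ($1,000 visible in box A, $1,000,000 in box B iff Omega predicted one-boxing); the point is only how the answer flips with the assumption about reality.

```python
# Newcomb payoffs under the two assumptions about reality.
# Amounts are the ones usually quoted; I'm assuming them here.
SMALL, BIG = 1_000, 1_000_000

def payoff(choice: str, prediction: str) -> int:
    """Your winnings. Box B contains BIG iff Omega predicted 'one'."""
    box_b = BIG if prediction == "one" else 0
    return box_b if choice == "one" else box_b + SMALL

# Possibility 1: your present choice reaches back and fixes the prediction.
print(payoff("one", "one"), payoff("two", "two"))  # 1000000 1000 -> one-box

# Possibility 2: the prediction is already fixed, whatever it is.
for pred in ("one", "two"):
    print(payoff("two", pred) - payoff("one", pred))  # 1000, 1000 -> two-box
```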
Another question is what to do before Omega makes his decision.
It seems plausible that Omega could read your mind. So then, you should try to make Omega think that you will one-box. If you're capable of doing this and it works, then great! If not, you didn't lose anything by trying, and you gave yourself a chance of possibly succeeding.
I think my third bullet point addresses your comment. You seem to be saying that by choosing to two-box, you're influencing the past in such a way that Omega will have predicted it (and left box B empty). I'm saying that there are two possibilities:
1) your choice impacts the past
2) your choice doesn't impact the past.
If 1) is true, then you should one-box. If 2) is true, then you should two-box. I honestly don't have a strong opinion about whether 1) or 2) is the way the world works. But I think that which of them is true is a question of physics, rather than a question of decision theory.
Consider this sequence of events: you had your prior mind-state, then Omega made his choice, and then you make your choice. You seem to be saying that your choice is already made up from your prior mind-state, and there is no decision to be made after Omega presents you with the situation. This is a possibility.
I’m saying that another possibility is that you do have a choice at that point. And if you have a choice, there are two subsequent options: this choice you make will impact the past, or it won’t. If it does, then you should one-box. But if it doesn’t impact the past (and if you indeed can be making a choice at this point), then you should two-box.
I don’t know whether you’ll have any way of knowing if your choice was made up already. I wish I knew more physics and had a better opinion on the way reality works, but with my understanding, I can’t say.
My approach is to say, "If reality works this way, then you should do this. If it works that way, then you should do that."
Regarding your question, I’m not sure that it matters. If ‘yes’, then you don’t have a decision to make. If ‘no’, then I think it depends on the stuff I talked about in above comments.
I very well might be wrong about how reality works. I’m just saying that if it happens to work in the way I describe, the decision would be obvious. And furthermore, if you specify the way in which reality works, the decision in this situation is always obvious. The debate seems to be more about the way reality works.
Regarding the Hannibal Lecter situation you propose, I don't understand it well enough to say, but I think I address all the variations of this question above.
I think I agree with your description of how choice works. Regarding the decision you should make, I can’t think of anything to say that I didn’t say before. If the question specifies how reality/physics works, the decision is obvious.
If your choice is not made up from your prior mind state, then Omega would not be able to predict your actions from it.
Not necessarily. We don’t know how Omega makes his predictions.
But regardless, I think my fundamental point still stands: the debate is over physics/reality, not decision theory. If the question specified how physics/reality works, the decision theory part would be easy.
I keep saying that if you specify the physics/reality, the decision to make is obvious. People keep replying by basically saying, "but physics/reality works this way, so this is the answer". And then I keep replying, "Maybe you're right. I don't know how it works. All I know is that the argument is over physics/reality."
Do you agree with this? If not, where do you disagree?
I think your main point is true: selective application of rationality could be dangerous. But the question then is: how often is it dangerous? In what way should we apply rationality? Should we not apply rationality at all because it could be dangerous? I think the article would have been much better if these questions had been brought up and addressed.
I get the sense that applying rationality is usually more good than bad. Although I don’t really know enough about radical religions to say if it’s true for them too.
I suspect that making people more rational would be one of the most efficient ways of saving the world. Things like AI might be better, but I really think it’s pretty high up there as far as saving the world goes.
My reason for thinking that it'd make the world a much better place is that making people more rational would lead to lots of better decisions and actions, which, when aggregated, amount to a huge change. I've never really been able to articulate this well, but I think that this article illustrates part of what I mean: if people were more specific, they'd make better business decisions. Can't you imagine how all of these better business decisions would have a huge impact on the world? And can't you imagine how all of the other types of better decisions would too?
I have 2 ideas about how to make people more rational. I'd love to get some feedback on them!
1) Popularize a medium for rational discussion. I'm not sure how I'd popularize it, but I suspect that it's doable.
2) Create a better system of education and make rationality a major part of the curriculum.
2a) I think that I could make this better system, prove that it's better by letting people use it for free and showing results, and that this will draw interest to it.
2b) I don't think there would be too much opposition to teaching rationality as part of the curriculum.
2c) I think learning about rationality can be fun. I think the material on Less Wrong could be made more accessible to the general population too.
(I recognize that 2) is a huge claim, and I suspect that I didn’t think everything out properly, but I still think it has lots of promise.)
The best idea I have for teaching rationality (in the general sense) is to:
1) explain the concepts to people (i.e. explain the idea of consequentialist thinking and the rationale behind it).
2) have people write essays about thoughts/ideas they have (they should be excited to write these essays), and then peer-review the essays, pointing out errors in rationality, like not supporting claims with evidence. Then have an instructor go over the essays and the evaluations to make sure they did a good job.
Also, I think what you're doing right now (crowdsourcing) is probably the best thing for idea generation.