Hello. My name is Avi. I am an 18 year old Orthodox Jewish American male.
I found out about LessWrong through HPMOR. I was very impressed by the quality and consistency of the writing.
I’m partly through the sequences (in the middle of the quantum one currently) and I have a lot to say on much of what I’ve seen, but I decided not to post too much until I’ve finished all the sequences. Most of what I’ve seen seems correct, and then there are posts here and there that I think have logical errors.
I was a little disappointed that most of my comments got voted down (I’m at −3 Karma now). Can anyone tell me why?
It looks like I downvoted three of your previous comments. Sorry about that (not really, it had to be done). Here is my reasoning, since you asked:
Your comment on AI avoiding destruction suggested that you neither read the previous discussion of the issue first, nor thought about it in any depth, just blurted out the first or second idea that you came up with.
Your retracted FTL question indicated that you didn’t bother searching online for one of the most common questions ever asked about entanglement. Not until later, anyway. So the downvote worked as intended there.
Your comment on the vague quasi-philosophical concept of superdeterminism purported to provide some sort of a proof of it being not Turing-computable, yet did not discuss why the T.M. would not halt; it only gave a poorly described thought experiment.
I am sorry you got a harsher-than-average welcome to this forum; I hope your comment quality improves after these few bumps to your ego.
I’m partly through the sequences (in the middle of the quantum one currently)
Good for you. Note that the Quantum sequence is one of the harder and more controversial ones; consider alternative sources, like Scott Aaronson’s semi-popular Quantum Computing Since Democritus, written by an expert in the field.
I have a lot to say on much of what I’ve seen, but I decided not to post too much until I’ve finished all the sequences.
That’s quite wise. If you write down what you want to say and then look back at it after you finish reading, you will likely find your original thoughts naive in retrospect. But a good exercise nonetheless.
If at some point you think that after a cursory reading of some post you found a hole in Eliezer’s reasoning that had not been discussed in the comments, you are probably mistaken. Consider this post of mine as a warning.
Also note that as a self-identifying “Orthodox Jewish”, you are bound to have compartmentalized a lot, and Eliezer’s and Yvain’s posts tend to vaporize these barriers quite spectacularly, so be warned, young Draco. Your original identity is not likely to remain intact, either.
Joining these forums can serve as something of a reality check to gifted young people; they may be used to most any half-baked thought still being sufficient to impress their environment. Rarely is polish needed, rarely are “proofs” thoroughly nitpicked. Getting actual feedback knocking them off of their pedestal (“the smartest one around”) can be ego-bruising, since we usually define ourselves through our perceived strengths. Ego-bruising, yet really, really important for actual personal and intellectual growth.
Blessed be the ones growing up around other minds who call them out on their mistakes, intellects against which they can grow their potential.
(I don’t mean this as applying specifically to Avi, but more as a general observation.)
Smart people growing up in environments where most people around them are less smart tend to develop a highly convenient habit of handwaving or bullshitting through issues. However when they find themselves among people who are at least as smart as they are and some are smarter, that habit often leads to problems and a need for adjustment :-)
Does that go both ways? That is, can I “nitpick” other people’s comments and posts? Also, if I find a typo in a post (in the sequences so far, I’ve spotted at least 2), is it acceptable to comment just pointing out the typo?
This is my own practice. My reasoning is that pointing out a typo is of no enduring interest to other readers, and renders the comments section less valuable to other readers; so if it’s convenient to contact the author more quietly, one should.
I don’t think I would have minded as much if there had been comments explaining why they thought I was wrong. It was the lack of response that bothered me.
(And what’s with this “You are trying to submit too fast”? I’m not allowed to post too many comments in a row?)
And what’s with this “You are trying to submit too fast”? I’m not allowed to post too many comments in a row?
Yes. If I remember correctly, LW also implements some form of slow-banning (the amount of time required between your comments depends on your total karma), but I may be recalling a feature request as an implemented feature.
From your post that you linked: “Instead I may ask politely whether my argument is a valid one, and if not, where the flaw lies.” I think that’s what I did on my FTL comment. (Incidentally, I had looked online and found several different versions of an experiment that said the same as I did in different ways, but the answers didn’t explain it well enough for me.)
I actually spent at least an hour reading through the comments on that AI post, and decided that the previous discussion wasn’t enough for my idea.
I’m not too good at anticipating which part of my arguments people will disagree with or not understand, so that may be why I don’t explain fully. I was hoping for a response from which I could see what’s missing and fill it in. It’s usually explained better in my head than in what I write down.
If at some point you think that after a cursory reading of some post you found a hole in Eliezer’s reasoning that had not been discussed in the comments, you are probably mistaken.
I read most of the posts offline in ebooks. That means I don’t see the comments unless I then go online and look. Is there a set of ebooks that includes comments? (For all I know, most of my ideas have already been said and refuted.)
I don’t know, but sounds like a good idea. Would be rather Talmudic in spirit. Unfortunately, most of the comments are fluff not worth reading, and separating the few percent that aren’t is not that easy. Maybe pick the threads with top 10 comments by karma or something.
And is he perfect?
Oh, far from it. I think that some of his statements are flat out wrong, but I only make this determination where either I have the relevant expertise or several experts disagree with him after considering his point in earnest.
Also note that replacing “Everett branches” with “possible worlds” works in 99% of the decision-theoretic arguments Eliezer makes, so there is no need to sweat MWI vs other interpretations. I would be more interested to hear your opinion on the Trolley problem, the Newcomb’s problem, and the Dust specks vs Torture issue. Assuming, of course, that you have studied them in some depth and gone over the various arguments on both sides, a process you must be intimately familiar with if you have attended a yeshiva.
I’ve seen Newcomb and Dust specks vs Torture but not Trolley (although I’ve seen that one before in other places). Which sequences do I need to finish for those?
If the trolley one is the same as the “standard” version, then it’s fairly trivial within the framework of Orthodox Judaism (if I’m allowed to bring that in), because of strict rules about death. I’ll elaborate further when I’m up to the question. The other two are a lot more complicated for me.
Yes, the standard Trolley problem, sorry. For more LW-specific problems, consider Parfit’s hitchhiker.
it’s fairly trivial within the framework of Orthodox Judaism (if I’m allowed to bring that in), because of strict rules about death.
Of course you are allowed to bring it in. And, unless you insist that it is the One True Way, as opposed to just one of many religious and moral frameworks, you probably will not be judged harshly. So, by all means!
So according to Orthodox Judaism, one is not allowed to (even indirectly) cause a death, even when the alternative is considered worse. The standard example is if you’re in a city and the “enemy” demands you hand over a specific person to be killed (unjustly), and says if you don’t do so, they will destroy the whole city and everyone will die (including that person). The rule in that situation is that you aren’t allowed to hand them over. Accepting that as an axiom, the trivial answer to the trolley situation is “don’t do anything”. Maintain the status quo. You cannot cause a death, even though it will save ten other people.
Parfit’s hitchhiker also appears trivial. It seems to assume I place no value on telling the truth. As I do, in fact, place a high utility on being truthful (based on Judaism), my saying “Yes” will translate into a truthful expression on my face and I will get the ride.
Note: I got the link from searching for “midvar sheker tirchak”, which is the Bible’s verse that says not to lie, roughly translated as “distance yourself from falsehood.”
On another topic, if I think that it is the “One True Way”, but don’t say that, is that OK?
So according to Orthodox Judaism, one is not allowed to (even indirectly) cause a death, even when the alternative is considered worse.
Hmm, I see. So, a clear and simple deontological rule. So, if you see your children being slaughtered in front of you, and all you need to do to save them and to kill the attacker is to press a button, you are not allowed to do it?
Also, does this mean that there cannot be Orthodox Jewish soldiers? If so, is this a recent development, given that ancient Hebrews fought and killed without a second thought? Or is there another reason why it was OK to kill your enemy in King David’s time, but not now?
Parfit’s hitchhiker also appears trivial. [...]
Right, ethical systems which value honesty absolutely have no difficulty with this. But
As I do, in fact, place a high utility on being truthful
is this a utilitarian calculation or an absolute injunction, like in the previous case, where you are not allowed to kill, no matter what? Or is there some threshold of (dis)utility above which lying is OK? If so, what price demanded by the selfish driver would surely cause a good Orthodox Jewish hitchhiker to attempt to lie?
On another topic, if I think that it is the “One True Way”, but don’t say that, is that OK?
First, note that I do not represent LW in any way and often misjudge the reaction of others. But my guess would be that simply stating this is not an issue, but explicitly using this belief in an argument may result in downvoting. This community is mildly hypocritical in this regard, as people who push their transhumanist views here as “the best/objective/universal morality” (I am exaggerating) can get away with it, but what can you do.
I may not have given enough detail. The prohibition against killing is specifically innocent people. There is a death penalty for many crimes, including murder (although not as far as EY seems to think. He once said that the Bible gives the death penalty for crossdressing. Evidence suggests otherwise. But that’s another topic.) So:
So, if you see your children being slaughtered in front of you, and all you need to do to save them and to kill the attacker is to press a button, you are not allowed to do it?
Assuming this attacker is the one killing or threatening to kill your kids, you are allowed to kill him (although you are supposed to try to injure them if killing isn’t necessary to stop them). You wouldn’t be allowed to kill someone else who is innocent, even to save many people.
Also, does this mean that there cannot be Orthodox Jewish soldiers? If so, is this a recent development, given that ancient Hebrews fought and killed without a second thought? Or is there another reason why it was OK to kill your enemy in King David’s time, but not now?
I don’t know if you’re familiar with the current debate in Israel over the draft? It’s not really related, though. Again, the “ancient Hebrews’” fights were usually either to reclaim parts of Israel which belonged to them from the gentile nations that were inhabiting them, or to defend themselves against attackers. In both scenarios, the “victims” weren’t innocent. For some more info, see here, here, and here.
(By the way, I just saw this while looking up that last link, which (mostly) confirms what I said about the Trolley problem.)
is this a utilitarian calculation or an absolute injunction, like in the previous case, where you are not allowed to kill, no matter what? Or is there some threshold of (dis)utility above which lying is OK? If so, what price demanded by the selfish driver would surely cause a good Orthodox Jewish hitchhiker to attempt to lie?
I realized after I posted that answer yesterday that I could conceive of a case that would work for me, in the spirit of the Parfit’s hitchhiker example. Namely, if I knew that when I got to town there would be someone whose life I could save, but only with $100. (Also assuming that I’ve got only $100 cash total.) That person’s life would take precedence over telling the truth, and I wouldn’t get the ride. There isn’t anything I could do in terms of prior obligation that would override the life concern of that person later.
The prohibition against killing is specifically innocent people.
OK, that makes more sense.
were usually either to reclaim parts of Israel which belonged to them from the gentile nations that were inhabiting them
Seems like a flimsy excuse to slaughter babies. Though I suppose the Amalekite case can be somewhat justified by an uncharacteristically utilitarian calculation on God’s part if Amalekites presented an x-risk to Hebrews. But that is not how the issue is usually presented.
From your link:
The Brisker Rav inferred that this indicates that they did not accept the seven mitzvos or terms for peace (both of which are necessary criteria according to the Kesef Mishne’s interpretation of the Rambam), otherwise they would not have been called “sinners”
...so they wiped out every woman and child? In any case, this inference seems like an extreme case of motivated cognition: “what we did was right, therefore they must have done something wrong even if we have no records of what they did”. Further reading of your links provides a fascinating insight into how far this motivated cognition can lead otherwise very smart people.
That it is indeed a case of motivated cognition can be trivially shown by transplanting the question into a modern setting and asking under which circumstances it would be ok to wipe out a whole people today. The answer is clearly “none” (I hope). Yet what (ostensibly) happened then has to be justified at any cost, or else one must admit that Saul and Samuel were little better than Hitler and Pol Pot, or that human ethics has evolved and what was acceptable back then is a high crime now.
What happens if instead of “causing” a death, you’re doing something with some probability of causing a death? For instance, handing someone over to the enemy results in a 99% probability of them being killed by the enemy. What if it’s only 10%? What if the enemy isn’t going to kill him, but you need to drive through a war zone to give him the prisoner, and driving through the war zone results in a 10% chance of the person being killed? What if the enemy says that he’s going to kill one person from his jail no matter what, and he puts the person in the same jail (so that instead of 1 person being killed out of 9 in the jail, 1 person is killed out of a group of 10 that includes the new person, thus increasing the chance this specific person is killed, but not increasing the number of people killed)?
I think that a 99% probability would be the same as 100% for this purpose. A “doubt of death” is considered as strong as a definite death in general. In the war zone example, I think (with a little less confidence) a 10% would work the same. You simply don’t take into account the potential benefits, when weighed against an action that you must do that will cause a death. On the other hand, the person being requested is allowed to sacrifice their own life (or a 10% chance of doing so) to save others. I’ll have to think about your last case a little more.
What if you just need to do ordinary driving, where there’s a fraction of a percent chance of death?
If you couldn’t do things which had any chance at all of killing innocent people, then you wouldn’t be able to drive, or do a lot of normal things. There’s probably some non-zero chance that the next time you turn on your computer it will trigger a circuit fault that causes the building to burn down an hour later.
If you couldn’t do things which had any chance at all of killing innocent people, then you wouldn’t be able to drive, or do a lot of normal things.
I think there’s a point where the number is low enough that it can become insignificant, but I’m pretty sure it’s less than 10%. There’s a concept of what is considered a “normal risk”.
What if you just need to do ordinary driving, where there’s a fraction of a percent chance of death?
Incidentally, since you mentioned it, there have been attempts by some Rabbis to ban driving for that reason. I’m unable to find a better source currently, but see this. Some (current ones) have also suggested that one shouldn’t drive for pleasure, but only where there’s an actual need.
I thought about your last case earlier, and decided it would also not be allowed. You need to consider each person separately. This person will have a 10% chance of being killed due to your action, which forbids it.
Part of the rationale for the rules (I think) is valuing each moment of life, so, for example, someone is considered a murderer if they kill someone who would die anyway in an hour. So causing the person to die earlier is worse than letting them die later with everyone else.
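The arithmetic behind that jail case can be sketched explicitly (this just formalizes the 9-versus-10 numbers from the example; nothing here comes from Jewish law itself):

```python
from fractions import Fraction

# The enemy kills exactly one prisoner at random. Before the handover
# there are 9 prisoners; handing over one more person makes it 10.
risk_before = Fraction(1, 9)   # each original prisoner's chance of dying
risk_after = Fraction(1, 10)   # each prisoner's chance after the handover

# Each original prisoner is slightly safer after the handover...
assert risk_after < risk_before
# ...but the handed-over person's risk jumps from 0 to 1/10, while the
# expected number of deaths is exactly 1 either way:
assert 9 * risk_before == 10 * risk_after == 1
```

So the total-deaths view sees no change, but the per-person view sees one specific person’s risk rise from 0 to 10%, which is what the per-person rule above forbids.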
Okay, here’s another question: Instead of being one person who drives and has a small chance of killing someone, you’re running a big company with a lot of drivers.
If two people drive, the chance of killing someone is about twice that of when one person drives. If a lot of people drive, the chance may add up to enough that it is over your threshold for insignificance. So is it immoral to run a company that uses a lot of drivers, because statistically the chance of death over many drivers is too large, even though each individual driver is okay?
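A quick sketch of why the chances “add up” almost linearly (the per-driver probability here is a made-up illustrative number, not a real accident statistic):

```python
# Chance that at least one of n independent drivers kills someone,
# given each driver's individual probability p of doing so.
def fleet_risk(p, n):
    return 1 - (1 - p) ** n

p = 1e-4  # hypothetical per-driver risk, purely for illustration

# Two drivers: almost exactly double one driver's risk (the overlap
# term is only p**2, negligible when p is small).
assert abs(fleet_risk(p, 2) - 2 * p) / (2 * p) < 1e-4

# A company with 10,000 such drivers: the aggregate chance is no
# longer negligible.
print(fleet_risk(p, 10_000))  # ~0.632
```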
What if instead of running a company you’re collecting taxes, and collecting taxes costs some people some “moments of life” (since they have to work longer to pay the taxes)? Most people would say that this is okay because the taxes benefit society, but if you aren’t permitted to balance the loss to the individual against the gain to someone else, you can’t use that reasoning.
Or what if you’re running a country and you need to decide whether to have laws that put people in jail? Because of inevitable human error, you’ll be putting more than one innocent person in jail. (Even if you don’t know which person is the innocent one.) If you’re not willing to say “It’s okay to make innocent people lose some ‘moments of life’ as long as it helps others more”, how can you justify having jails?
Some (current ones) have also suggested that one shouldn’t drive for pleasure, but only where there’s an actual need.
Huh. Presumably they would also frown upon any similarly risky activity, like climbing, swimming, or even living near the Gaza border, where one might get killed by a rocket.
Do you non-negligibly risk killing other people while swimming or climbing? It was said upthread that only killing innocent people counts, so killing yourself doesn’t count ^W^Wcounts ^Wdoesn’t count ^W^WScrew you Euathlos!
Do you non-negligibly risk killing other people while swimming or climbing? It was said upthread that only killing innocent people counts, so killing yourself doesn’t count ^W^Wcounts ^Wdoesn’t count ^W^WScrew you Euathlos!
I don’t think climbing or swimming are as dangerous as driving. There is an obligation for a father to teach his son to swim, mentioned in the Talmud.
I don’t think climbing or swimming are as dangerous as driving.
They’re a couple orders of magnitude riskier, actually. It’s tricky to make a direct comparison because the risk of driving is usually expressed over distance traveled, while sports is usually measured over number of sessions, but if we assume a typical day’s driving is about 50 miles (80 km), then we’re looking at 0.1 micromorts per session, as opposed to 17 for swimming or 3.1 for rock climbing.
(I’m not totally sure I trust that swimming estimate. The one for rock climbing aligns with my intuition, although there’s a lot of variance within the sport—bouldering is comparatively safe, while attempting the world’s highest peaks is absurdly risky by sports standards. I did know one guy who died in a shallow-water blackout and none who died climbing, for whatever that’s worth.)
[ETA: The estimate for swimming turns out to be bogus. See below.]
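For what it’s worth, the “couple orders of magnitude” claim is just the ratio of the per-session figures quoted above (0.1 micromorts for a 50-mile driving day, 17 for swimming before the retraction, 3.1 for rock climbing):

```python
import math

# Per-session micromort figures as quoted above (one micromort is a
# one-in-a-million chance of death).
driving = 0.1    # a typical 50-mile day of driving
swimming = 17    # the swimming estimate (later retracted)
climbing = 3.1   # rock climbing

# Orders of magnitude by which each sport exceeds a day's driving:
print(math.log10(swimming / driving))  # ~2.2
print(math.log10(climbing / driving))  # ~1.5
```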
The link you gave puts car deaths above swimming in the second diagram. It doesn’t say that the sporting numbers are measured by session. (Except for the BASE jumping, hang-gliding, scuba diving, canoeing, or rock climbing). My own research (the first three links from Googling “risk of car accident death”) puts car accidents consistently higher than swimming deaths.
I believe that’s because people drive much more than they swim: the risk communication scale uses, say, your second numbers, and the comparison the link author gave converted those from annual to per-act.
I was trying to show that the swimming estimate wasn’t per session. 1 in 56,587 is close enough to 1 in 83,534 that they’re probably measuring the same thing, namely yearly deaths, in which case (assuming most swimmers swim more than 20 times a year, which I think is reasonable), the per-session risk for driving is more than that for swimming.
You’re right, it’s not per session—but it isn’t per year either. On closer examination it looks like they’re calculating the risk of death over the ten years surveyed (unless the 31 deaths reported are annualized, which I don’t think they are), which is an absolutely terrible bottom line—but fine, it makes the annual risk of death 1 in 566,000. I also notice that the population estimate is identical to that for running and cycling, so it’s probably some sort of very crude estimate of Germans involved in sports. Ugh. At least the climbing stats look more reliable.
Incidentally, an annual risk of death of 1 in 566,000 and a hundred sessions per year (two a week with time off for good behavior) gives us a per-act risk of 0.017 micromorts, about equal to driving four miles in a car.
It’s definitely not the chance of death in a year of swimming. My link already gives us all the numbers we need to calculate that—the number of deaths overall, the number of years being examined, and an estimate of the population involved—and it comes out to a chance of 1 in 565,865. (1,754,182 people / (31 deaths / 10 years).)
This conveniently lets us infer how they’re probably calculating the risk—it looks like they’re assuming one hundred sessions per year (or about two a week; fair enough) and doing a per-session estimate based on that. I also notice that the population estimate is identical to that for cycling and running, so it’s probably some sort of estimate of the number of people in Germany involved in an arbitrary popular sport. Cruder than I’d like, but I was only shooting for an estimate good to within an order of magnitude.
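Putting this exchange’s numbers in one place (31 deaths over 10 years among an estimated 1,754,182 swimmers; the 100-sessions-per-year figure is the assumption discussed above, not part of the dataset):

```python
deaths, years, population = 31, 10, 1_754_182

ten_year_risk = deaths / population          # the quoted 1 in ~56,587
annual_risk = (deaths / years) / population  # about 1 in 566,000

assert round(1 / ten_year_risk) == 56587
assert abs(1 / annual_risk - 566_000) < 1_000

# Assuming ~100 sessions per year, the per-session risk in micromorts:
per_session_micromorts = annual_risk / 100 * 1e6
print(per_session_micromorts)  # ~0.0177, nowhere near the 17 quoted earlier
```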
assuming most swimmers swim more than 20 times a year, which I think is reasonable
Those numbers look like general population numbers (and since it looks like a lot of drowning deaths are due to ineptitude, it seems unclear to me whether the yearly risk for frequent swimmers is higher or lower than for non-frequent swimmers). Instead of the ‘all drowning’ number, 1 in 83,534, one should probably use the ‘in swimming pool’ number, which is 1 in 452,738.
I’m not sure I trust these estimates—or, rather, I don’t think I find them useful. The main problem is that the probabilities involved are all strongly conditional.
Consider swimming in a hotel swimming pool with a lifeguard watching and long-distance swimming alone in the ocean. Both are “swimming” but these two activities are radically different from the risk perspective. Similarly, you can do “climbing” in the climbing gym and you can do “climbing” in the Himalayas.
Sure, there’s a lot of variance involved. But there are more and less safe driving habits, too, and I’ll bet the variance is about as high. The point isn’t to demonstrate that one practice is under all conditions more or less safe than another, it’s to compare their average dangers as they’re actually practiced. And that clearly favors driving. It’s a profoundly bad idea to look at a set of statistics like this and say “oh, the ones that look inconvenient to me were probably doing something unsafe, they don’t count”.
On the other hand, these statistics don’t take health benefits from being physically active into account, which could potentially give ammunition for a much stronger critique—though given ike’s comments, I’m not sure it’d be a valid critique in the context of Jewish law.
But there are more and less safe driving habits, too, and I’ll bet the variance is about as high.
I bet less. Yes, you can practice defensive driving, but if you’re on the road in the traffic there is only so much you can do to avoid the idiot who is both in a hurry and needs to send that text message right now. You don’t have much control over external factors. But in swimming you often do—it’s pretty hard to drown if you are swimming in a pool with others watching.
it’s to compare their average dangers as they’re actually practiced
Yes. Therefore if you know you practice in way that’s different from the average, the probabilities change for you.
Yes, you can practice defensive driving, but if you’re on the road in the traffic there is only so much you can do to avoid the idiot who is both in a hurry and needs to send that text message right now.
I wasn’t thinking about defensive driving, I was thinking of driving thirty miles over the limit while not wearing a seat belt and texting your girlfriend about the awesome fight you just saw in the pub.
In pretty much any activity you can asymptotically drive your chance of surviving towards zero if you set your mind to it :-/
If we are talking about variance, the lower safety bound is often in approximately the same place, but the upper safety bound (as well as the center of the distribution) varies.
Yes, but if you’re going climbing you can choose to go the climbing gym and be absolutely safe from the avalanches in the Himalayas. However if you’re going driving on public roads, you cannot make yourself absolutely safe from drunk drivers.
You can make your climbing safer than you can make your driving.
That’s what makes climbing higher variance than driving.
You can make your climbing safer than you can make your driving.
You can make your climbing safer than summiting K2 would be, certainly. But enough safer to overcome those one and a half orders of magnitude of difference in the average? I haven’t actually seen any numbers on this, but that seems optimistic to me.
I’ll have to look at the methodology to believe that one and a half orders of magnitude, but regardless of that yes, you can make your climbing safer.
For example, you can do bouldering on technical routes which are all about agility and finger/arm strength. These routes rarely go more than 10 feet above thick mats—since you’re not belayed, you’re expected to just jump down when/if you run into trouble. Twist your ankle, sure, possible. Die—not very likely.
Some high-profile physicists disagree, others agree. Very few believe in some sort of objective collapse these days, but some still do. This strange situation is possible because MWI is not a well-formed physical model but more of an inspirational ontological outlook.
There’s a big problem with upvotes and downvotes on LessWrong, namely that the two important but skew dimensions for rating posts, agreement/disagreement and useful/disuseful, are collapsed into one feature. A downvote can feel like ‘Your comments are bad and you should feel bad (and leave and never post again)’, but this is often not the case.
Downvoting comments by a person asking why the parent comment was downvoted is generally poor form. In your case, it might be because you did it for a few comments in quick succession, which might have made Recent Comments (on the sidebar) less useable for someone so they downvoted the comments. To avoid this in future, maybe add a note in your comments when you post them noting that you are a new user trying to figure out how to tailor your comments to LessWrong and requesting that downvoters explain their downvotes to help you with this. On the other hand, it’s not impossible that someone was being Not Nice and mass-downvoting your comments, which wouldn’t be your fault.
Is “disuseful” a synonym for “unuseful” here or does it mean something else?
Downvoting comments by a person asking why the parent comment was downvoted is generally poor form. In your case, it might be because you did it for a few comments in quick succession,
I’ll add a specific way for newbies to ask why a comment was downvoted without clogging up the recent comments list: edit the original, downvoted comment, appending a little “Edit: not sure why this was downvoted, could someone explain?”-type note. (It’s obvious once you think of it, but easy not to realize independently.)
Is “disuseful” a synonym for “unuseful” here or does it mean something else?
It means something else. I use the dis- prefix to mean the active opposite of the thing to which it is prefixed. So ‘I diswant ice cream’ is a stronger statement than ‘I do not want ice cream’, though most people, whose language is less considered and precise, would (also) use the latter to cover the former. I guess some would say ‘I don’t particularly want ice cream’ to disambiguate somewhat.
I can see several possible connotations and policy suggestions underlying your comment, but not sure which one(s). Can you specify? Like, are you suggesting I update in this specific case or my general inclination to use nonstandard undefined terms or...?
So ‘I diswant ice cream’ is a stronger statement than ‘I do not want ice cream’, though most people, whose language is less considered and precise, would (also) use the latter to cover the former.
Minor point of information. In English “do not want” is not the negation of want. It actually means what you have defined “diswant” to mean. The “not” is privative here, not merely negative. People are not being less considered and precise when they use it this way. They are using the words precisely as everyone but you uses them—that is, precisely in accordance with what they mean.
You are welcome to invent a new language, just like English except that “not” always means simple negation and never means privation; but that language is not English. Neither, for that matter, would the corresponding modification of French be French. Comparing the morphology of translations of “want”, “do not want”, “have”, and “do not have” in a further selection of languages with Google Translate suggests that the range of languages for which this is the case is large.
Minor point of information. In English “do not want” is not the negation of want. It actually means what you have defined “diswant” to mean.
That is indeed often the case, though I notice I feel hesitant to agree that this is always the case and retain a feeling that people use ‘do not want’ in both ways, depending on the context. Regardless, when I said:
So ‘I diswant ice cream’ is a stronger statement than ‘I do not want ice cream’
I meant (hohoho) this as a statement about my usage, not the common usage of others.
The “not” is privative here, not merely negative.
Thanks for pointing me to a further point of reference (the term ‘privative’).
un- from West Germanic; e.g. unprecedented, unbelievable
in- from Latin; e.g. incapable, inarticulate.
a-, called alpha privative, from Ancient Greek ἀ-, ἀν-; e.g. apathetic, abiogenesis.
and it says:
A privative, named from Latin privare, “to deprive”, is a particle that negates or inverts the value of the stem of the word.
It seems like your usage of privative was excluding alpha privative, i.e. mere negation, but the examples and this summary sentence suggest ‘privative’ fails to distinguish (hohoho again) between mere negation and...the other thing. (Inversion? Opposition?) I’d be most amused if linguists had failed to coin a specific term for the subform of privation that is the ‘active opposite’ of something, and had only given a name (‘alpha privative’) to the subform of mere negation.
People are not being less considered
In the literal sense that I have considered these things more than they have, they are.
and precise when they use it this way. They are using the words precisely as everyone but you uses them—that is, precisely in accordance with what they mean.
Localised examples like this seem trivial, but when generalised to encouraging good habits of thought and communication and precision, it’s not just a localised decision about ‘un-’ vs. ‘dis-’, but a more general decision about how one approaches thought, language, and communication.
Also, if you just look at ‘do not want’/‘diswant’ in a vacuum, then yes, it seems like both my usage and the common usage specify what they mean. But the broader question of using negation and ‘not’ in a way that cues the mental process of Thinking Like Logic is inextricable from specific uses of ‘not’. I generally lean towards the position that the upper echelons of a skill like Thinking Like Logic are only achieved by those who cut through to the skill in every motion, and that less compartmentalisation leads to better adoption of the skill. And I feel like it probably intersects with other skills and habits of thought. So trivial cases like this are part of a bigger picture.
I don’t think I understand what you mean by privative. Is it something like the difference between “na’e” and “to’e” in Lojban? For reference: {mi na’e djica} would mean “I other-than want”, and {mi to’e djica} would mean “I opposite-of want”.
That’s pretty much it. Privative “not” would be “to’e”. The English “not” covers both senses according to context, but “not want” is always privative and some lengthier phrase has to be used to express absence of wanting. Or not so lengthy, e.g. “meh”.
Welcome; one of your comments was erroneous, as you said yourself (the one you retracted), another comment reads like a restatement of a popular comment predating yours by over a year (which you acknowledged yourself), and the third makes a pretty sweeping claim about superdeterminism not being Turing computable. Unfortunately, the proof you provide seems flawed on a couple of counts.* However, even if the proof did turn out to stand, people frown upon comments which do not give more explanations and context to sweeping statements that seemingly come out of thin air (even if they did turn out to be correct). FYI, I didn’t read (until now) or vote on any of your comments.
That makes 3 plausible downvote explanations for 3 comments, two of which you mentioned yourself. I’m surprised about your surprise.
* (Superdeterminism doesn’t require that part of the overall program can be perfectly predicted by a much smaller program in advance, nor that the outcome of the smaller program can then be used to change the overall outcome. At least two reasons: 1) Not being able to verify complete correspondence (except by fiat), given all hidden variables and their potentially unknowable context (unknowable from within the program, and the context may encompass the entire universe); 2) superdeterminism can in principle be saved simply by saying that the agent isn’t able to show a contradiction; i.o.w. in a superdeterminist universe, a perfect prediction-machine conditional on which a contradiction can be derived cannot exist, by definition of what “superdeterminism” means. Your thought experiment would be inapplicable in a superdeterminist universe, strange as it sounds. In that light, your proof reads similar to the one that shows that a Halting problem decider cannot exist. Alternatively, the agent would be unable to use the result to show a contradiction. While such an inability would indeed seem strange, from the universe’s point of view, every facet of that inability would have been predetermined anyways.)
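The halting-problem analogy in the footnote can be made concrete. This is an illustrative sketch only (the names `contrarian` and `naive_predict` are invented for this example, not taken from the thread): it shows the diagonalization shape shared by the halting proof and the anti-predictor argument, namely that any claimed perfect predictor is contradicted by an agent built to do the opposite of whatever is predicted of it.

```python
# Illustrative diagonalization sketch. Suppose a perfect predictor of an
# agent's boolean output existed; an agent built on top of that predictor
# can always falsify the prediction, so no such predictor exists.
def contrarian(predict):
    # predict(prog) is a hypothetical perfect predictor of prog's output
    forecast = predict(contrarian)
    return not forecast        # do the opposite of whatever was predicted

# Any candidate predictor is refuted by the agent it tries to predict:
def naive_predict(prog):
    return True                # claims contrarian will output True

print(contrarian(naive_predict))  # False: the prediction is contradicted
```

The same move powers the proof that no halting decider exists; the footnote's point is that a superdeterminist universe dodges it by denying that the predictor (or the agent's ability to use it) can exist in the first place.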
You’re basically saying that superdeterminism doesn’t require Turing computability, not that it is in principle Turing computable. Anyway, my point was that superdeterminism predicts that we will never find a practical way to compute the observed answer to a simple quantum superposition, because that would imply that we could change it.
And I guess I did make a “sweeping claim”, but I was still annoyed that I just got down-voted without a reply. If I had a “sweeping claim” to discuss, how should I have posted it?
The AIbox one I had thought of before seeing that comment, and it’s (in my opinion) stronger than the other one. (And the replies to it didn’t apply to mine fully). As an aside, would I in general be expected to read all 300+ comments on a post before commenting?
If I had a “sweeping claim” to discuss, how should I have posted it?
See “give more explanations and context”. If you’re concerned with “never find a practical way”, that’s an entirely different discussion than “isn’t Turing-computable” (in this community, if something has a strictly technical interpretation, that’s what is defaulted to). Give enough context so that a reader knows what you’re concerned with (practical applications, apparently; see, I wasn’t aware of that), instead of a somewhat theoretical-sounding claim (which you apparently meant in a more practical way) with a proof that turns out to be wrong, given that strictly theoretical claim. Also, I was only pointing out shortcomings of your proof; to do so no stance regarding Turing computability is required. However, there is no reason to assume that superdeterminism would require incomputability; on the contrary, as long as the true determinist laws of physics are computable, the universe would be as well, no?
As an aside, would I in general be expected to read all 300+ comments on a post before commenting?
Well, at least the top level comments with a couple of upvotes, so you don’t repeat one of the main responses? That boils it down to 35-ish comments.
Turing computability is a technical concept first. You don’t “need” to be strictly technical (obviously), but talking about Turing computability and giving a proof-by-contradiction kind of sends off the vibes of a technical/theoretical point, don’t you think? I was making an observation about how I interpreted your comment, and why, I wasn’t telling you what you need to write about.
Hello. My name is Avi. I am an 18 year old Orthodox Jewish American male.
I found out about LessWrong through HPMOR. I was very impressed by the quality and consistency of the writing.
I’m partly through the sequences (in middle of the quantum one currently) and I have a lot to say on much of what I’ve seen, but I decided not to post too much until I’ve finished all the sequences. Most of what I’ve seen seems correct, and then there’s posts here and there that I think have logical errors.
I was a little disappointed that most of my comments got voted down (I’m at −3 Karma now). Can anyone tell me why?
Welcome, Avi!
It looks like I downvoted three of your previous comments. Sorry about that (not really, it had to be done). Here is my reasoning, since you asked:
Your comment on AI avoiding destruction suggested that you neither read the previous discussion of the issue first, nor thought about it in any depth, just blurted out the first or second idea that you came up with.
Your retracted FTL question indicated that you didn’t bother searching online for one of the most common questions ever asked about entanglement. Not until later, anyway. So the downvote worked as intended there.
Your comment on the vague quasi-philosophical concept of superdeterminism purported to provide some sort of a proof of it being not Turing-computable, yet did not discuss why the T.M. would not halt, only gave some poorly described thought experiment.
I am sorry you got a harsher-than-average welcome to this forum; I hope your comment quality improves after these few bumps to your ego.
Good for you. Note that the Quantum sequence is one of the harder and more controversial ones; consider alternative sources, like Scott Aaronson’s semi-popular Quantum Computing since Democritus, written by an expert in the field.
That’s quite wise. If you write down what you want to say and then look back at it after you finish reading, you will likely find your original thoughts naive in retrospect. But a good exercise nonetheless.
If at some point you think that after a cursory reading of some post you found a hole in Eliezer’s reasoning that had not been discussed in the comments, you are probably mistaken. Consider this post of mine as a warning.
Also note that as a self-identifying “Orthodox Jewish”, you are bound to have compartmentalized a lot, and Eliezer’s and Yvain’s posts tend to vaporize these barriers quite spectacularly, so be warned, young Draco. Your original identity is not likely to remain intact, either.
With these caveats, have fun! :)
Joining these forums can serve as something of a reality check to gifted young people; they may be used to most any half-baked thought still being sufficient to impress their environment. Rarely is polish needed, rarely are “proofs” thoroughly nitpicked. Getting actual feedback knocking them off of their pedestal (“the smartest one around”) can be ego-bruising, since we usually define ourselves through our perceived strengths. Ego-bruising, yet really, really important for actual personal and intellectual growth.
Blessed be the ones growing up around other minds who call them out on their mistakes, intellects against which they can grow their potential.
(I don’t mean this as applying specifically to Avi, but more as a general observation.)
Yep. I’ll put it even more directly.
Smart people growing up in environments where most people around them are less smart tend to develop a highly convenient habit of handwaving or bullshitting through issues. However when they find themselves among people who are at least as smart as they are and some are smarter, that habit often leads to problems and a need for adjustment :-)
Does that go both ways? That is, can I “nitpick” other people’s comments and posts? Also, if I find a typo in a post (in the sequences so far, I’ve spotted at least 2), is it acceptable to comment just pointing out the typo?
Why not PM them first?
This is my own practice. My reasoning is that pointing out a typo is of no enduring interest to other readers, and renders the comments section less valuable to other readers; so if it’s convenient to contact the author more quietly, one should.
Yes. I recommend using ctrl-f to ensure no one else has already pointed out that typo.
Of course you can. Whether it’s wise to do so is an entirely different question :-D
Yep, been there, have a bruised ego to show for it.
I don’t think I would have minded as much if there had been comments explaining why they thought I was wrong. It was the lack of response that bothered me.
(And what’s with this “You are trying to submit too fast”? I’m not allowed to post too many comments in a row?)
Yes. If I remember correctly, LW also implements some form of slow-banning (the amount of time required between your comments depends on your total karma), but I may be recalling a feature request as an implemented feature.
I thought it was caused by having a lot of recent posts downvoted.
From your post that you linked: “Instead I may ask politely whether my argument is a valid one, and if not, where the flaw lies.” I think that’s what I did on my FTL comment. (Incidentally, I had looked online and found several different versions of an experiment that said the same as I did in different ways, but the answers didn’t explain well enough for me).
I actually spent at least an hour reading through the comments on that AI post, and decided that the previous discussion wasn’t enough for my idea.
I’m not too good at anticipating which parts of my arguments people will disagree with or not understand, so that may be why I don’t explain fully. I was hoping for a response from which I could see what’s missing and fill it in. It’s usually better explained in my head than in what I write down.
I read most of the posts offline in ebooks. That means I don’t see the comments unless I then go online and look. Is there a set of ebooks that includes comments? (For all I know, most of my ideas have already been said and refuted.)
And is he perfect?
I don’t know, but sounds like a good idea. Would be rather Talmudic in spirit. Unfortunately, most of the comments are fluff not worth reading, and separating the few percent that aren’t is not that easy. Maybe pick the threads with top 10 comments by karma or something.
Oh, far from it. I think that some of his statements are flat out wrong, but I only make this determination where either I have the relevant expertise or several experts disagree with him after considering his point in earnest.
Don’t many experts disagree with him on his MWI view on quantum mechanics?
Also note that replacing “Everett branches” with “possible worlds” works in 99% of the decision-theoretic arguments Eliezer makes, so there is no need to sweat MWI vs other interpretations. I would be more interested to hear your opinion on the Trolley problem, the Newcomb’s problem, and the Dust specks vs Torture issue. Assuming, of course, that you have studied it in some depth and went over the various arguments on both sides, the process you must be intimately familiar with if you have attended a yeshiva.
I’ve seen Newcomb and Dust specks vs Torture but not Trolley (although I’ve seen that one before in other places). Which sequences do I need to finish for those?
If the trolley one is the same as the “standard” version, then it’s fairly trivial within the framework of Orthodox Judaism (if I’m allowed to bring that in), because of strict rules about death. I’ll elaborate further when I’m up to the question. The other two are a lot more complicated for me.
Yes, the standard Trolley problem, sorry. For more LW-specific problems, consider Parfit’s hitchhiker.
Of course you are allowed to bring it in. And, unless you insist that it is the One True Way, as opposed to just one of many religious and moral frameworks, you probably will not be judged harshly. So, by all means!
So according to Orthodox Judaism, one is not allowed to (even indirectly) cause a death, even when the alternative is considered worse. The standard example is if you’re in a city and the “enemy” demands you hand over a specific person to be killed (unjustly), and says if you don’t do so, they will destroy the whole city and everyone will die (including that person). The rule in that situation is that you aren’t allowed to hand them over. Accepting that as an axiom, the trivial answer to the trolley situation is “don’t do anything”. Maintain the status quo. You cannot cause a death, even though it will save ten other people.
Parfit’s hitchhiker also appears trivial. It seems to assume I place no value on telling the truth. As I do, in fact, place a high utility on being truthful (based on Judaism) , my saying “Yes” will translate into a truthful expression on my face and I will get the ride.
Note: I got the link from searching for “midvar sheker tirchak”, which is the Bible’s verse that says not to lie, roughly translated as “distance yourself from falsehood.”
On another topic, if I think that it is the “One True Way”, but don’t say that, is that OK?
Thank you, I appreciate your replies.
Hmm, I see. So, a clear and simple deontological rule. So, if you see your children being slaughtered in front of you, and all you need to do to save them and to kill the attacker is to press a button, you are not allowed to do it?
Also, does this mean that there cannot be Orthodox Jewish soldiers? If so, is this a recent development, given that ancient Hebrews fought and killed without a second thought? Or is there another reason why it was OK to kill your enemy in King David’s time, but not now?
Right, ethical systems which value honesty absolutely have no difficulty with this. But
is this a utilitarian calculation or an absolute injunction, like in the previous case, where you are not allowed to kill, no matter what? Or is there some threshold of (dis)utility above which lying is OK? If so, what price demanded by the selfish driver would surely cause a good Orthodox Jewish hitchhiker to attempt to lie?
First, note that I do not represent LW in any way and often misjudge the reaction of others. But my guess would be that simply stating this is not an issue, but explicitly using this belief in an argument may result in downvoting. This community is mildly hypocritical in this regard, as people who push their transhumanist views here as “the best/objective/universal morality” (I am exaggerating) can get away with it, but what can you do.
I may not have given enough detail. The prohibition against killing is specifically innocent people. There is a death penalty for many crimes, including murder (although not as far as EY seems to think. He once said that the Bible gives the death penalty for crossdressing. Evidence suggests otherwise. But that’s another topic.) So:
Assuming this attacker is the one killing or threatening to kill your kids, you are allowed to kill him (although you are supposed to try to injure him if killing isn’t necessary to stop him). You wouldn’t be allowed to kill someone else who is innocent, even to save many people.
I don’t know if you’re familiar with the current debate in Israel over the draft? It’s not really related, though. Again, the “ancient Hebrews” fights were usually either to reclaim parts of Israel which belonged to them from the gentile nations that were inhabiting them, or to defend themselves against attackers. In both scenarios, the “victims” weren’t innocent. For some more info, see here, here, and here.
(By the way, I just saw this while looking up that last link, which (mostly) confirms what I said about the Trolley problem.)
I realized after I posted that answer yesterday that I could conceive of a case that would work for me, in the spirit of the Parfit’s hitchhiker example. Namely, if I knew that when I got to town there would be someone whose life I could save, but only with $100. (Also assuming that I’ve got only $100 cash total.) That person’s life would take precedence over telling the truth, and I wouldn’t get the ride. There isn’t anything I could do in terms of prior obligation that would override the later life-or-death concern of that person.
OK, that makes more sense.
Seems like a flimsy excuse to slaughter babies. Though I suppose the Amalekite case can be somewhat justified by an uncharacteristically utilitarian calculation on God’s part if Amalekites presented an x-risk to Hebrews. But that is not how the issue is usually presented.
From your link:
...so they wiped out every woman and child? In any case, this inference seems like an extreme case of motivated cognition: “what we did was right, therefore they must have done something wrong even if we have no records of what they did”. Further reading of your links provides a fascinating insight into how far this motivated cognition can lead otherwise very smart people.
That it is indeed a case of motivated cognition can be trivially shown by transplanting the question into a modern setting and asking under which circumstances it would be ok to wipe out a whole people today. The answer is clearly “none” (I hope). Yet what (ostensibly) happened then has to be justified at any cost, or admit that Saul and Samuel were little better than Hitler and Pol Pot. Or that human ethics has evolved and what was acceptable back then is a high crime now.
Eh, I take back the unnecessarily emotionally charged reference to the iconic supervillains.
What happens if instead of “causing” a death, you’re doing something with some probability of causing a death? For instance, handing someone over to the enemy results in a 99% probability of them being killed by the enemy. What if it’s only 10%? What if the enemy isn’t going to kill him, but you need to drive through a war zone to give him the prisoner, and driving through the war zone results in a 10% chance of the person being killed? What if the enemy says that he’s going to kill one person from his jail no matter what, and he puts the person in the same jail (so that instead of 1 person being killed out of 9 in the jail, 1 person is killed out of a group of 10 that includes the new person, thus increasing the chance this specific person is killed, but not increasing the number of people killed)?
I think that a 99% probability would be the same as 100% for this purpose. A “doubt of death” is considered as strong as a definite death in general. In the war zone example, I think (with a little less confidence) a 10% would work the same. You simply don’t take into account the potential benefits, when weighed against an action that you must do that will cause a death. On the other hand, the person being requested is allowed to sacrifice their own life (or a 10% chance of doing so) to save others. I’ll have to think about your last case a little more.
What if you just need to do ordinary driving, where there’s a fraction of a percent chance of death?
If you couldn’t do things which had any chance at all of killing innocent people, then you wouldn’t be able to drive, or do a lot of normal things. There’s probably some non-zero chance that the next time you turn on your computer it will trigger a circuit fault that causes the building to burn down an hour later.
I think there’s a point where the number is low enough that it can become insignificant, but I’m pretty sure it’s less than 10%. There’s a concept of what is considered a “normal risk”.
Incidentally, since you mentioned it, there have been attempts by some Rabbis to ban driving for that reason. I’m unable to find a better source currently, but see: this. Some (current ones) have also suggested that one shouldn’t drive for pleasure, but only where there’s an actual need.
I thought about your last case earlier, and decided it would also not be allowed. You need to consider each person separately. This person will have a 10% chance of being killed due to your action, which forbids it.
Part of the rationale for the rules (I think) is valuing each moment of life; so, for example, someone is considered a murderer if they kill someone who would die anyway in an hour. Causing the person to die earlier is worse than letting them die later with everyone else.
Okay, here’s another question: Instead of being one person who drives and has a small chance of killing someone, you’re running a big company with a lot of drivers.
If two people drive, the chance of killing someone is about twice that of when one person drives. If a lot of people drive, the chances may add up to enough that they are over your threshold for insignificance. So is it immoral to run a company that uses a lot of drivers, because statistically the chance of death over many drivers is too large, even though each individual driver is okay?
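The aggregation step here is just the standard rule for independent risks. A quick sketch (the per-driver probability is an invented placeholder, not a figure from the thread):

```python
# For independent events, the chance that at least one of n drivers causes
# a death is 1 - (1 - p)**n, which is approximately n*p while n*p is small,
# so the risk really does scale with fleet size.
p = 1e-4  # assumed annual chance that a single driver kills someone
risks = {n: 1 - (1 - p) ** n for n in (1, 2, 100, 5000)}
print(risks[2] / risks[1])   # very close to 2: two drivers, twice the risk
print(risks[5000])           # no longer negligible for a large fleet
```

At 5,000 drivers the aggregate chance is tens of percent even though each driver individually sits far below any plausible “insignificance” threshold, which is exactly the tension the question raises.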
What if instead of running a company you’re collecting taxes, and collecting taxes costs some people some “moments of life” (since they have to work longer to pay the taxes)? Most people would say that this is okay because the taxes benefit society, but if you aren’t permitted to balance the loss to the individual against the gain to someone else, you can’t use that reasoning.
Or what if you’re running a country and you need to decide whether to have laws that put people in jail? Because of inevitable human error, you’ll be putting more than one innocent person in jail. (Even if you don’t know which person is the innocent one.) If you’re not willing to say “It’s okay to make innocent people lose some ‘moments of life’ as long as it helps others more”, how can you justify having jails?
Huh. Presumably they would also frown upon any similarly risky activity, like climbing, swimming or even living near Gaza borders, where one might get killed by a rocket.
Do you non-negligibly risk killing other people while swimming or climbing? It was said upthread that only killing innocent people counts, so killing yourself doesn’t count ^W^Wcounts ^Wdoesn’t count ^W^WScrew you Euathlos!
Do you non-negligibly risk killing other people while swimming or climbing? It was said upthread that only killing innocent people counts, so killing yourself doesn’t count ^W^Wcounts ^Wdoesn’t count ^W^WScrew you Euathlos!
See this about Gaza.
I don’t think climbing or swimming are as dangerous as driving. There is an obligation for a father to teach their son to swim, mentioned in the Talmud.
They’re a couple orders of magnitude riskier, actually. It’s tricky to make a direct comparison because the risk of driving is usually expressed over distance traveled, while sports is usually measured over number of sessions, but if we assume a typical day’s driving is about 50 miles (80 km), then we’re looking at 0.1 micromorts per session, as opposed to 17 for swimming or 3.1 for rock climbing.
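The unit comparison above can be spelled out. The per-session figures and the ~0.1-micromort day of driving are the numbers quoted in the comment; nothing else is assumed:

```python
# Comparing the quoted per-activity death risks in micromorts
# (1 micromort = a one-in-a-million chance of death).
MICROMORT = 1e-6
day_of_driving = 0.1 * MICROMORT   # ~50 miles of driving, as quoted
swim_session = 17 * MICROMORT      # quoted per-session swimming figure
climb_session = 3.1 * MICROMORT    # quoted per-session climbing figure
print(swim_session / day_of_driving)    # 170x a day's driving
print(climb_session / day_of_driving)   # 31x a day's driving
```

So, taking the quoted figures at face value, “a couple orders of magnitude” is right for swimming and a bit over one order of magnitude for climbing.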
(I’m not totally sure I trust that swimming estimate. The one for rock climbing aligns with my intuition, although there’s a lot of variance within the sport—bouldering is comparatively safe, while attempting the world’s highest peaks is absurdly risky by sports standards. I did know one guy who died in a shallow-water blackout and none who died climbing, for whatever that’s worth.)
[ETA: The estimate for swimming turns out to be bogus. See below.]
The link you gave puts car deaths above swimming in the second diagram. It doesn’t say that the sporting numbers are measured by session. (Except for the BASE jumping, hang-gliding, scuba diving, canoeing, or rock climbing). My own research (the first three links from Googling “risk of car accident death”) puts car accidents consistently higher than swimming deaths.
http://www.livescience.com/3780-odds-dying.html: 1-in-100 lifetime car death , 1-in-8,942 swimming death.
http://www.riskcomm.com/visualaids/riskscale/datasources.php: 1 in 17,625 one year car occupant death rate (based on 2002 data), 1 in 83,534 one year drowning death overall, 1 in 452,738 one year drowning death in swimming pool
http://well.blogs.nytimes.com/2007/10/31/how-scared-should-we-be/?_php=true&_type=blogs&_r=0: 1 in 84 lifetime car deaths, 1 in 1,134 swimming deaths.
I believe that’s because people drive much more than they swim, and the risk communication scale uses, say, your second numbers, and the comparison the link author gave converted that from annual to per-act.
I was trying to show that the swimming estimate wasn’t per session. 1 in 56,587 is close enough to 1 in 83,534 that they’re probably measuring the same thing, namely yearly deaths, in which case (assuming most swimmers swim more than 20 times a year, which I think is reasonable), the per-session risk for driving is more than that for swimming.
You’re right, it’s not per session—but it isn’t per year either. On closer examination it looks like they’re calculating the risk of death over the ten years surveyed (unless the 31 deaths reported are annualized, which I don’t think they are), which is an absolutely terrible bottom line—but fine, it makes the annual risk of death 1 in 566,000. I also notice that the population estimate is identical to that for running and cycling, so it’s probably some sort of very crude estimate of Germans involved in sports. Ugh. At least the climbing stats look more reliable.
Incidentally, an annual risk of death of 1 in 566,000 and a hundred sessions per year (two a week with time off for good behavior) gives us a per-act risk of 0.017 micromorts, about equal to driving four miles in a car.
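The per-act figure can be re-derived from the annual risk and session count stated in the comment:

```python
# Converting an annual death risk to a per-session risk in micromorts,
# using the comment's figures (1-in-566,000 per year, 100 sessions/year).
annual_risk = 1 / 566_000
sessions_per_year = 100
per_session = annual_risk / sessions_per_year
micromorts = per_session / 1e-6
print(micromorts)   # ~0.0177, consistent with the 0.017 figure above
```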
It’s definitely not the chance of death in a year of swimming. My link already gives us all the numbers we need to calculate that—the number of deaths overall, the number of years being examined, and an estimate of the population involved—and it comes out to a chance of 1 in 5,658. (1,754,182 people / (31 deaths / 10 years).)
This conveniently lets us infer how they’re probably calculating the risk—it looks like they’re assuming one hundred sessions per year (or about two a week; fair enough) and doing a per-session estimate based on that. I also notice that the population estimate is identical to that for cycling and running, so it’s probably some sort of estimate of the number of people in Germany involved in an arbitrary popular sport. Cruder than I’d like, but I was only shooting for an estimate good to within an order of magnitude.
Those numbers look like general population numbers (and since it looks like a lot of drowning deaths are due to ineptitude, it seems unclear to me whether the yearly risk for frequent swimmers is higher or lower than for non-frequent swimmers). Instead of ‘all drowning,’ the 1 in 83,534 number, one should probably use the ‘in swimming pool’ number, which is 1 in 452,738.
I’m not sure I trust these estimates—or, rather, I don’t think I find them useful. The main problem is that the probabilities involved are all strongly conditional.
Consider swimming in a hotel swimming pool with a lifeguard watching and long-distance swimming alone in the ocean. Both are “swimming” but these two activities are radically different from the risk perspective. Similarly, you can do “climbing” in the climbing gym and you can do “climbing” in the Himalayas.
Sure, there’s a lot of variance involved. But there are more and less safe driving habits, too, and I’ll bet the variance is about as high. The point isn’t to demonstrate that one practice is under all conditions more or less safe than another, it’s to compare their average dangers as they’re actually practiced. And that clearly favors driving. It’s a profoundly bad idea to look at a set of statistics like this and say “oh, the ones that look inconvenient to me were probably doing something unsafe, they don’t count”.
On the other hand, these statistics don’t take health benefits from being physically active into account, which could potentially give ammunition for a much stronger critique—though given ike’s comments, I’m not sure it’d be a valid critique in the context of Jewish law.
I bet less. Yes, you can practice defensive driving, but if you’re on the road in the traffic there is only so much you can do to avoid the idiot who is both in a hurry and needs to send that text message right now. You don’t have much control over external factors. But in swimming you often do—it’s pretty hard to drown if you are swimming in a pool with others watching.
Yes. Therefore if you know you practice in way that’s different from the average, the probabilities change for you.
I wasn’t thinking about defensive driving, I was thinking of driving thirty miles over the limit while not wearing a seat belt and texting your girlfriend about the awesome fight you just saw in the pub.
In pretty much any activity you can asymptotically drive your chance of surviving towards zero if you set your mind to it :-/
If we are talking about variance, the lower safety bound is often in approximately the same place, but the upper safety bound (as well as the center of the distribution) varies.
I’ll bet there are more idiot drunks on the road than there are Himalayan mountaineers, even proportionally.
Yes, but if you’re going climbing you can choose to go the climbing gym and be absolutely safe from the avalanches in the Himalayas. However if you’re going driving on public roads, you cannot make yourself absolutely safe from drunk drivers.
You can make your climbing safer than you can make your driving.
That’s what makes climbing higher variance than driving.
You can make your climbing safer than summiting K2 would be, certainly. But enough safer to overcome those one and a half orders of magnitude of difference in the average? I haven’t actually seen any numbers on this, but that seems optimistic to me.
I’ll have to look at the methodology to believe that one and a half orders of magnitude, but regardless of that yes, you can make your climbing safer.
For example, you can do bouldering on technical routes which are all about agility and finger/arm strength. These routes rarely go more than 10 feet above thick mats—since you’re not belayed, you’re expected to just jump down when/if you run into trouble. Twist your ankle, sure, possible. Die—not very likely.
Yes, I mentioned bouldering in my original post.
I don’t think there’s a Lesswrong-specific take on the trolley problem, so I’m assuming shminux is just referring to the usual one.
Some high-profile physicists disagree, others agree. Very few believe in some sort of objective collapse these days, but some still do. This strange situation is possible because MWI is not a well-formed physical model but more of an inspirational ontological outlook.
Hi Avi, welcome to LessWrong!
There’s a big problem with upvotes and downvotes on LessWrong, namely that the two important but skew dimensions of agreement/disagreement and useful/disuseful for rating posts are collapsed into one feature. A downvote can feel like ‘Your comments are bad and you should feel bad (and leave and never post again)’, but this is often not the case.
Downvoting comments by a person asking why the parent comment was downvoted is generally poor form. In your case, it might be because you did it for a few comments in quick succession, which might have made Recent Comments (on the sidebar) less useable for someone so they downvoted the comments. To avoid this in future, maybe add a note in your comments when you post them noting that you are a new user trying to figure out how to tailor your comments to LessWrong and requesting that downvoters explain their downvotes to help you with this. On the other hand, it’s not impossible that someone was being Not Nice and mass-downvoting your comments, which wouldn’t be your fault.
Is “disuseful” a synonym for “unuseful” here or does it mean something else?
I’ll add a specific way for newbies to ask why a comment was downvoted without clogging up the recent comments list: edit the original, downvoted comment, appending a little “Edit: not sure why this was downvoted, could someone explain?”-type note. (It’s obvious once you think of it, but easy not to realize independently.)
It means something else. I use the dis- prefix to mean the active opposite of the thing to which it is prefixed. So ‘I diswant ice cream’ is a stronger statement than ‘I do not want ice cream’, though most people, whose language is less considered and precise, would (also) use the latter to cover the former. I guess some would say ‘I don’t particularly want ice cream’ to disambiguate somewhat.
Thanks for the suggestion.
Is that different enough from “harmful” to merit a less standard word?
I can see several possible connotations and policy suggestions underlying your comment, but not sure which one(s). Can you specify? Like, are you suggesting I update in this specific case or my general inclination to use nonstandard undefined terms or...?
I was thinking about this specific case, but now that I think about it it does generalize.
Minor point of information. In English “do not want” is not the negation of want. It actually means what you have defined “diswant” to mean. The “not” is privative here, not merely negative. People are not being less considered and precise when they use it this way. They are using the words precisely as everyone but you uses them—that is, precisely in accordance with what they mean.
You are welcome to invent a new language, just like English except that “not” always means simple negation and never means privation; but that language is not English. Neither, for that matter, would the corresponding modification of French be French. Comparing the morphology of translations of “want”, “do not want”, “have”, and “do not have” in a further selection of languages with Google Translate suggests that the range of languages for which this is the case is large.
That is indeed often the case, though I notice I feel hesitant to agree that this is always the case and retain a feeling that people use ‘do not want’ in both ways, depending on the context. Regardless, when I said:
I meant (hohoho) this as a statement about my usage, not the common usage of others.
Thanks for pointing me to a further point of reference (the term ‘privative’).
Edit: I looked at the Wikipedia article for privative
It gives some examples:
and it says:
It seems like your usage of privative was excluding alpha privative, i.e. mere negation, but the examples and this summary sentence suggest ‘privative’ fails to distinguish (hohoho again) between mere negation and...the other thing. (Inversion? Opposition?) I’d be most amused if linguists had failed to coin a specific term for the subform of privation that is the ‘active opposite’ of something, and had only given a name (‘alpha privative’) to the subform of mere negation.
In the literal sense that I have considered these things more than they have, they are.
Localised examples like this seem trivial, but when generalised to encouraging good habits of thought and communication and precision, it’s not just a localised decision about ‘un-’ vs. ‘dis-’, but a more general decision about how one approaches thought, language, and communication.
Also, if you just look at ‘do not want’/‘diswant’ in a vacuum, then yes, it seems like both my usage and the common usage specify what they mean. But the broader question of using negation and ‘not’ in a way that cues the mental process of Thinking Like Logic is inextricable from specific uses of ‘not’. I generally lean towards the position that the upper echelons of a skill like Thinking Like Logic are only achieved by those who cut through to the skill in every motion, and that less compartmentalisation leads to better adoption of the skill. And I feel like it probably intersects with other skills and habits of thought. So trivial cases like this are part of a bigger picture.
I don’t think I understand what you mean by privative. Is it something like the difference between “na’e” and “to’e” in Lojban? For reference: {mi na’e djica} would mean “I other-than want”, and {mi to’e djica} would mean “I opposite-of want”.
That’s pretty much it. Privative “not” would be “to’e”. The English “not” covers both senses according to context, but “not want” is always privative and some lengthier phrase has to be used to express absence of wanting. Or not so lengthy, e.g. “meh”.
Oh, cool. I’ve found the distinction to be a very useful one to make.
Well [-l + come]; one of your comments was erroneous, as you said yourself (the one you retracted), another comment reads like a restatement of a popular comment predating yours by over a year (which you acknowledged yourself), and the third makes a pretty sweeping claim about superdeterminism not being Turing computable. Unfortunately, the proof you provide seems flawed on a couple of counts.* However, even if the proof did turn out to stand, people frown upon comments which do not give more explanations and context to sweeping statements that seemingly come out of thin air (even if they did turn out to be correct). FYI, I didn’t read (until now) or vote on any of your comments.
That makes 3 plausible downvote explanations for 3 comments, two of which you mentioned yourself. I’m surprised about your surprise.
* (Superdeterminism doesn’t require that part of the overall program can be perfectly predicted by a much smaller program in advance, nor that the outcome of the smaller program can then be used to change the overall outcome. At least two reasons: 1) Not being able to verify complete correspondence (except by fiat), given all hidden variables and their potentially unknowable context (unknowable from within the program, and the context may encompass the entire universe); 2) superdeterminism can in principle be saved simply by saying that the agent isn’t able to show a contradiction; in other words, in a superdeterminist universe, a perfect prediction-machine conditional on which a contradiction can be derived cannot exist, by definition of what “superdeterminism” means. Your thought experiment would be inapplicable in a superdeterminist universe, strange as it sounds. In that light, your proof reads similar to the one that shows that a Halting problem decider cannot exist. Alternatively, the agent would be unable to use the result to show a contradiction. While such an inability would indeed seem strange, from the universe’s point of view, every facet of that inability would have been predetermined anyway.)
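For readers unfamiliar with the halting-problem proof this footnote alludes to, here is a minimal sketch of the classic diagonalization argument. All the names (`make_paradox`, `oracle_is_wrong_about_paradox`) are my own illustration, not anything from the comments above: any claimed halting oracle can be defeated by a program built to do the opposite of whatever the oracle predicts about it.

```python
def make_paradox(halts):
    """Build a program that inverts the oracle's prediction about itself.

    `halts` is a (hypothetical) function that takes a program and claims
    to return True iff that program halts.
    """
    def paradox():
        if halts(paradox):
            while True:      # oracle said "halts", so loop forever
                pass
        # oracle said "loops", so halt immediately
    return paradox


def oracle_is_wrong_about_paradox(halts):
    """Show that `halts` mispredicts the paradox program built against it."""
    paradox = make_paradox(halts)
    if halts(paradox):
        # Oracle predicts halting, but by construction paradox would
        # loop forever. (We don't run it; that is the point.)
        return True
    else:
        paradox()            # oracle predicts looping, yet this returns
        return True


# Any fixed answer the oracle gives is refuted:
assert oracle_is_wrong_about_paradox(lambda prog: True)
assert oracle_is_wrong_about_paradox(lambda prog: False)
```

The analogy to the footnote: just as no `halts` oracle can survive a program constructed to contradict it, a superdeterminist universe can simply deny the existence of any perfect prediction-machine whose output an agent could use to derive a contradiction.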
You’re basically saying that superdeterminism doesn’t require Turing computability, not that it is in principle Turing computable. Anyway, my point was that superdeterminism predicts that we will never find a practical way to compute the observed answer to a simple quantum superposition, because that would imply that we could change it.
And I guess I did make a “sweeping claim”, but I was still annoyed that I just got down-voted without a reply. If I had a “sweeping claim” to discuss, how should I have posted it?
The AIbox one I had thought of before seeing that comment, and it’s (in my opinion) stronger than the other one. (And the replies to it didn’t apply to mine fully). As an aside, would I in general be expected to read all 300+ comments on a post before commenting?
See “give more explanations and context”. If you’re concerned with “never find a practical way”, that’s an entirely different discussion than “isn’t Turing-computable” (in this community, if something has a strictly technical interpretation, that’s what is defaulted to). Give enough context so that a reader knows what you’re concerned with (practical applications, apparently; see, I wasn’t aware of that), instead of a somewhat theoretical-sounding claim (which you apparently meant in a more practical way) with a proof that turns out to be wrong given that strictly theoretical reading. Also, I was only pointing out shortcomings of your proof; no stance regarding Turing computability is required to do so. However, there is no reason to assume that superdeterminism would require incomputability; on the contrary, as long as the true deterministic laws of physics are computable, the universe would be as well, no?
Well, at least the top level comments with a couple of upvotes, so you don’t repeat one of the main responses? That boils it down to 35-ish comments.
Oh. I need to be “strictly technical”? I’ll go back to the one about Turing computability and edit it to reflect a “strictly technical” comment.
Turing computability is a technical concept first. You don’t “need” to be strictly technical (obviously), but talking about Turing computability and giving a proof by contradiction kind of sends off the vibes of a technical/theoretical point, don’t you think? I was making an observation about how I interpreted your comment, and why; I wasn’t telling you what you need to write about.