Plasmon
This reminds me of the Abramelin operation, a ritual that supposedly summons guardian angels.
He estimated the sun was no more than 20 million years old, and presumably did not expect it to last for more than a few tens of millions of years more.
Fun fact: the Vulcan greeting originated in ancient Egypt.
This is good advice. Strive for real understanding rather than rote memorisation.
(Is this too obvious to be worth mentioning? Probably. Unfortunately, I have seen several doctoral students fail, and in hindsight this appears to have been part of the cause of that failure.)
as it (MWI) requires uncountably many worlds to be created in any finite interval of time
How is that any more problematic than doing physics with real or complex numbers in the first place?
See this comment
can you conceive of a reason (not necessarily the officially stated one) that the actual basilisk discussion ought to be suppressed, even at the cost of the damage done to LW credibility (such as it is) by an offsite discussion of such suppression?
The basilisk is harmless. Eliezer knows this. The Streisand effect was the intended consequence of the censorship. The hope is that people who become aware of the basilisk will increase their priors for the existence of real information hazards, and will in the future be less likely to read anything marked as such. It’s all a clever memetic inoculation program!
Disclaimer: I don’t actually believe this.
you have strong enough belief that the submitter was wrong about her own mind, and her programmer boyfriend was right
No, no, certainly not; I made it clear that I was arguing in general and could not comment on the specific example given (come on, I say this twice in the post you quote).
that you’ll compare her to frauds and crackpots whose ideas have vanishingly small probability. Where do you get that probability mass from?
Let me repeat the argument she made:
I am the one who has spent millions of minutes in this mind, able to directly experience what’s going on inside of it.
This sort of argument, “I have observed this phenomenon for far longer than you have, therefore I am vastly more likely to be right about it than you are”, is very vulnerable to confirmation bias (among other biases): the speaker will more easily remember events that fit her hypothesis than events that don’t. It is a stereotypical crackpot argument; I gave two examples, but I could (alas) give many more. It is virtually never a good argument. Someone who is actually sitting on top of mountains of evidence for a hypothesis need not resort to it; they can just show the evidence!
How often have I seen crackpots use this argument? Dozens of times. How often have I seen non-crackpots use it? I recall only one occasion, two if you include the OP. How often have I seen people who have actually carefully collected lots of evidence use it? Never. (Is my memory on this subject susceptible to confirmation bias? Ha! Yes, of course it is.) Is it any wonder, then, that my prior for “people who use this argument are crackpots” is somewhat large?
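To make the kind of estimate I mean explicit, here is a toy calculation using nothing but the remembered counts above; the numbers are anecdotes filtered through an admittedly biased memory, not data:

```python
# Crude frequency estimate of P(crackpot | uses the "I have observed this far
# longer than you" argument), based only on my remembered counts above.
# "Dozens" is taken as roughly two dozen; treat all of this as illustrative.
crackpot_uses = 24   # times I recall a crackpot using this argument
other_uses = 2       # "one occasion, two if you include the OP"

# Laplace smoothing (add one to each count) so neither outcome gets probability 1
p_crackpot = (crackpot_uses + 1) / (crackpot_uses + other_uses + 2)
print(f"P(crackpot | uses this argument) ~ {p_crackpot:.2f}")  # ~ 0.89
```

Of course, the counts come from the very memory whose bias I just admitted to, so the result deserves wide error bars; the point is only that even a rough tally like this justifies a fairly large prior.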
How is this relevant to the example given? We cannot expect everyone to continuously gather relatively unbiased evidence on their own behaviour, can we? Indeed we cannot. But then we should also not be extremely confident in the models of ourselves that we have constructed. If someone challenges these models, what should we do?
Most likely, the person challenging our models does not actually have good evidence and is just attempting to make some status move. This is the most common and least interesting possibility; ignoring him, breaking up with him, or telling him to stop may all be good courses of action (yes, I disagree with the OP less than you may think).
If evidence is actually put forward (which it wasn’t in the OP example, but which I hope it would be on Less Wrong), you can provide evidence of your own: “but in the past, when X happened, I did Y, which is compatible with my self-model but not with your model of me”. Ideally, the arguers should update after the exchange of evidence. (“I observed myself for millions of minutes” does not count as evidence exchange, since the other person already knew that.)
I was not arguing about the specific example given in the OP, where he (the person with whom submitter B was arguing) was apparently unable or unwilling to provide evidence for his assertion that she was mistaken about herself. You, and submitter B, may be entirely correct about the person she was arguing with.
Perhaps I am overestimating the sanity of this place, but I do hope (and expect) that if similar arguments occur on this forum, evidence will (or at least should) be put forward. In this place dedicated, among other things, to awareness of the many failure modes of the human brain, and to how you (yes, you. And I, too) may be totally wrong about so many things, the hypothesis “I may be mistaken about myself; I should listen to the other person’s evidence on this matter” is not a hypothesis that should be ignored. (Note that submitter B does not consider this hypothesis in her example; she may indeed have been correct not to consider it, but as stated, I am arguing in general here.)
I am the one who has spent millions of minutes in this mind, able to directly experience what’s going on inside of it. They have spent, at this point, maybe a few hundred minutes observing it from the outside, yet they act like they’re experts.
The homeopath who has treated thousands of patients should listen to the high-school chemistry student who has evidence that homeopathy doesn’t work. The physics crackpot who has worked on their theory of everything for decades should listen to the student of physics who points out that it fails to predict the results of an experiment. And the human, who has spent all their life as a human in a human body, should listen to the student of psychology, who may know things about them that they themselves are not yet aware of.
Indeed I agree that it is possible, and probably desirable, to phrase the argument less bluntly than I did. However, it seems to me that submitter B is arguing against making such arguments at all, not arguing to make them in a more polite fashion.
Furthermore, here of all places, “If you (think you) possess evidence that I do not, show it and update me!” should be a background assumption, not something that needs to be put as a disclaimer on any potentially controversial statement.
The human brain is fallible. That includes assertions made about internal experiences—such assertions may be wrong. If person A has reason X to believe that the result of person B’s introspection is wrong, which is the more respectful course of action?
person A: person B, your account of your internal experiences may be wrong because of X.
person A: meh, person B can’t handle the truth; I’ll just shut up and say nothing.
Men and women have the same average IQ
According to Wikipedia, this is true by construction:
Most IQ tests are constructed so that there are no overall score differences between females and males
It seems “Men and women have the same average IQ” is a statement that gives information about how IQ tests are constructed, not about (the absence of) actual intelligence differences between men and women.
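To make “true by construction” concrete, here is a toy sketch of what such norming amounts to; the item pool, scores, and selection rule are all invented for illustration and are not how any real test is built:

```python
import random

random.seed(0)

# Hypothetical item pool: each candidate test item has an average score for
# men and one for women. All numbers are invented purely for illustration.
item_pool = [(random.gauss(0.6, 0.1), random.gauss(0.6, 0.1)) for _ in range(200)]

# "Construction" step: keep only the 50 items with the smallest male/female
# score gap, a crude stand-in for the item balancing that test designers do.
selected = sorted(item_pool, key=lambda item: abs(item[0] - item[1]))[:50]

mean_men = sum(m for m, _ in selected) / len(selected)
mean_women = sum(w for _, w in selected) / len(selected)
print(f"men: {mean_men:.3f}  women: {mean_women:.3f}")
# The two means come out nearly equal because the items were chosen to make
# them equal; the equality tells you about the test, not about the sexes.
```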
Mentioning it would pattern-match to “science fiction” rather than “serious research” in the intended audience’s minds. It would cost them credibility.
They are very careful not to mention certain things in their video: nothing involving the computational nature of human identity / “consciousness”. Nothing we would call “whole brain emulation”.
I have little doubt that what they choose to mention, and what they choose not to mention, in their video is driven more by signalling considerations than by what they consider technologically (in)feasible. Can they do what they claim: create a model of the human brain detailed enough to yield non-trivial information about brain diseases and the computational processes used in the brain, yet coarse enough to have no moral implications? Of course, I do not know the answer to that question.
Kurzweil, who is widely agreed to be rather optimistic in his predictions, predicts
No, she is not. Except in the sense that she is perhaps one small step in something that does fitness calculations, but looking at her brain you wouldn’t find fitness-maximization calculations being done, just execution of old adaptations.
These old adaptations encode rough heuristics of limited applicability which approximate fitness calculations (provided the environment has been fairly constant for long enough). They actually are there inside the brain. What else do you think the cuteness response, as you used it here, is?
Religious people believe they believe in God. And many of them are correct on this.
Are they? So very few of them actually take their beliefs seriously. So very few of them actually behave as if their expected utility calculations are dominated by threats of eternal damnation and promises of eternal salvation.
It can also be called division of labour. My comparative advantage may lie in bashing Wiggin heads, or in crafting arguments for why bashing Wiggin heads is good, or in organizing the logistics so that our heads don’t get bashed by Wiggins and we can bash more of theirs.
Yes. The problem is that this is exactly the rationalisation that someone would use if it weren’t true. Then again, it might be true.
We need to distinguish:
(type A) Someone wants to rise in power within a certain group, advocates violence against a hated out-group, and remains largely protected from legal consequences himself because he doesn’t actually commit any violent acts. When asked, he claims his non-action is due to division-of-labour reasoning.
(type B) Someone actually thinks violence against a certain out-group is a good thing (in the greater-good sense), and doesn’t commit any violent acts himself based on division-of-labour reasoning. When asked about his motivations, he is not (easily?) distinguishable from a (type A).
What’s the difference? The difference is that (type A) should be discouraged from encouraging violence. If a (type A) successfully encourages a group of followers to commit violence against a hated out-group, people get hurt. This was not the (type A)’s intention; it’s just an unfortunate side effect that he doesn’t really care about.
(type B)s, on the other hand, should be listened to, and their arguments weighed carefully. For the greater good, you know. In fact this seems like a good reason for (type B)s to signal that they themselves do not in any way profit from the violence.
What are your priors? More (type B)s, or more (type A)s?
I don’t see, from a consequentialist standpoint, what is so different between me physically bashing a Wiggin head, pressing a button that activates a machine that bashes a Wiggin head, and manipulating someone into bashing a Wiggin head.
You said it yourself: not being the one who actually commits the violent acts provides some legal protection. Your not ending up in jail is a consequence. (I don’t actually know what a Wiggin head is; I assume “bashing a Wiggin head” is some socially unaccepted form of violence.)
Oh, I fully agree that such laws are problematic and open to abuse, and that it might well be better for no such laws to exist at all. Nonetheless, they do exist, and they should figure as a (possibly very low) cost in the calculation of the expected utility of advocating violence.
Only in a very specific sense of “exist”. Do hallucinations exist? That-which-is-being-hallucinated does not, but the mental phenomenon does exist.
One might in a similar vein interpret the question “do tulpas exist?” as “are there people who can deliberately run additional minds on their wetware and interact with those minds by means of a hallucinatory avatar?”. I would argue that tulpas’ inability to do anything munchkiny is evidence against their existence even in this far weaker sense.