Then gender is identical to identity; you’ve ruined gender’s identity.
Reflecting on LessWrong’s past, I’ve noticed a pattern of article voting that strikes me: questions do not get upvoted on anywhere near the same order as answers do.
Perhaps it would be useful to have a thread where LessWrong could posit topics and upvote the article titles it would be most interested in reading? For example, I am now drafting a post titled “Applying Bayes’ Theorem.” Provided I can write high-quality content under that title, I expect LessWrong would be intensely interested in it, on account of not fully grasping exactly how to do so.
So as a trial run: what topics currently elude your understanding, and what might be the title of a high-quality article addressing them?
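To make the premise of that draft concrete, here is a minimal sketch of applying Bayes’ Theorem to a single piece of evidence; every number in it is hypothetical, chosen only to make the arithmetic visible:

```python
# Minimal sketch of applying Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E).
# All numbers are hypothetical, chosen only for illustration.

prior = 0.01            # P(H): base rate of the hypothesis
p_e_given_h = 0.90      # P(E|H): probability of the evidence if H is true
p_e_given_not_h = 0.05  # P(E|~H): probability of the evidence otherwise

# Law of total probability: overall chance of seeing the evidence.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

posterior = p_e_given_h * prior / p_e
print(f"P(H|E) = {posterior:.3f}")  # ~0.154: strong evidence, but a weak prior
```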
I think the term “vortex” is apt simply because it demonstrates you’re aware it sounds silly, but in a world where intent is more readily apparent, I would just use the standard term: Soul. (Bearing in mind that there are mortal as well as immortal models of the soul. (Although, if the soul does resemble a vortex, then it may well be possible that it keeps spinning in the absence of the initial physical cause. Perhaps some form of “excitation in the quantum soul field” that can only be destroyed by meeting a “particle” (identity/soul, in this case) of the perfect waveform necessary to cancel it out.))
As in my previous comment, if the soul exists, then we will need to discover that as a matter of researching physical preservation/cryonics. Then the debate begins anew about whether or not we’ve discovered all the parts we need to affirm that the simulation is the same thing as the natural physical expression.
Personally, I am more a fan of Eliezer_Yudkowsky’s active continuing process interpretation. I think the identity arises from the process itself, rather than any specific momentary configuration. If I can find no difference between the digital and the physical versions of myself, I won’t be able to assume there are any.
If you can correct your beliefs by thinking up a good argument against them, isn’t that a good thing? I’m unsure why you’re terming it “warning.”
I would, if and only if it could be expressed clearly and in strictly rational terms.
This is telling and frightening. Do you earnestly believe the entirety of half a nation agrees with you?
I’m sorry, but creating subreddits is too trivial a task, and one that would bootstrap this specific advancement, to overlook. The only way to offset this oversight is if the administrators were trying to perform some kind of “test” to see whether the community can work around the problem, but that’s really stretching it. I fault the entire system regardless. I suppose I don’t disagree that it is somewhat uncharitable, but the advancements that have been made aren’t …
Looking over your submission history, I can see what’s happening here. You are advancing and improving, and writing posts about it, and those posts are being received well, but the reception is far from effective. There are any number of psychological tendencies in place that cause you to inaccurately project your own advancements onto your peers. The truth is that Eliezer_Yudkowsky has already embedded a ton of these lessons in the sequences, over and over again. You’re stating them more formally, and circling the deeper ubiquitous causes of specific individual opinions here and there, but you’ve yet to make the post that resonates with the community and starts breaking some of the heavier cognitive barriers whose side-effects you’ve been formalizing.
It’s all well and good: your effort is paying off, and the community is advancing. Some of us are just getting really impatient with how slowly LessWrong refines itself in the immediate presence of so much rationality-optimizing knowledge.
I honestly expected my comments from three years back to go unnoticed for some time. That people still pay attention to these events is surprising. That you took the time to reply was surprising too, and while I recognized your name as the author of one of the recent LessWrong-advancing posts, I didn’t properly think through the full implications until now. As long as you’re paying attention across time, I might as well point out to you that nobody else is. I was going to focus on getting this article bumped tomorrow, but since you are already here now, I might as well simply suggest you start thinking about an article on visiting LessWrong’s past posts.
I had this problem a lot growing up. It’s significantly lessened now, but just the other day I realized I was taking on too many responsibilities because I was stressed about one specific project and was distracting myself from it with other projects, which I also procrastinated on. I suppose I was working myself into a position where I had so many projects that whenever anyone asked me to do any specific one of them, I could point to all the others I had to work on. I was able to get myself out of this hole by working on the project that was the main source of the stress.
Prior to this, growing up, there was only one thought that made me agree to anything: “I want to please everyone.”
What snapped me out of this was one of the most maturing events in my life: I was told that not doing something for someone might disappoint them, but saying I would and then not doing it would be even more disappointing. That is, honesty on the matter is more valuable than the emotional response to your agreeing to do it. I mean, yes, it’s obvious now, but to that kid who wanted to please everyone all the time, it made sense to take on tasks I really had no actual intent of accomplishing. It’s all too easy to convince yourself that making the promise before forming the intent to carry it out will somehow produce that intent. Committing post-promise is a lot more difficult than you might think.
I feel that the policy you state is read once and then ignored, for whatever reason. A reminder on an individual basis seems unlikely to effectively address this issue: there are too many humble users who feel their single vote is irrelevant and would cause undue bias.
I feel that this entire topic is one of critical importance, because a failure to communicate on the part of rationalists is a failure to refine the art of rationality itself. While we want to foster discussion, we don’t want to become a raving mass not worth the effort of interacting with (à la reddit). If we are who we claim to be, that is, if we consider ourselves rationalists and consider the art of rationality worth practicing at all, then I would task any rationalist with participating to the best of their ability in these comments: this is an important discussion we cannot afford to let pass by into obscurity.
I have encountered a severely limited ability in others to accurately understand that, when speaking on behalf of others, you are not speaking your own opinion. I recommend trying to be as explicit as possible in explaining public perception.
As much as I tried to find holes in the analogy, I still felt I ought to upvote your comment, because frankly, it had to be said.
In trying to find those holes, I actually came to agree with your analogy: the story is re-created in the mind/brain by each individual reader, and does not necessarily depend on the format. In the same way, if consciousness has a physical presence that it lacks in a simulation, then we will need to account for and simulate that as well. It may even eventually be possible to design an experiment showing that the raw mechanism of consciousness and its simulation are the same thing. Barring any possibility of simulating perception, we can think of our minds as books to be read by a massive, biologically resembling brain that would retain such a mechanism, allowing the full re-creation of our consciousness in that brain from an initial state of being a simulation that it reads. I have to say, once I’m aware I’m a simulation, I’m not terribly concerned about transferring to different mediums of simulation.
This seems like an attempt to clear up inferential silence, so I’m reluctant to downvote it, but I would still like to voice my disapproval of the reasoning you use to justify your downvoting. I can explain further at your request.
Each of those 7 billion will be at 7e-9 equally; regardless of how small any one value is in comparison to the sum of all of them, each value is equal.
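A minimal sketch of the arithmetic, with an assumed total (the original figure isn’t specified here; it is chosen only so each share comes out to 7e-9):

```python
# Minimal sketch: a quantity split equally among N people.
# Every share is total / N, identical for everyone, no matter how
# small each share looks next to the sum.
N = 7_000_000_000
total = 49.0  # assumed figure, chosen only so each share comes out to 7e-9

shares = [total / N for _ in range(3)]  # any three people's shares
print(shares[0])              # 7e-09
print(len(set(shares)) == 1)  # True: each value is equal
```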
It helps to remind yourself that the silence strongly indicates that everyone is extending the courtesy of allowing him to have wrong opinions. If someone reinforces with, “Damn straight!” it sends a different signal entirely. Oftentimes, the best you can do is politely signal that you’d rather talk about something else, strongly implying they have said something offensive. People tend to pick up on that on some level.
I find your third point of practical advice to be significantly uncharitable to someone of average intelligence. There are people who miss obvious patterns like, “This person gives bad advice,” but I think people of average intellect are already well equipped to notice simple patterns like that.
I don’t believe a coherent set of general advice can be given here. Which specific details and methods of rationality any given “average” person is missing, and which specific cognitive biases they suffer from most severely, will vary too widely to get good coverage with a few short points. My approach would be to work on an individual basis to determine what’s causing the most problems for each person and address that accordingly. This may seem highly inefficient, but remember that success stories are told and retold virally as each new person has experiences that confirm the wisdom:
“That sounds a lot like what I went through. What really helped me was...”
There are far too many average people for me to expect that targeting a single centralized fault will be considerably effective.
What makes you think most LessWrongers have thought about it to the degree that the issue can be considered in the process of being solved? (For whatever needs to be done to “solve” it, whether that is “do nothing different” or not.)
Shouldn’t matter; I don’t assign high weight to amateur probabilities. I believe bokov’s argument is that this threat should be taken seriously purely on the grounds that we take far more theoretical dangers seriously. Do we only take the hypotheticals seriously? If so, this is a serious oversight.
Progress, yes, but I’m not seeing anything quite on the level of the call to action presented here. The argument isn’t that LessWrong isn’t useful, but that it is operating without the recursive return on its investments that would benefit it so much more than the current (slowly advancing) practices.
It is important to realize that the advancements made as of the moment you sign up for cryonics are not the ones you are likely to be subject to. While current practices may well fail to preserve usable identity-related information (regardless of experts’ opinions on whether or not this is happening (if it were known now, we could know it)), advancements and research continue to be made. It is not in your best interests to be preserved as soon as possible, but it is in your best interests to sign up as soon as possible, to ensure eventual preservation. Too often I see people basing their now-decisions on the now-technology, rather than on the now-measured rate of advancement of the technology. The condition of “if I’m going to be dying soon” is simply not likely enough that most of us should be implicitly considering it as a premise.
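As a toy model of the distinction (every parameter below is hypothetical): judge the decision by the projected state of the technology at the time you are likely to need it, not by its state on the day you sign up.

```python
# Toy model (all parameters hypothetical): compare judging cryonics by
# now-technology versus by the now-measured rate of advancement.

current_quality = 0.3      # assumed adequacy of today's preservation (0..1)
annual_improvement = 0.01  # assumed yearly gain, from the measured rate
years_until_needed = 40    # assumed time before preservation is likely needed

# Naive linear projection of preservation quality at the time of need.
projected_quality = min(1.0,
                        current_quality + annual_improvement * years_until_needed)

print(f"judged by now-technology:      {current_quality:.2f}")
print(f"projected at the time of need: {projected_quality:.2f}")
# Signing up now buys you the projected technology, not today's.
```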