I first began to separate the concept of truth-seeking from specific arguments of fact late in life, as a teenage Catholic who was given a copy of The Case Against God.
outlawpoet
I favor a lot of posting and commenting, at least initially. It’s not clear to me what kinds of ideas and communication are going to be promoted by this community, and I think a wide variety of possible things for readers/commenters/contributors to latch onto provides the most possibility of something interesting coming out of this.
As other commenters have said, I imagine people will lose enthusiasm or run out of ideas eventually anyway, and we’ll settle into a steadier state of posts/comments.
Having differing updating speeds for different pages is a good idea.
Why not just vote the topic up, and comment what you like? The score on the topic or comment will be high, even if there aren’t a lot of people saying “you rock” in the comments.
Isn’t that the same signal?
so, lie?
I always thought the Ixian and Tleilaxu (who, it should be noted, can clone unlimited copies of the most powerful mentats they could find samples of) would have done much better in a fair Dune universe.
One thing I’ve never seen in these threads about rationalist literature is RPG handbooks. The 2nd Edition Dungeon Master’s Guide had an enormous influence on me, because it suggested that the world ran on understandable, deterministic rules, which could be applied both to explicate dramatic situations, and to predict the outcome of situations not yet seen.
One of the first things I ever did (I lacked friends to play D&D with) was to assign stats to fictional characters and make pre-existing stories I felt were unsatisfying play out in a more “realistic” manner. A better word would be internally consistent. But I felt very strongly after that point that it was logical to expect that, 9 times out of 10, the entity with the most advantages would come out on top, contrary to the manner of stories, although the dice-rolling kept total predestination at bay.
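That intuition is easy to check with a toy simulation (every number here is invented for illustration, not taken from any actual rulebook): two characters contest an outcome on a d20 plus a stat modifier, and the better-statted one wins most of the time without ever being guaranteed to.

```python
import random

# Toy opposed check: each side rolls a d20 and adds a stat modifier.
# The modifiers below are made up; +5 vs +0 is a sizable advantage.
def contest(mod_a, mod_b, rng):
    roll_a = rng.randint(1, 20) + mod_a
    roll_b = rng.randint(1, 20) + mod_b
    return roll_a > roll_b  # ties go to B, the defender

rng = random.Random(0)
trials = 10000
wins = sum(contest(mod_a=5, mod_b=0, rng=rng) for _ in range(trials))
# Exact probability works out to 0.70: advantaged, but not predestined.
print(wins / trials)
```

The dice are doing exactly the job described above: the stronger character comes out on top roughly 7 times in 10, not every time.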
Is it possible to do some processing of posts and comments to automagically add links to the wiki for technical terms (possibly any word or phrase with its own page)?
I’m thinking of the annoying ad-word javascript that some sites do. I’ve always thought it would be useful to do that linking without the author needing to (but possibly being able to override it), but most wikis require you to make links manually because of ambiguity. Given the specialist nature of this wiki, shouldn’t that be less of a problem?
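For what it’s worth, a crude sketch of the idea (the glossary, URLs, and function name are all made up, and a real implementation would also need to skip text inside existing links and code blocks):

```python
import re

def autolink(text, glossary):
    # Build one alternation, longest terms first, so that
    # "prior probability" wins over "prior" when both could match.
    terms = sorted(glossary, key=len, reverse=True)
    pattern = re.compile(r'\b(%s)\b' % '|'.join(map(re.escape, terms)),
                         re.IGNORECASE)
    # Replace each matched term with a link to its wiki page.
    return pattern.sub(
        lambda m: '<a href="%s">%s</a>' % (glossary[m.group(0).lower()],
                                           m.group(0)),
        text)

glossary = {
    "prior probability": "/wiki/Prior_probability",
    "prior": "/wiki/Prior",
}
print(autolink("Update your prior probability.", glossary))
# -> Update your <a href="/wiki/Prior_probability">prior probability</a>.
```

Matching longest-first handles the nesting ambiguity mechanically; the remaining ambiguity (whether the author meant the technical sense at all) is the part that would rely on the wiki being specialist.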
This raises the question of what positive attributes we can attempt to apply to this little sub-culture of aspiring rationalists. Shared goals? Collaborative action?
Some have already been implying heavily that rationality implies certain actions in the situation most of us find ourselves in; does it make sense to move forward with that?
Is success here just enabling the growth of strong rationalist individuals, who go forth and succeed in whatever they choose to do, or to shape a community, valuing rationality, which accomplishes things?
Would that mean that on default settings, a post or comment would be invisible until someone voted for it? Should I set my filters for −1?
One of my previous co-workers ran a San Diego chapter. He enjoyed it a great deal, but that may have been because he was in charge, and shaping the meetings and context towards what he was interested in.
Lots and lots of fairly loose speculation on topics outside their specialties, lots of puzzles and mind-games. It wasn’t really very fun for me, although the gender ratio was better than I expected.
Playa del Rey, by the beach just south of Santa Monica and West of LA proper.
I agree with this comment vociferously.
The upper bound isn’t a terrible idea, but it would, for example, knock E.T. Jaynes out of the running as a desirable rationality instructor, as the only unrelated competent activity I can find for him is the Jaynes-Cummings model of atomic evolution, which I have absolutely zero knowledge of.
Not something I was aware of, but good to know.
I wasn’t aware of anything from before his career as an academic, 1982-onward. His Wikipedia article doesn’t mention anything but the atom thing. But he certainly set out to be a Professor of rationality-topics.
Handle: outlawpoet
Name: Justin Corwin
Location: Playa del Rey California
Age: 27
Gender: Male
Education: autodidact
Job: researcher/developer for Adaptive AI, internal title: AI Psychologist
Working in AI, cognitive science and decision theory are of professional interest to me. This community is interesting to me mostly out of bafflement. It’s not clear to me exactly what the Point of it is.
I can understand the desire for a place to talk about such things, and a gathering point for folks with similar opinions about them, but the directionality implied in the effort taken to make Less Wrong what it is escapes me. Social mechanisms like karma help weed out socially miscued or incompatible communications, but they aren’t well suited for settling questions of fact. The culture may be fact-based, but this certainly isn’t an academic or scientific community; its mechanisms have nothing to do with data management, experiment, or documentation.
The community isn’t going to make any money (unless it changes) and is unlikely to do more than give budding rationalists social feedback (mostly from other budding rationalists). It potentially is a distribution mechanism for rationalist essays from pre-existing experts, but Overcoming Bias is already that.
It’s interesting content, no doubt. But that just makes me more curious about goals. The founders and participants in Less Wrong don’t strike me as likely to have invested so much time and effort, and such specific effort getting it to be the way it is, unless there were some long-term payoff. I suppose I’m following along at this point, hoping to figure that out.
It’s fairly straightforward to max out your subjective happiness with drugs today, why wait?
Well, that’s an interesting question. If you wanted to just feel maximum happiness in something like your own mind, you could take the strongest dopamine and norepinephrine reuptake inhibitors you could find.
If you didn’t care about your current state, you could get creative: opioids to get everything else out of the way, psychostimulants, deliriants. I would need to think about it; I don’t think anyone has ever really worked out all the interactions. It would be easy to achieve an extremely high bliss, but some interaction work would be required to figure out something like a theoretical maximum.
The primary thing in the way is the fact that even if you could find a way to prevent physical dependency, the subject would be hopelessly psychologically addicted, unable to function afterwards. You’d need to stably keep them there for the rest of their life expectancy, you couldn’t expect them to take any actions or move in and out of it.
Depending on the implementation, I would expect wireheading to be much the same. Low levels of stimulation could potentially be controlled, but using it to get maximum pleasure would permanently destroy the person. Our architecture isn’t built for it.
It depends on what you mean by wrecking. Morphine, for example, is pretty safe. You can take it in useful, increasing amounts for a long time. You just can’t ever stop using it after a certain point, or your brain will collapse on itself.
This might be a consequence of the bluntness of our chemical instruments, but I don’t think so. We now have much more complicated drugs that blunt and control physical withdrawal and dependence, like Subutex and so forth, but the recidivism and addiction numbers are still bad. Directly messing with your reward mechanisms just doesn’t leave you a functioning brain afterward, and I doubt wireheading of any sophistication will either.
I have attempted explicit EU calculations in the past, and have had to make very troubling assumptions and unit approximations, which have limited my further experimentation.
I would be very interested in seeing concrete examples and calculation rules in plausible situations.
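A minimal sketch of the sort of explicit calculation I mean (every probability and utility below is an invented assumption, and the unit problem is exactly the troubling part):

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * u for p, u in outcomes)

# Two hypothetical choices, in made-up "utility units":
safe = [(1.0, 50.0)]                     # a sure thing
gamble = [(0.6, 100.0), (0.4, -20.0)]    # a risky alternative

# 0.6 * 100 + 0.4 * (-20) = 52, so the gamble edges out the sure thing,
# assuming the utilities really sit on one commensurable scale.
print(expected_utility(safe), expected_utility(gamble))
```

The arithmetic is trivial; what I’d want from concrete examples is the calculation rules for getting defensible numbers into those pairs in the first place.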
In a manner which matches the fortuity, if not the consequence, of Archimedes’ bath and Newton’s apple, the [3.6 million year old] fossil footprints were eventually noticed one evening in September 1976 by the paleontologist Andrew Hill, who fell while avoiding a ball of elephant dung hurled at him by the ecologist David Western.
~John Reader, Missing Links: The Hunt for Earliest Man
A way to see the number of comments a particular post has would be useful.