Also, somebody should probably go ahead and state what is clear from the voting patterns on posts like this, in addition to being implicit in e.g. the About Less Wrong page: this is not really the place for people to present their ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence or futurism per se. This is the successor to Overcoming Bias, not the SL4 mailing list. It’s true that many of us have an interest in AI, just like many of us have an interest in mathematics or physics; and it’s even true that a few of us acquired our interest in Singularity-related issues via our interest in rationality—so there’s nothing inappropriate about these things coming up in discussion here. Nevertheless, the fact remains that posts like this really aren’t, strictly speaking, on-topic for this blog. They should be presented on other forums (presumably with plenty of links to LW for the needed rationality background).
Nevertheless, the fact remains that posts like this really aren’t, strictly speaking, on-topic for this blog.
I realize that it says “a community blog devoted to refining the art of human rationality” at the top of every page here, but it often seems that people here are interested in “a community blog for topics which people who are devoted to refining the art of human rationality are interested in,” which is not really in conflict at all with (what I presume is) LW’s mission of fostering the growth of a rationality community.
The alternative is that LWers who want to discuss “off-topic” issues have to find (and most likely create) a new medium for conversation, which would only serve to splinter the community.
(A good solution might be to divide LW into two sub-sites: Less Wrong, for the purist posts on rationality, and Less Less Wrong, for casual ("off-topic") discussion.)
While there are benefits to that sort of aggressive division, there are also costs. Many conversations move smoothly between different topics, and either they stay on one side of the split (vitiating the entire reason for it), or people yell and scream to get them moved, which is a huge pain in the ass and makes it much harder to have these conversations.
I realize that it says “a community blog devoted to refining the art of human rationality” at the top of every page here, but it often seems that people here are interested in “a community blog for topics which people who are devoted to refining the art of human rationality are interested in,” which is not really in conflict at all with (what I presume is) LW’s mission of fostering the growth of a rationality community.
I’ve seen exactly this pattern before at SF conventions. At the last Eastercon (the largest annual British SF convention) there was some criticism that the programme contained too many items that had nothing to do with SF, however broadly defined. Instead, they were items of interest to (some of) the sort of people who go to the Eastercon.
A certain amount of that sort of thing is OK, but too much of it costs the venue its focus, the reason for it to exist. Given that there are already thriving forums such as agi and sl4, discussing their topics here is out of place unless there is some specific relevance to rationality. As a rule of thumb, I suggest confining off-topic discussions to the Open Threads.
If there’s the demand, LessLessWrong might be useful. Cf. rec.arts.sf.fandom, the newsgroup for discussing anything of interest to the sort of people who participate in rec.arts.sf.fandom, the other rec.arts.sf.* newsgroups being for specific SF-related subjects.
(A good solution might be to divide LW into two sub-sites: Less Wrong, for the purist posts on rationality, and Less Less Wrong, for casual ("off-topic") discussion.)
Better yet, we could call them Overcoming Bias and Less Wrong, respectively.
If you stick around, you will. I have a −15 top-level post in my criminal record, but I still went on to make a constructive contribution, judging by my current karma. :-)
Also, somebody should probably go ahead and state what is clear from the voting patterns on posts like this, in addition to being implicit in e.g. the About Less Wrong page: this is not really the place for people to present their ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence or futurism per se.
What about the strategy of “refining the art of human rationality” by preprocessing our sensory inputs by intelligent machines and postprocessing our motor outputs by intelligent machines? Or doesn’t that count as “refining”?
Point well taken.
I thought it was an interesting thought experiment that relates to That Alien Message, not a "this is how we should do FAI" proposal.
But if I ever get positive karma again, at least now I know the unwritten rules.