What do you think about David Brin’s “disputation arenas?”
Maybe we could get a group of scientists to try out some form of disputation arena (Delphi Method for example) and see if they can be more effectively managed that way?
I finished my NaNoWriMo novel, Judge on a Boat (latest revision kept here), last November, and this month I’m going through the process of fixing it up and improving it. I described it on LessWrong yesterday.
Why this project? Well, I’ve been lurking on Less Wrong (and before that, Overcoming Bias) for years, and yet I recently realized that I’ve not been very rational in actual practice. So I decided to write a novel about rationality and moral philosophy, just to make sure that I managed to actually understand the topics well enough to put them in my own words. Hopefully the attempt to explain them to a lay audience will help my own understanding.
I’d like to get some help from others in the LW community, since I suspect the novel is not very well-written, and I need some ideas on how to improve it. Why should anyone help me? Well: two of the best recent works of rationalist fiction that I know of are Alicorn’s Luminosity and EY’s HPMoR. I am nowhere near those levels (for one, their characters are not flat). The only advantage I have is that my novel has (under current law, anyway) a slightly higher chance of being published, unless J. K. Rowling suddenly has an aneurysm and gives the copyright to the public domain, or everyone suddenly listens to rms and starts repealing copyright laws internationally: the novel is original and won’t get sued into oblivion if published.
My goals are… a bit iffy. I imagine publishing this in actual real-world physical book form, because those things are easier to give as gifts and might help raise the sanity waterline (badly needed in my family; at least they read books). But at the current level of quality, I suspect I have about a snowball’s chance of passing unscathed through the sun.
Alternatively: how about an open-source novel? I could release it under CC-BY-SA and try to actively recruit people to help improve it, leveraging the community, but that would probably make it difficult to publish physically, since legally speaking (IANAL) that would require contacting all the copyright owners. Maybe a fiduciary agreement a la FSF-Europe, but I know of no big, trustworthy entity that would act as a fiduciary for fiction.
“Speed is what distinguishes intelligence. No bird discovers how to fly: evolution used a trillion bird-years to ‘discover’ that—where merely hundreds of person-years sufficed.”—Marvin Minsky
“It’s frightening to think that you might not know something, but more frightening to think that, by and large, the world is run by people who have faith that they know exactly what is going on.”—Amos Tversky
I’m not sure about others, but while I initially felt that way (“Thank …. who?”) whenever something like that happened, careful thought-screening and imagining situations (i.e. simulation) helped weed it out. I’d be surprised if I let something like that slip these days, unless it’s really, really nasty.
Although it might be good to be aware that you shouldn’t remove a weapon from your mental arsenal just because it’s labeled “dark arts”. Sure, you should be one heck of a lot more reluctant to use them, but if you need to shut up and do the impossible really really badly, do so—just be aware that the consequences tend to be worse if you use them.
After all, the label “dark art” is itself an application of a Dark Art to persuade, deceive, or otherwise manipulate you against using those techniques. But of course this was not done lightly.
How about an expanded version: if we could be a timeless spaceless perfect observer of the universe(s), what evidence would we expect to see?
I proffer the following quotes rather than an entire article (I think the major problem with post-modernism isn’t irrationality, but verbosity. JUST LOOK AT YOURSELF):
“For the sake of sanity, use ET CETERA: When you say ‘Mary is a good girl!’ be aware that Mary is much more than ‘good’. Mary is ‘good’, nice, kind, et cetera, meaning she also has other characteristics.”—A.E. Van Vogt, World of Null-A
“For the sake of sanity, use QUOTATIONS: For instance ‘conscious’ and ‘unconscious’ mind are useful descriptive terms, but it has yet to be proved that the terms themselves accurately reflect the ‘process’ level of events. They are maps of a territory about which we can possibly never have exact information. Since Null-A training is for the individuals, the important thing is to be conscious of the ‘multiordinal’ (that is, the many-valued) meaning of the words one hears or speaks.”—A.E. Van Vogt, World of Null-A
Stripped to its essentials, every decision in life amounts to choosing which lottery ticket to buy. . . . Most organisms don’t buy lottery tickets, but they all choose between gambles every time their bodies can move in more than one way. They should be willing to ‘pay’ for information—in tissue, energy, and time—if the cost is lower than the expected payoff in food, safety, mating opportunities, and other resources, all ultimately valuated in the expected number of surviving offspring. In multicellular animals the information is gathered and translated into profitable decisions by the nervous system.
Steven Pinker
“Hold off on proposing solutions” is an important technique because the human brain is lazy: once it thinks of one solution, it will not try to look for another.
I’d say that the interface between the “centrifugal phase” and the “centripetal phase” implicitly reduces the explicit need to protect ideation using “hold off on proposing solutions”—sure, you can present the solution you thought about in the “centrifugal phase” immediately, but the solution gets pushed into the meat grinder of whatever “centripetal phase” there is, as it must compete against other solutions. Ideally, none of the solutions presented at the start of the centripetal phase will be designated as the “best” solution (hopefully, given the anonymizing effects of Delphi and the self-consistency pushed on you by writing your ideas in the NGT (nominal group technique)).
Even in brainstorm sessions, “hold off on proposing solutions” is needed only if the initial idea(s) presented are given undue weight compared to later ideas. Delphi causes the initial ideas to be mixed with the others—ideally, your summarizer will be given the expert’s answer sheets in random order, and in the real-time online form that’s the reason why the group qualitative answer is randomized. Ideally in an NGT the facilitator will steer everyone away from overly discussing one idea at the expense of the rest—it is noted there with an IMPORTANT scare tag, after all. For prediction markets, you don’t discuss ideas anyway, so that is not even an issue.
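The anonymizing-and-randomizing step described above can be sketched in a few lines. This is a toy illustration only; the function name and the shape of the data are my own assumptions, not part of any existing Delphi software:

```python
import random

def prepare_round_for_summarizer(responses, rng=None):
    """Anonymize expert responses and hand them to the summarizer in
    random order, so no early or named contribution gets undue weight
    (the anonymizing/randomizing step of a Delphi round)."""
    rng = rng or random.Random()
    anonymized = [text for (_expert, text) in responses]  # drop attribution
    rng.shuffle(anonymized)                               # randomize order
    return anonymized

# Three experts submit ideas; the summarizer sees them unattributed
# and in shuffled order.
round1 = [("Alice", "Fund basic research"),
          ("Bob", "Create an oversight board"),
          ("Carol", "Run a pilot program")]
summary_input = prepare_round_for_summarizer(round1, random.Random(42))
print(summary_input)
```

The point of threading an explicit `rng` through is reproducibility while testing; in a live system you would use a fresh source of randomness per round.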
Because the article about it specifically mentions that this is the failure mode to avoid:
Norman R. F. Maier noted that when a group faces a problem, the natural tendency of its members is to propose possible solutions as they begin to discuss the problem. Consequently, the group interaction focuses on the merits and problems of the proposed solutions, people become emotionally attached to the ones they have suggested, and superior solutions are not suggested. Maier enacted an edict to enhance group problem solving: “Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.”
So “hold off on proposing solutions” is just one possible solution. Deciding to take that solution immediately, without considering other options (such as NGT’s approach) is precisely falling into that same trap.
In short, hold off on proposing the solution of “hold off on proposing solutions”. v(^.^)v
edit:
Consider that under NGT, you are given 10 to 15 minutes to think of solutions before anyone gets to propose any solutions. That strikes me as longer than a typical “hold off”.
LessWrong is one way of implementing groups of rationalists thinking together. One might say that it provides a centripetal phase: the discussion forums. But what centrifugal phase exists that prevents groupthink? Yes, we have “hold off on proposing solutions”—but remember that no current rationalist is perfect, and LW may grow soon (indeed, spreading rationality may require growing LW).
Also remember that people—including LessWrong members—tend to favor status quos, and given a chance, people tend to defend status quos to the death.
At the very least, we need to consider what other systems are available, and specifically de-emphasize the local status quo, since we might not be thinking perfectly rationally about it.
It’s not highly formalized but that makes it a lot more flexible.
The Turing machine is highly formalized and is the most flexible possible computational machine. I get “false dichotomy” signals from this statement.
If you say you want groups of rationalists to solve problems together, which problems are you thinking about? What sort of problems do you want to solve?
insane governments, insane societies, insane individuals, and the singularity, in that rough order of priority.
I’m worried about the bits that are internal to a person, where people just have some common failure modes when trying to solve problems.
*shrugs* Well, seatbelts don’t stop accidents, but they do reduce the side effects of getting into one. While the disputation arenas do not directly prevent such internal failure modes, they help prevent that internal failure mode in a key influential person from spreading to the rest of the group. Yes, hold off on proposing solutions (don’t drink and drive). But also put in some extra railing and padding so that others’ mistakes don’t necessarily drag you into error either (seatbelts).
I don’t think you understand what I mean with the words “highly formalized” in this context. LessWrong also has a bunch of rules. Those rules are, however, made in a way where they don’t constrain the way one can use LessWrong as much as the rules of Delphi constrain its participants.
Okay, what exactly do you mean by “highly formalized”?
Constraints on behavior are not necessarily bad, in much the same way that there are more things in heaven and earth than are dreamt of in our philosophy: constraining things to a subset that can be shown to work can help. So I don’t really see “current LW has more freedom!!” as a significant advantage—because it might have more freedom to err. Of course, the probability of that being true is low—but can we at least try to show that?
After all, the LW code is derived from Reddit’s. Of course, the online system is just part of the overarching system, and the system as a whole (including the current community members) is different (there are more stringent rules for acceptance into the community here than on Reddit), but we might do well to consider that things could be made better.
At the very least, we need to consider what other systems are available, and specifically de-emphasize the local status quo, since we might not be thinking perfectly rationally about it.
No, if you propose an alternative it makes sense to explain how it would improve the status quo. Ignoring the status quo that provides a system that actually works in practice is a bad idea.
I said “de-emphasize”, not ignore. What I mean by “de-emphasize” is, acknowledge its existence, but treat it as an idea you have already thought about, i.e. keep it on hand and don’t forget about it, but don’t keep thinking about it at the expense of other, external ideas. In any case, I thought that it would be unnecessary to have to discuss the local status quo, since I would assume that members already know it.
Should I discuss the current status quo? I am not a regular member, despite reading OB before and LW for years, so I don’t feel qualified to get into its details. I mostly read the sequences and hardly look at discussion. Or even comments on the articles, anyway. So my knowledge of LW’s informal rules is minimal, to say the least. Can you describe the status quo for me?
At the moment there is no working Delphi system that allows rationalists to discuss solutions for handling insane governments. The cases where Delphi was used successfully are cases where it was implemented top-down. Whether the same approach works in an online community is up for discussion. I don’t know of a single case where such a system got enough users to work.
So should we, at this point, completely discard Delphi methods? How about NGT?
I suspect that it’s possible to modify LW’s polls to add some kind of Real-Time Delphi Method, as I mentioned in the article: (1) allow members to change their chosen options, (2) require members to give a short justification for their chosen option, and (3) show members randomized samples of justifications from other members. We could even have a flag that specifies normal forum polls or Delphi-style polls. But if the cost of making this modification is higher than the probability of that kind of Delphi succeeding times the expected utility of that kind of Delphi method in general for the rest of LW’s lifetime, then fine—let’s not do it.
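The three features above are small enough to sketch as a standalone data structure. This is a hypothetical sketch, not LW’s actual poll code; the class and method names are mine:

```python
import random

class RealTimeDelphiPoll:
    """Toy sketch of the proposed poll modification:
    (1) members may change their chosen option at any time,
    (2) every choice must come with a short justification,
    (3) members see a randomized sample of others' justifications."""

    def __init__(self, options, sample_size=3):
        self.options = set(options)
        self.sample_size = sample_size
        self.votes = {}  # member -> (option, justification)

    def vote(self, member, option, justification):
        if option not in self.options:
            raise ValueError("unknown option: %r" % option)
        if not justification.strip():
            raise ValueError("a justification is required")  # feature 2
        # Re-voting simply overwrites the previous choice (feature 1).
        self.votes[member] = (option, justification)

    def sample_justifications(self, member, rng=None):
        """A random sample of other members' justifications (feature 3)."""
        rng = rng or random.Random()
        others = [j for m, (_, j) in self.votes.items() if m != member]
        return rng.sample(others, min(self.sample_size, len(others)))

    def tally(self):
        counts = {o: 0 for o in self.options}
        for option, _ in self.votes.values():
            counts[option] += 1
        return counts
```

The interesting property is that the tally only ever reflects each member’s latest choice, so opinions can converge across rounds as members read sampled justifications and re-vote.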
If you think otherwise, please illustrate how you would tackle the issue you brought forward in your post with Prediction Markets. How to tackle it with Delphi would also be interesting.
I don’t know how to tackle it with Prediction Markets other than by futarchy: first vote on what measurements are to be used, then run a prediction market about whether particular policy decisions will improve or worsen those measurements. Insane governments are more sane if they have less corruption, better bureaucratic efficiency, blah blah—we may need to vote on that. Then we need to propose actual policy decisions and predict whether they will lead to less corruption etc. or not. Unfortunately, I don’t understand enough of futarchy yet to make a proper judgment about it—it’s currently mostly a black box to me. I’m disturbed that futarchy_discuss appears to be defunct—I’m not sure if it’s because prediction markets have turned out to fail badly, or what.
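The prediction-market half of that pipeline is at least easy to sketch. A standard mechanism is Hanson’s logarithmic market scoring rule (LMSR); the policy question below is invented for illustration:

```python
import math

def lmsr_cost(shares, b=100.0):
    """Hanson's LMSR cost function. `shares` maps each outcome to total
    shares sold so far; `b` is the liquidity parameter."""
    return b * math.log(sum(math.exp(q / b) for q in shares.values()))

def lmsr_price(shares, outcome, b=100.0):
    """Current market probability of `outcome` under LMSR."""
    denom = sum(math.exp(q / b) for q in shares.values())
    return math.exp(shares[outcome] / b) / denom

def buy(shares, outcome, amount, b=100.0):
    """Cost to a trader of buying `amount` shares of `outcome`;
    returns (cost, new_shares) without mutating the input."""
    before = lmsr_cost(shares, b)
    shares = dict(shares)
    shares[outcome] += amount
    return lmsr_cost(shares, b) - before, shares

# Market: "Will policy X reduce the corruption index within two years?"
shares = {"yes": 0.0, "no": 0.0}
cost, shares = buy(shares, "yes", 50.0)     # a trader bets on "yes"
print(round(lmsr_price(shares, "yes"), 3))  # prints 0.622
```

The prices always sum to 1 and move toward outcomes traders bet on, which is what lets the market’s current price be read off as the crowd’s probability estimate for the policy’s effect.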
Assuming those same measures can be agreed upon—less corruption, better bureaucratic efficiency—then I suppose a Delphi Method could be run with “What policies should reduce corruption, blah blah? How can we impose those policies from below? What feasible actions can we take to get those policies accepted?” as the questions.
(if you think that my definition of “insane government” isn’t very good, please understand that I live in a shitty little third-world country where the most troubling problems of the government is corruption and inefficiency, not whether or not the government should raise taxes)
I’m also not clear about why we need to find consensus on “insane governments, insane societies, insane individuals, and the singularity”.
Because I think lack of consensus is one reason why our kind can’t cooperate.
Can we at least try to pull together on this one?
The fact that I have an opinion about where the evidence as a whole leads does not prima facie make me impossible to argue with.
So you’re saying that if the evidence goes against you, you are going to stop being a Christian and self-identify as atheist (note that we do not capitalize that word)?
You did write a long post on different systems for discussion and you did ignore it in that post.
I thought it would be unnecessary, as I thought the people here would already know, and it would be repetitive to reiterate what is already known here. I’ll try to see if I can come up with some description of the local status quo, then, and edit the article to include it. I’m a little busy; Christmas is important in this country.
Within your list you didn’t discuss systems that have been shown to work in the real world to solve the kinds of issues that you want to solve.
Huh? These are techniques that have been studied, with papers backing them (at least according to some very basic searches through Google). I have no idea how good those papers are, but maybe you do. Can you show some study specifically showing that Delphi works worse than typical internet forums?
take an online community like Wikipedia as an example.
Again, since LW also has a Wiki, I thought it would be superfluous to add it to the article too. I’ll find time to update it then.
If, however, you want to solve those kinds of problems in your country, then you have to choose. One way would be to get the IWF to promote some Good Government program in your country in a top-down way. The other way involves finding supporters in your own country.
For both strategies I doubt that the LessWrong public is the right audience. Join/found some Liquid Feedback based political party in your country.
Thank you for this information.
One of the most effective calls for support to highly intelligent nerds was probably Julian Assange’s, which among other things involved him telling the audience that they wouldn’t get Christmas presents if they didn’t cooperate. Julian Assange didn’t try to organise some vote to get consensus.
Okay.
*shrug* It’s best practice at a particular time and place, but is it the best practice at all times and places?
I’ll grant that the procedure “tell all participants: ‘hold off on proposing solutions’” is a good procedure in general, but is it the best procedure under all circumstances? How about enforcing the “hold off” part, rather than just saying it to participants? (cf. NGT’s silent idea generation).
On 18 December 2012 09:13:14PM, user “aronwall” replied “yes” to the question “So you’re saying that if the evidence goes against you, you are going to stop being a Christian and self-identify as atheist (note that we do not capitalize that word)?”. This comment is to ensure that user “aronwall” shall not be able to disavow this reply; please ignore it otherwise.
I suppose that works for pre-scientific, pre-rational thinking: back when you couldn’t do a thing about nature, but you could do a thing about that schmuck looking at you funny.
However, now, as humanity’s power grows, we can actually do something about nature: we can learn to predict earthquakes, build structures strong enough against calamity, vaccinate against pestilence, etc etc.
So the bias, I suppose, arises from evolution being too slow for human progress.
Hello Less Wrong,
My first comment ever. I have been lurking on Less Wrong for several years (and on Overcoming Bias before there was even a Less Wrong site), and have been mostly cyber-stalking EY ever since I caught wind of his AI-Box exploits.
This year, on a whim, I joined NaNoWriMo (National Novel Writing Month) in November and started writing a novel I had been idly thinking of writing, “Judge on a Boat”. The setting: humanity manages to grow up a little without blowing itself up, rationality techniques are taught regularly (a certain minimum level of knowledge of these techniques is required of all citizens), practical mind simulations and artificial intelligence are still far off (but being actively worked on, somewhere way, way off in the background of the novel), and experts in morality and ethical systems, called “Judges”, are given the proper respect they deserve.
The premise is that a trainee Judge, Nicole Angel, visiting Earth for her final examinations (she’s from Mars Lagrange Point 1), gets marooned on a lifeboat with a small group of people. She is then forced to act as a full Judge (despite not actually passing the exams yet) for the people in the boat.
The other premise is that a new Judge, Emmanuel Verrens, is reading about Nicole Angel’s adventures in novel form, under the guidance of high-ranking Judge David Adams. Emmanuel’s thinking is remarkably similar to hers, despite her being a fictional character -
The novel was intended to be more about moral philosophy than strictly rationality, but as I was using Less Wrong as an ideas pump, it ended up being more about rationality, really. (^^)v
Anyway, if anyone is interested in the early draft text, see this.