[Question] What Are Your Preferences Regarding The FLI Letter?

There’s a big issue, and there doesn’t seem to be very much voting happening around it, and even less really high quality voting, which might be sad, given the importance and stakes and so on?

I think governance innovations that are inclusive of digital people will be important to getting a Win Condition with technology.

Voting can also help aggregate knowledge. If a lot of people make a best-effort attempt to estimate a number, or a “right way to proceed”, and there is some way to average those attempts and find the “most central answer”, then that answer might be close to the objectively best one. As was written:

When you hear that a classroom gave an average estimate of 871 beans for a jar that contained 850 beans, and that only one individual student did better than the crowd, the astounding notion is not that the crowd can be more accurate than the individual. The astounding notion is that human beings are unbiased estimators of beans in a jar, having no significant directional error on the problem, yet with large variance. It implies that we tend to get the answer wrong but there’s no systematic reason why. It requires that there be lots of errors that vary from individual to individual—and this is reliably true, enough so to keep most individuals from guessing the jar correctly. And yet there are no directional errors that everyone makes, or if there are, they cancel out very precisely in the average case, despite the large individual variations. Which is just plain odd. I find myself somewhat suspicious of the claim, and wonder whether other experiments that found less amazing accuracy were not as popularly reported.

It seems likely to me that there is just some correct answer to the question of whether there should be an AGI moratorium, and, if there should be, what the details should look like to get the best outcome for the world. Yet there is a real lack of agreement on the subject! Maybe it would help to solicit many estimates of the best ways to proceed, and then average them?
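To make the “unbiased but noisy” claim concrete, here is a tiny toy simulation, with entirely made-up numbers rather than data from any real classroom: when each guess is the true count plus independent zero-mean noise, the crowd’s average lands much closer to the truth than most individuals do.

```python
import random

random.seed(0)
TRUE_BEANS = 850  # hypothetical jar; all numbers here are made up

# Each "student" guesses the true count plus independent, zero-mean noise.
guesses = [random.gauss(TRUE_BEANS, 200) for _ in range(40)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_BEANS)
better_than_crowd = sum(abs(g - TRUE_BEANS) < crowd_error for g in guesses)

print(f"crowd estimate: {crowd_estimate:.0f}")
print(f"individuals beating the crowd: {better_than_crowd} / {len(guesses)}")
```

If there were a shared directional bias, the average would faithfully inherit it, which is exactly the part the quoted paragraph is suspicious about.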

Somewhat separately, anarchism might be… a sub-optimal way for people to organize themselves. Maybe not. But probably.

There’s a very common historical pattern, where people form themselves into a government, and then apply locally coherent consent ethics to the generalized problem of legitimating the actions of people in a government that not everyone necessarily agrees with.

From this generalization, the consent of the governed falls out somewhat naturally… now presumably serial killers would prefer not to give “consent” to be put in cages, and thereby avoid prison and so on, but in the general case there are PATTERNS of normative regulation that get general consent, which might be seen to ethically justify certain cases where consent is not offered?

The theory is nice, but the details can be quite hard to hammer out such that a “good enough” voting process is produced!

You might think that pure local consent-based negotiations could generate a fully adequate society, but empirically this does not seem to lead to globally adequate outcomes. Then again, voting often takes a long time and can go wrong in various ways.

And yet public goods can in fact be quite lovely, and also they can be expensive to notice-as-needful and plan-to-create and build and maintain.

By definition (from the very concept of a “public good”… that you can’t deprive someone of the benefits after it has been created) the people who create public goods are game-theoretically unlikely to get paid by “free riders”. Many volunteer public goods providers are unlikely to even recoup their costs, except via donations that have nearly no relation to the details of either the costs or the benefits! This leads to underproduction of public goods by default, which is sad!

This QUESTION is an attempt to solicit ANSWERS in a pre-formatted way.

One benefit of setting up a best-effort, best-possible poll is that it might discover a shared communal average assessment of “what would actually be good for us” via the wisdom of crowds.

Another benefit might be to hearten volunteers who are willing to work on the public good of avoiding the death of literally everyone on Earth, if such mass death is likely without some public-spirited efforts, and if human efforts can avert such tragedy.

Another benefit might be to make “what smart people will vote for” legible to bureaucrats around the world, who might read sites like LessWrong when things get confusing, so they can figure out what a semi-organized herd of epistemology geeks thinks might actually be true about what might be good, so that the readers, some weeks or months later, can successfully pose in ways that make it look like they always believed what smart people seem to be forming a consensus around.

With justifications and motivations out of the way, I shall proceed to methods and logistics and questions of “what should the world do about AGI?”

Attempting To Vote Well On FLI/Moratorium Stuff

One innovation in governance (that the US might not have simply because of an accident of history where Ben Franklin and Condorcet only communicated a tiny bit?) which already exists at a theoretical level, and that I see everyone failing to automatically reach for, is “preference balloting”.

This is a legitimately tricky thing, because the user interface for preference ballots is hard to get right, and good ones are not easily available.

I am hacking together a bad (somewhat inefficient) implementation of an unusually good voting system, and hoping for incremental improvements on this in the future.

Part of the experiment will be to repurpose some of LessWrong’s existing and highly general commenting infrastructure!

I see there as being two proposals “on the floor” in the AGI Policy Zeitgeist right now, and the great thing about preference ballots is that they begin to let someone “average over preference lists” instead of just averaging over a single number, like a jellybean count in a jar.

(There are technically more than two proposals because there is always an implicit “zeroth option” of “the status quo default trajectory that you get from no group coordination at all”… and in this case it might also be reasonable to interpolate between the official options.)

The Basic Options

There does seem to be a “robustness difference” between the different proposals, which is how I arrived at this numerical ordering (from the least robustly coordinated effort to the most):

0StQ) The Status Quo Default Trajectory

Implicitly: Discussion should be TABLED. Also, in general, it is totally OK for private firms (and maybe some military actors in secret) across the planet to be engaged in commercial (or noncommercial) arms races around AGI, if such races are even happening. Such actors will have enough safety or ethics or coherent attempts to definitely prevent literally everyone from dying sprinkled on top of each of their individual efforts. They will do this according to their private estimates of private benefits, to avoid bad PR, to be in accord with their own conscience, and so on. Either that is adequate, or else nothing else could be adequate, or at least the time for any decisive actions or talk of such is not now.

2FLI) The Future of Life Institute’s Letter

They propose a six-month moratorium, with some teeth, but no clear institutional plan for implementing it, and no mention of the positive or negative incentives that any particular implementing institution would use to back the plan. The moratorium would have a mixture of goals, treating fighting misinformation on the internet and preventing literally every human person and biological animal on Earth from literally being killed as basically co-equal.

4EY|) Eliezer Yudkowsky’s Letter

He proposes a comprehensive permanent reboot of the international treaty landscape to prevent the literal killing of literally every human person and biological animal on Earth by a lab-escaped or foolishly-released unaligned AGI. Explicitly: since nukes can’t do this, but unaligned AGI might, the international reboot would need to prioritize AGI proliferation policy over nuclear proliferation policy. If the real stakes in policy area A are just bigger than in policy area B, then B should be subordinated to A, because: duh. Nuclear war would still leave lots of humans alive in lots of places, whereas an unaligned transformative AGI would almost certainly perform a pivotal act of human dis-empowerment and then dismantle the entire planet (humans, animals, and all) at its leisure… for the atoms and energy and tidier planning landscape.

But Wait There’s More (In Between)

Using interpolation, we can identify other options as well, and if one of those options were the best one, then hopefully a well-designed popularity contest could have good effects on global discourse and planning, such as by pointing out that the existing options need more discussion.

1S?F) Between the Status Quo And FLI’s Letter

This would be the thing to vote for if you think that the FLI proposal is too crazy, and too aggressive, and would cost too much to too many people in desperate need of good inventions (like urgent medical inventions, which good AGI might bring very soon, for people who are currently sick or dying?), but that maybe, by talking things out, a more toned-down and reasonable letter could probably be found in a short amount of time.

If you vote here you probably think that the “literally everyone dying” thing is wildly overblown. Also you might think that “we don’t need international treaties”, and you almost certainly don’t think that “this is more important than nuclear policy”.

3F?E) Between FLI’s Letter and Eliezer’s Letter

Vote for this if you think there are probably a lot of things more robust than FLI’s proposals and less robust than Eliezer’s proposals that might buy a large amount of expected survival for small costs, and that more talking could find them.

Like maybe it would be good for FLI’s authors and Eliezer to talk to each other until they are either in Aumann agreement on facts and predictions and have boiled the disagreement down to a value difference (like maybe Eliezer cares about the future of human life more than FLI does?), or until one side writes the other off as not tracking reality, or maybe as not tracking the ever-important tactical consideration of looking very prestigious and trustworthy and relaxed and cool while secretly taking a huge emergency very seriously?

After talking they could emit a combined and mutually endorsed resolution, or at least two new resolutions that use more of the same simple concepts and speak to each other somehow? Or you know, whatever. I’m not them. I don’t know if they have talked it out already. If you know that they HAVE talked it out already, and the two existing letters are in fact the only two coherent policy ideas, then: don’t rank option 3 high!

A lot of people can probably project their preferences into this space, and yet they might not actually ever get to consensus in post-voting discussion. I would ask people who vote for this as their most preferred option to go look at the details of the two proposals, generate their own list of various possible averages or mixtures, and talk in public (maybe in the comment area on this post) about where exactly inside this zone of policy options they think the best policy is, and why.

How To Run The Voting

There are 5 factorial different ways a person could think about five choices. That’s a lot! It is 5*4*3*2*1 = 120 different ways that people could have a preference ordering over these proposals!

Also, I’m pretty sure there is no such thing as a human person who can look at all 120 answers and, in less than five seconds, instantly pick one, and then coherently explain out of working memory why the other 119 options are worse according to some stable set of concerns. You can’t get to your bottom line that fast, using a mere human brain! You have to “think step by step” if you want anything other than rationalizations of your bottom line.

There are too many “mental moving parts” and the working memory of humans is very puny!

However, if a bunch of people do it, maybe the mistakes will cancel out, and the “average” will be pretty darn wise? That is the theory I’d like to test here.
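To be concrete about where the “120” comes from, here is a tiny Python sketch (my own illustration, not anything running on LessWrong) that enumerates all the orderings; listing them in lexicographic order reproduces the exact numbering used in the Answer list at the bottom of this post.

```python
from itertools import permutations

OPTIONS = ["0StQ", "1S?F", "2FLI", "3F?E", "4EY|"]

# All 5! = 120 strict orderings, in lexicographic order; Answer N in the
# list at the bottom of this post corresponds to BALLOTS[N].
BALLOTS = {n: ranking for n, ranking in enumerate(permutations(OPTIONS), start=1)}

print(len(BALLOTS))   # 120
print(BALLOTS[37])    # ('1S?F', '3F?E', '0StQ', '2FLI', '4EY|')
```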

Interestingly… in reading many people’s opinions of “why I am signing FLI’s letter” or “why I am not signing FLI’s letter”, a lot of people seem to be unsure about it for many reasons other than just what the best policy might be.

There is also a sense, maybe, that “the letter is basically right, but it has some wrong details, and maybe I’m never going to get another chance to contribute to a consensus by any OTHER means, so maybe I shouldn’t let ‘perfect’ be the enemy of ‘good’ or shouldn’t let ‘good’ be the enemy of ‘at all’ and so I can sign it… but also if anyone ever sees my name on this I’ll always be able to say that of course some of it was stupid, and some of it was not stupid, but it was the only thing around for an up/down vote”.

The whole thing is very sad. Preference ballots are a way to fix this expressive block!

That is to say, hopefully, in this experiment, people will find it easier to actually figure out what they think the BEST thing looks like, and then the 2ND BEST, and then the 3RD BEST and so on, and each small decision is easier to make because what it means is easier to reason about?

Let’s see how that might work...

How To CHOOSE A Single Preference Ballot Option

I think it might really be EASIER for people to pick their favorite of five options when the other four options serve as very clear contrast objects.

Thinking about “your favorite of five options” is something people can do at a restaurant! The key part becomes finding decisive features and weighing them properly, basically, but now you have more “prompts” and “contrasts” to help you see the features of the choices that are actually in front of you! Do you want something salty (there are several) or sweet (there are several) or both or neither?

Once you’ve figured out the best thing (i.e. your most preferred option)… imagine that the waiter says that there was a supply problem, and that menu item isn’t actually available right now.

It should be easier to pick your second favorite thing from among even fewer options!

So with five options (Nothing, weaker-than-FLI, FLI’s proposal, mixture-of-FLI-and-Eliezer, and Eliezer’s proposal) you only have four decisions to make, in sequence here, and each one is simpler than the last <3

(Then by using preference ballot averaging, the fact that you put something in 4th place over the 5th place option actually contributes in a small but potentially meaningful way to making the 4th thing more likely to come out on top, overall.)
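Here is a rough sketch of one way that counting could work (a Condorcet-style pairwise tally, roughly the flavor of thing the Debian voting software mentioned at the end of this post uses; this is illustrative Python on my part, not a commitment to the exact rule):

```python
# A single ballot, under a Condorcet-style pairwise tally (an assumption on
# my part about how the averaging will work), awards one "point" to every
# option it ranks above every option below it -- including 4th over 5th.
ballot = ("1S?F", "3F?E", "0StQ", "2FLI", "4EY|")  # this is Answer 37

pairwise_points = [(higher, lower)
                   for i, higher in enumerate(ballot)
                   for lower in ballot[i + 1:]]

print(len(pairwise_points))   # 10 ordered pairs from one ballot
print(pairwise_points[-1])    # ('2FLI', '4EY|') -- 4th place still beats 5th
```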

A Worked Example Of CHOOSING A Full Preference Ballot

Maybe you think that there needs to be a lot of conversations still, about things that the FLI letter gets wrong.

Also, maybe you think the FLI letter is a bit too strong (1S?F)...

...but maybe you aren’t totally sure if Eliezer has some good points too (3F?E)?

Surely (you think to yourself) talking about either thing is better than moving forward very fast in a confused and haphazard way?

When has speed ever really mattered when it comes to policy decisions? Probably we can wait.

That amount of reasoning is enough to get you down to these options for your preference ballot (the full list of all 120 options is at the bottom of the essay):

37: 1S?F > 3F?E > 0StQ > 2FLI > 4EY|
38: 1S?F > 3F?E > 0StQ > 4EY| > 2FLI
39: 1S?F > 3F?E > 2FLI > 0StQ > 4EY|
40: 1S?F > 3F?E > 2FLI > 4EY| > 0StQ
41: 1S?F > 3F?E > 4EY| > 0StQ > 2FLI
42: 1S?F > 3F?E > 4EY| > 2FLI > 0StQ

Here’s what some of the remaining options “might mean”:

37: … 0StQ > 2FLI > 4EY| Suggests that laissez-faire will work. Only talking please (that at least might help more than hurt). But if not that, then “that government is best which governs least”.

38: … 0StQ > 4EY| > 2FLI Suggests that maybe AGI in general is all a nothing burger in terms of dangers, but talking about it is fun. If others veto talking or waiting, 38 says that we shouldn’t do some muddled bullshit, and since the only justification for otherwise stupid regulation would be threats to life itself, let’s do Eliezer’s thing over FLI’s.

39: … 2FLI > 0StQ > 4EY| Suggests that we should keep talking, but if everyone else thinks the time for talking is over then FLI has the best option, and Eliezer’s is worse than nothing.

40: … 2FLI > 4EY| > 0StQ Suggests that we should keep talking, but if everyone else thinks the time for talking is over then FLI has the best option, and Eliezer’s is better than nothing.

41: … 4EY| > 0StQ > 2FLI Suggests that we should keep talking, but if everyone else thinks it is time to act instead of talk then Eliezer’s option is the only thing that could save us, and if we don’t get that then FLI’s won’t work, but might pacify people into a false sense of safety, and then we’ll die in our sleep, which would be bad. Better to be awake at the end, knowing we are beyond the reach of God or sane government.

42: … 4EY| > 2FLI > 0StQ Suggests that if the time for talking is over then we should do the thing on the table that is closest to an adequate response to prevent literally everyone from literally dying. (You ask, why then is my top preference to talk mostly about FLI vs Status Quo?? Well… maybe that conversation will have some sort of semantic boomerang effect and cause people to update all the way over to Eliezer’s position. It could happen!)

I just picked 37-42 because it seemed like the least controversial possible set of things to use to illustrate what they mean.

These options are mostly in favor of talking, and for talking about the white bread option at that :-)

Option 42 in particular seems a bit confused (to me at least) because of that “boomerang thing” where you want to talk, but not about your favorite policy choice if talking is done?!?

There are more just below so that if you have an idea of what you think the right total ORDERING over these outcomes should be, you can look up the NUMBER and then find that number among the Answers and upvote it :-)

How To EXPRESS A Single Preference Ballot Option

There are 120 different Answers to this LW Question.

I will use moderator powers to delete any ANSWERS by anyone else. (If someone wants to try something like this voting process again, with better options perhaps, then the thing to do is to generate a suggestion, and then put that suggestion into a preference ballot scheme. Each suggestion like this, however, makes the total number of possible ballots go up combinatorially; a sixth option would mean 720 total orderings. This combinatorial explosion is probably part of why getting groups on the same page is often so hard.)

Feel free to go crazy in the COMMENTS.

Figure out how you want to vote (the best, the 2nd best, and so on), then look up the number, and then scroll down and upvote only and exactly that one answer among all 120 of the possible answers.

Given the currently existing tooling, I cannot keep you from voting for more than one answer, of course, because enforcing a single vote per person is not something built into the LessWrong software, but I like people around here and have moderately high trust in our ability to coordinate to have nice things <3

Here Are The Answers To Vote On

0StQ: Status Quo (table)
1S?F: Talk more to find something less crazy than FLI
2FLI: Sign FLI and do it (implement details, don’t re-plan)
3F?E: Talk more to find the middle between FLI and Eliezer
4EY|: Do Eliezer’s proposal (implement details, don’t re-plan)

01: 0StQ > 1S?F > 2FLI > 3F?E > 4EY|
02: 0StQ > 1S?F > 2FLI > 4EY| > 3F?E
03: 0StQ > 1S?F > 3F?E > 2FLI > 4EY|
04: 0StQ > 1S?F > 3F?E > 4EY| > 2FLI
05: 0StQ > 1S?F > 4EY| > 2FLI > 3F?E
06: 0StQ > 1S?F > 4EY| > 3F?E > 2FLI
07: 0StQ > 2FLI > 1S?F > 3F?E > 4EY|
08: 0StQ > 2FLI > 1S?F > 4EY| > 3F?E
09: 0StQ > 2FLI > 3F?E > 1S?F > 4EY|
10: 0StQ > 2FLI > 3F?E > 4EY| > 1S?F
11: 0StQ > 2FLI > 4EY| > 1S?F > 3F?E
12: 0StQ > 2FLI > 4EY| > 3F?E > 1S?F
13: 0StQ > 3F?E > 1S?F > 2FLI > 4EY|
14: 0StQ > 3F?E > 1S?F > 4EY| > 2FLI
15: 0StQ > 3F?E > 2FLI > 1S?F > 4EY|
16: 0StQ > 3F?E > 2FLI > 4EY| > 1S?F
17: 0StQ > 3F?E > 4EY| > 1S?F > 2FLI
18: 0StQ > 3F?E > 4EY| > 2FLI > 1S?F
19: 0StQ > 4EY| > 1S?F > 2FLI > 3F?E
20: 0StQ > 4EY| > 1S?F > 3F?E > 2FLI
21: 0StQ > 4EY| > 2FLI > 1S?F > 3F?E
22: 0StQ > 4EY| > 2FLI > 3F?E > 1S?F
23: 0StQ > 4EY| > 3F?E > 1S?F > 2FLI
24: 0StQ > 4EY| > 3F?E > 2FLI > 1S?F

0StQ: Status Quo (table)
1S?F: Talk more to find something less crazy than FLI
2FLI: Sign FLI and do it (implement details, don’t re-plan)
3F?E: Talk more to find the middle between FLI and Eliezer
4EY|: Do Eliezer’s proposal (implement details, don’t re-plan)

25: 1S?F > 0StQ > 2FLI > 3F?E > 4EY|
26: 1S?F > 0StQ > 2FLI > 4EY| > 3F?E
27: 1S?F > 0StQ > 3F?E > 2FLI > 4EY|
28: 1S?F > 0StQ > 3F?E > 4EY| > 2FLI
29: 1S?F > 0StQ > 4EY| > 2FLI > 3F?E
30: 1S?F > 0StQ > 4EY| > 3F?E > 2FLI
31: 1S?F > 2FLI > 0StQ > 3F?E > 4EY|
32: 1S?F > 2FLI > 0StQ > 4EY| > 3F?E
33: 1S?F > 2FLI > 3F?E > 0StQ > 4EY|
34: 1S?F > 2FLI > 3F?E > 4EY| > 0StQ
35: 1S?F > 2FLI > 4EY| > 0StQ > 3F?E
36: 1S?F > 2FLI > 4EY| > 3F?E > 0StQ
37: 1S?F > 3F?E > 0StQ > 2FLI > 4EY|
38: 1S?F > 3F?E > 0StQ > 4EY| > 2FLI
39: 1S?F > 3F?E > 2FLI > 0StQ > 4EY|
40: 1S?F > 3F?E > 2FLI > 4EY| > 0StQ
41: 1S?F > 3F?E > 4EY| > 0StQ > 2FLI
42: 1S?F > 3F?E > 4EY| > 2FLI > 0StQ
43: 1S?F > 4EY| > 0StQ > 2FLI > 3F?E
44: 1S?F > 4EY| > 0StQ > 3F?E > 2FLI
45: 1S?F > 4EY| > 2FLI > 0StQ > 3F?E
46: 1S?F > 4EY| > 2FLI > 3F?E > 0StQ
47: 1S?F > 4EY| > 3F?E > 0StQ > 2FLI
48: 1S?F > 4EY| > 3F?E > 2FLI > 0StQ

0StQ: Status Quo (table)
1S?F: Talk more to find something less crazy than FLI
2FLI: Sign FLI and do it (implement details, don’t re-plan)
3F?E: Talk more to find the middle between FLI and Eliezer
4EY|: Do Eliezer’s proposal (implement details, don’t re-plan)

49: 2FLI > 0StQ > 1S?F > 3F?E > 4EY|
50: 2FLI > 0StQ > 1S?F > 4EY| > 3F?E
51: 2FLI > 0StQ > 3F?E > 1S?F > 4EY|
52: 2FLI > 0StQ > 3F?E > 4EY| > 1S?F
53: 2FLI > 0StQ > 4EY| > 1S?F > 3F?E
54: 2FLI > 0StQ > 4EY| > 3F?E > 1S?F
55: 2FLI > 1S?F > 0StQ > 3F?E > 4EY|
56: 2FLI > 1S?F > 0StQ > 4EY| > 3F?E
57: 2FLI > 1S?F > 3F?E > 0StQ > 4EY|
58: 2FLI > 1S?F > 3F?E > 4EY| > 0StQ
59: 2FLI > 1S?F > 4EY| > 0StQ > 3F?E
60: 2FLI > 1S?F > 4EY| > 3F?E > 0StQ
61: 2FLI > 3F?E > 0StQ > 1S?F > 4EY|
62: 2FLI > 3F?E > 0StQ > 4EY| > 1S?F
63: 2FLI > 3F?E > 1S?F > 0StQ > 4EY|
64: 2FLI > 3F?E > 1S?F > 4EY| > 0StQ
65: 2FLI > 3F?E > 4EY| > 0StQ > 1S?F
66: 2FLI > 3F?E > 4EY| > 1S?F > 0StQ
67: 2FLI > 4EY| > 0StQ > 1S?F > 3F?E
68: 2FLI > 4EY| > 0StQ > 3F?E > 1S?F
69: 2FLI > 4EY| > 1S?F > 0StQ > 3F?E
70: 2FLI > 4EY| > 1S?F > 3F?E > 0StQ
71: 2FLI > 4EY| > 3F?E > 0StQ > 1S?F
72: 2FLI > 4EY| > 3F?E > 1S?F > 0StQ

0StQ: Status Quo (table)
1S?F: Talk more to find something less crazy than FLI
2FLI: Sign FLI and do it (implement details, don’t re-plan)
3F?E: Talk more to find the middle between FLI and Eliezer
4EY|: Do Eliezer’s proposal (implement details, don’t re-plan)

73: 3F?E > 0StQ > 1S?F > 2FLI > 4EY|
74: 3F?E > 0StQ > 1S?F > 4EY| > 2FLI
75: 3F?E > 0StQ > 2FLI > 1S?F > 4EY|
76: 3F?E > 0StQ > 2FLI > 4EY| > 1S?F
77: 3F?E > 0StQ > 4EY| > 1S?F > 2FLI
78: 3F?E > 0StQ > 4EY| > 2FLI > 1S?F
79: 3F?E > 1S?F > 0StQ > 2FLI > 4EY|
80: 3F?E > 1S?F > 0StQ > 4EY| > 2FLI
81: 3F?E > 1S?F > 2FLI > 0StQ > 4EY|
82: 3F?E > 1S?F > 2FLI > 4EY| > 0StQ
83: 3F?E > 1S?F > 4EY| > 0StQ > 2FLI
84: 3F?E > 1S?F > 4EY| > 2FLI > 0StQ
85: 3F?E > 2FLI > 0StQ > 1S?F > 4EY|
86: 3F?E > 2FLI > 0StQ > 4EY| > 1S?F
87: 3F?E > 2FLI > 1S?F > 0StQ > 4EY|
88: 3F?E > 2FLI > 1S?F > 4EY| > 0StQ
89: 3F?E > 2FLI > 4EY| > 0StQ > 1S?F
90: 3F?E > 2FLI > 4EY| > 1S?F > 0StQ
91: 3F?E > 4EY| > 0StQ > 1S?F > 2FLI
92: 3F?E > 4EY| > 0StQ > 2FLI > 1S?F
93: 3F?E > 4EY| > 1S?F > 0StQ > 2FLI
94: 3F?E > 4EY| > 1S?F > 2FLI > 0StQ
95: 3F?E > 4EY| > 2FLI > 0StQ > 1S?F
96: 3F?E > 4EY| > 2FLI > 1S?F > 0StQ

0StQ: Status Quo (table)
1S?F: Talk more to find something less crazy than FLI
2FLI: Sign FLI and do it (implement details, don’t re-plan)
3F?E: Talk more to find the middle between FLI and Eliezer
4EY|: Do Eliezer’s proposal (implement details, don’t re-plan)

97: 4EY| > 0StQ > 1S?F > 2FLI > 3F?E
98: 4EY| > 0StQ > 1S?F > 3F?E > 2FLI
99: 4EY| > 0StQ > 2FLI > 1S?F > 3F?E
100: 4EY| > 0StQ > 2FLI > 3F?E > 1S?F
101: 4EY| > 0StQ > 3F?E > 1S?F > 2FLI
102: 4EY| > 0StQ > 3F?E > 2FLI > 1S?F
103: 4EY| > 1S?F > 0StQ > 2FLI > 3F?E
104: 4EY| > 1S?F > 0StQ > 3F?E > 2FLI
105: 4EY| > 1S?F > 2FLI > 0StQ > 3F?E
106: 4EY| > 1S?F > 2FLI > 3F?E > 0StQ
107: 4EY| > 1S?F > 3F?E > 0StQ > 2FLI
108: 4EY| > 1S?F > 3F?E > 2FLI > 0StQ
109: 4EY| > 2FLI > 0StQ > 1S?F > 3F?E
110: 4EY| > 2FLI > 0StQ > 3F?E > 1S?F
111: 4EY| > 2FLI > 1S?F > 0StQ > 3F?E
112: 4EY| > 2FLI > 1S?F > 3F?E > 0StQ
113: 4EY| > 2FLI > 3F?E > 0StQ > 1S?F
114: 4EY| > 2FLI > 3F?E > 1S?F > 0StQ
115: 4EY| > 3F?E > 0StQ > 1S?F > 2FLI
116: 4EY| > 3F?E > 0StQ > 2FLI > 1S?F
117: 4EY| > 3F?E > 1S?F > 0StQ > 2FLI
118: 4EY| > 3F?E > 1S?F > 2FLI > 0StQ
119: 4EY| > 3F?E > 2FLI > 0StQ > 1S?F
120: 4EY| > 3F?E > 2FLI > 1S?F > 0StQ

0StQ: Status Quo (table)
1S?F: Talk more to find something less crazy than FLI
2FLI: Sign FLI and do it (implement details, don’t re-plan)
3F?E: Talk more to find the middle between FLI and Eliezer
4EY|: Do Eliezer’s proposal (implement details, don’t re-plan)

Just find your favorite answer among the 120 (just below) and upvote it.

I will probably process the votes with some of Debian’s old voting software at some point, unless someone beats me to the punch. That will be a followup post.
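In case it helps anyone sanity-check that step, here is a rough sketch of the kind of processing I have in mind: convert each Answer’s upvote count into copies of the corresponding ranked ballot, build the pairwise preference matrix, and compute a Schulze-style (beatpath) winner, which is, as I understand it, the flavor of Condorcet method Debian uses. This is illustrative Python with a hypothetical `upvotes` export, not that software’s actual code:

```python
from itertools import permutations

OPTIONS = ["0StQ", "1S?F", "2FLI", "3F?E", "4EY|"]
BALLOTS = list(permutations(OPTIONS))  # BALLOTS[0] is Answer 01, ..., BALLOTS[119] is Answer 120

def pairwise_tally(upvotes):
    """upvotes: hypothetical {answer_number: upvote_count} export.
    Returns d[x][y] = number of voters ranking x above y."""
    d = {x: {y: 0 for y in OPTIONS if y != x} for x in OPTIONS}
    for answer, count in upvotes.items():
        ranking = BALLOTS[answer - 1]
        for i, x in enumerate(ranking):
            for y in ranking[i + 1:]:
                d[x][y] += count
    return d

def schulze_winners(d):
    """Schulze / beatpath winners, computed Floyd-Warshall style."""
    p = {x: {y: (d[x][y] if d[x][y] > d[y][x] else 0)
             for y in OPTIONS if y != x} for x in OPTIONS}
    for i in OPTIONS:
        for j in OPTIONS:
            if j == i:
                continue
            for k in OPTIONS:
                if k == i or k == j:
                    continue
                p[j][k] = max(p[j][k], min(p[j][i], p[i][k]))
    return [x for x in OPTIONS
            if all(p[x][y] >= p[y][x] for y in OPTIONS if y != x)]

# Hypothetical example: three upvotes on Answer 37 and one on Answer 120.
example = {37: 3, 120: 1}
print(schulze_winners(pairwise_tally(example)))  # ['1S?F']
```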