I understood that they cut off tissue for research, unless you know of one where they don’t. I also couldn’t find a source for how long they preserve brains. But if there is one that keeps your brain intact (as intact as an oxygen-deprived, transported brain can be) and preserves it for a long time, then that does sound like a reasonable option for people living within donating distance of it.
Very sad. I’m not saying people have the strength to do these things, I’m just saying they are (from a utilitarian perspective) irrational.
I think the marginal version is indeed a good way of dissecting arguments (and I thought I did use that version)
The counterfactual version is a bit more icky. I’m not saying it can never be used, but in this example I feel like if “I” had always had a brain that ran smoothly even at a temperature 50 degrees higher, that wouldn’t really be “me”.
Maybe it’s just a failure of imagination on my part, but in most cases I feel like I’m supposed to speak for a creature that I can’t really speak for.
Meta-Preference Utilitarianism
Imagine a universe full of Robin Hanson lookalikes (all total utilitarians) that desperately want to kickstart the age of em (the repugnant conclusion). The dictator of this universe is a median utilitarian who uses black magic and nano-bots to euthanize all depressed people and sabotage any progress towards the age of em. Do you think that in this case the dictator should ideally change his behavior so as to maximize the meta-preferences of his citizens?
We are talking about a hypothetical vote here, where we could glean people’s underlying preferences. Not what people think they want (people get that wrong all the time) but their actual utility function. This leaves us with three options:
1) You do not actually care about how we aggregate utility; this would result in an ambivalence score of 0.
2) You do have an underlying preference that you just aren’t consciously aware of; this means your underlying preference gets counted.
3) You do care about how we aggregate utility, but aren’t inherently in favor of either average or total. So when we gauge your ambivalence we see that you do care (1, or something high), but you really like both average (e.g. 0.9) and total (e.g. 0.9), with other methods like median and mode getting something low (e.g. 0.1).
In all cases the system works to accommodate your underlying preferences.
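To make the three cases above concrete, here is a minimal sketch (in Python) of how ambivalence-weighted score voting over aggregation methods could work. The function name, the ballot format, and all the numbers are my own illustrative assumptions, not something specified in the original post.

```python
# Hypothetical sketch: ambivalence-weighted score voting over aggregation methods.
# Ballot format and numbers are illustrative assumptions, not from the post.

def aggregate(ballots):
    """Sum each method's scores, weighting each voter by their ambivalence factor.

    ballots: list of (ambivalence, {method: score}) pairs, where ambivalence runs
    from 0 (doesn't care how we aggregate) to 1 (cares fully), and each score
    is in [0, 1].
    """
    totals = {}
    for ambivalence, scores in ballots:
        for method, score in scores.items():
            totals[method] = totals.get(method, 0.0) + ambivalence * score
    return totals

ballots = [
    (0.0, {"total": 1.0, "average": 0.0}),                 # case 1: ambivalent, contributes nothing
    (1.0, {"total": 0.2, "average": 0.8}),                  # case 2: underlying preference gets counted
    (1.0, {"total": 0.9, "average": 0.9, "median": 0.1}),   # case 3: likes both average and total
]
print(aggregate(ballots))
```

Note how the case-1 voter drops out entirely (their scores are multiplied by 0), while the case-3 voter boosts both average and total, which is exactly the behavior described above.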
[Edit: the following example is bad. I might rewrite my thoughts about meta-preferentialism in the future, in which case I will write a better example and link to it here]
I did answer that question (albeit indirectly) but let me make it explicit.
Because of score voting, the issue between total and average aggregation is indeed dissolved (even with a fixed population).
As for the second problem: score voting will also solve this the vast majority of the time, but let’s look at a (very) rare case where it would actually end in a tie:
Alice and Bob want: Total (0.25), Average (1), Median (0)
Cindy and Dan want: Total (0.25), Average (0), Median (1)
And Elizabeth wants: Total (1), Average (0), Median (0)
So the final score is: Total (2), Average (2), Median (2)
(Note that for convenience I assume the ambivalence factor is already factored into these scores.)
In this case only one person is completely in favor of total, with the others lukewarm on it, but there is a very strong split over the average-vs-median question. (Yes, this is a very bizarre scenario.)
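The tie above is easy to verify mechanically. This small Python check just sums the scores from the example (assuming, as the note says, that the ambivalence factor is already folded in):

```python
# Verifying the tie from the example above; scores taken directly from the post,
# with the ambivalence factor assumed already folded in.

votes = {
    "Alice":     {"total": 0.25, "average": 1, "median": 0},
    "Bob":       {"total": 0.25, "average": 1, "median": 0},
    "Cindy":     {"total": 0.25, "average": 0, "median": 1},
    "Dan":       {"total": 0.25, "average": 0, "median": 1},
    "Elizabeth": {"total": 1,    "average": 0, "median": 0},
}

methods = ("total", "average", "median")
totals = {m: sum(v[m] for v in votes.values()) for m in methods}
print(totals)  # every method scores 2, so the vote is a three-way tie
```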
Now numerically these all have the same preference, so the next question becomes: what do we pursue? This could be solved with a score vote too: How strong is your preference for:
(1) Picking one strategy at random
(2) Pursuing each strategy a third of the time
(3) Picking the method that the fewest people gave a zero
(4) Pursuing only the methods that more than one person gave a 1, proportionally
…etc, etc...
But what if, due to some unbelievable cosmic coincidence, that next vote also ends in a tie?
Well, you go up one more level until either the ambivalence takes over (I doubt I would care after 5 levels of meta) or until there is a tie-breaker. Although it is technically possible to have a tie across an infinite number of meta-levels, in reality this will never happen.
And yes you go as many levels of meta as needed to solve the problem. I only call it ‘meta-preference utilitarianism’ because ‘gauging-a-potentially-infinite-amount-of-meta-preferences utilitarianism’ isn’t quite as catchy.
Thank you very much for this comment, it explained my thoughts better than I could have ever written.
Yes, I think moral realism is false and didn’t realize that was not a mainstream position in the EA community. I had trouble accepting it myself for the longest time and I was incredibly frustrated that all evidence seemed to point away from moral realism. Eventually I realized that freedom could only exist in the arbitrary and that a clockwork moral code would mean a clockwork life.
I’m only a first-year student so I’ll be very interested in seeing what a professional (like yourself) could extrapolate from this idea. The rough draft you showed me is already very promising and I hope you get around to eventually making a post about it.
I mentioned the utilitarian voting method, also known as score voting. This is the most accurate way to gauge people’s preferences (especially if the degree of nuance is unbounded, e.g. 0.827938222...) if you don’t have to deal with people voting strategically (which would be the case if we were just checking people’s utility functions).
EDIT: Or maybe not? I’m not an expert on social choice theory, but I’m not entirely confident that Bayesian regret is the best metric anymore. So if a social choice theorist thinks I made a mistake, please let me know.
You’re right that the system of ‘do what you want’ is an all-encompassing system. But it also leaves a lot of things underspecified (basically everything), which was (in my opinion) the more important insight.
This is talking about the underlying preferences, not the surface level preferences. It’s an abstract moral system where we try to optimize people’s utility function, not a concrete political one where we ask people what they want.
Making a Crowdaction platform
Yeah I’m worried about the politics too. I’m afraid that if the site gets used by one political ideology the whole site will get branded as a pro-that-ideology website. We could use CollAction, but even the people here may find it ‘icky’ to use such a ‘green-party’ site. Something something mindkiller...
But as to your first concern, about people wanting to coordinate a move to a different equilibrium but not finding people who also want to move to that particular equilibrium: I think that’s why you need voting in there. Voting is a powerful tool to make people come to an agreement quickly, and STAR voting is great because it’s extremely hard to vote strategically. Without the voting element you’ll just have a bunch of people agreeing that the current situation is really bad, but not agreeing on where they should move to.
The jump is just a metaphor; some switches really are costly, but even if a switch isn’t, people aren’t going to be motivated to do an un-costly action if they think it’ll be a useless waste of time anyway.
Well the basic stuff: The site allows you to upload your own projects (with their approval) and allows you to join other projects. The projects have clear deadlines and goals and once they are reached they are closed forever. The site is pretty. But I didn’t mention that you can also see how many people have already joined in (I’ll edit that last part in).
In the first example they are talking about smoking cannabis (illegal), in the second they are talking about smoking (not inherently illegal, since you can smoke tobacco). I’ll make an edit to make it clearer.
That would work for people who already know each other, but the whole point is that you can coordinate huge swaths of strangers. I think everyone faces these kinds of coordination problems but simply doesn’t have the mental or literal tools to recognize/solve them. I don’t think that if we built this site people would immediately flock to it to solve the big problems like the ones in Meditations on Moloch and Inadequate Equilibria, but administrative uncluttering and political action are already viable with CollAction, so simply improving upon that could be enough to get people interested.
Plus it might go quicker than you think thanks to social signalling. If people use this site to signal that they would be totally willing to do such and such noble cause, it could quickly spread the word around. That’s mainly what a site like this needs: lots of users. The more there are, the better it works.
In my post on Crowdaction I laid out a Crowdaction site which also uses money. In retrospect I should’ve talked about DAC in my original post, but I didn’t because 1: I failed to predict that someone would reply to my post saying we should replace Crowdaction with DAC, and 2: I didn’t think I should be rehashing too much known information and instead wanted to explore my own new ideas.
This post is just there to 1: point out that this idea of Eapache’s already exists and point to the existing literature on it, and 2: say that we can’t replace DAC with Crowdaction because they play to different markets. I didn’t want to repeat everything I already said about Crowdaction in my post from three days ago, which is why this post is significantly shorter than my original post.
A Problem With Patternism
That’s a very ironic statement for someone named “Pattern”
Fixed