LessWrong Team
Ruby
LW Australia Weekend Retreat
Meetup : LW Australia Mega-Meetup
I know it’s a long way, but if you’re eager for LW company it’d be super great to have you guys at our LW Australia Mega-Meetup weekend retreat next month. We’ve already got one person from Auckland considering it. :)
Either way, best of luck growing your communities!
And we’re in action!
http://lesswrong.com/lw/k23/meetup_lw_australia_megameetup/
Meetup : Melbourne June Rationality Dojo: Memory
Australian Mega-Meetup 2014 Retrospective
I have updated towards your position.
The whole thing hinges on how much you trust people when they assure you that you can say potentially upsetting thing X to them. Generally, not very much. I would never trust a sticker or declaration to the extent that I wouldn’t model someone’s response; it’s just an update on that model.
It was emphasised that people didn’t have to answer any question, but empathy for those being asked should have been emphasised just as strongly.
On this occasion, askers were very hesitant to ask questions they thought would be too personal, but those being asked invariably responded without any hesitation or unease. Discovering that you could ask personal questions you were curious about with only the positive consequences of closeness and openness was a win.
But this does all include a good deal of judgment. Not an exercise for a group not high in empathy or generally unconcerned about others’ responses, nor for those who are easily pressured.
Meetup : July Rationality Dojo: Disagreement
If ever you want to refer to an elaboration and justification of this position, see R. M. Hare’s two-level utilitarianism, expounded best in this paper: Ethical Theory and Utilitarianism (see pp. 30–36).
To argue in this way is entirely to neglect the importance for moral philosophy of a study of moral education. Let us suppose that a fully informed archangelic act-utilitarian is thinking about how to bring up his children. He will obviously not bring them up to practise on every occasion on which they are confronted with a moral question the kind of archangelic thinking that he himself is capable of [complete consequentialist reasoning]; if they are ordinary children, he knows that they will get it wrong. They will not have the time, or the information, or the self-mastery to avoid self-deception prompted by self-interest; this is the real, as opposed to the imagined, veil of ignorance which determines our moral principles.
So he will do two things. First, he will try to implant in them a set of good general principles. I advisedly use the word ‘implant’; these are not rules of thumb, but principles which they will not be able to break without the greatest repugnance, and whose breach by others will arouse in them the highest indignation. These will be the principles they will use in their ordinary level-1 moral thinking, especially in situations of stress. Secondly, since he is not always going to be with them, and since they will have to educate their children, and indeed continue to educate themselves, he will teach them, as far as they are able, to do the kind of thinking that he has been doing himself. This thinking will have three functions. First of all, it will be used when the good general principles conflict in particular cases. If the principles have been well chosen, this will happen rarely; but it will happen. Secondly, there will be cases (even rarer) in which, though there is no conflict between general principles, there is something highly unusual about the case which prompts the question whether the general principles are really fitted to deal with it. But thirdly, and much the most important, this level-2 thinking will be used to select the general principles to be taught both to this and to succeeding generations. The general principles may change, and should change (because the environment changes). And note that, if the educator were not (as we have supposed him to be) archangelic, we could not even assume that the best level-1 principles were imparted in the first place; perhaps they might be improved.
How will the selection be done? By using level-2 thinking to consider cases, both actual and hypothetical, which crucially illustrate, and help to adjudicate, disputes between rival general principles.
My understanding is that when Hare says rules or principles for level-1 he means it generically and is agnostic about what form they’d take. “Always be kind” is also a rule. For clarity, I’d substitute the word ‘algorithm’ for ‘rules’/‘principles’. Your level-2 algorithm is consequentialism, but then your level-1 algorithm is whatever happens to consequentially work best—be it inviolable deontological rules, character-based virtue ethics, or something else.
Level-1 is about rules which your habit and instinct can follow, but I wouldn’t say they’re ways to describe it. Here we’re talking about normative rules, not descriptive System 1/System 2 stuff.
I feel like there’s not much of a distinction being made here between terminal values and terminal goals. I think they’re importantly different things.
A goal I set is a state of the world I am actively trying to bring about, whereas a value is something which . . . has value to me. The things I value dictate which world states I prefer, but for either lack of resources or conflict, I only pursue the world states resulting from a subset of my values.
So not everything I value ends up being a goal, and that includes things I value terminally. For instance, I think it is true that I terminally value being a talented artist, greatly skilled in creative expression; being so would make me happy in and of itself, but it’s not a goal of mine because I can’t prioritise it with the resources I have. Values like eliminating suffering and misery are ones which matter to me more, and they get translated into corresponding goals to change the world via action.
I haven’t seen a definition provided, but if I had to provide one for ‘terminal goal’ it would be that it’s a goal whose attainment constitutes fulfilment of a terminal value. Possessing money is rarely a terminal value, and so accruing money isn’t a terminal goal, even if it is intermediary to achieving a world state desired for its own sake. Accomplishing the goal of having all the hungry people fed is the world state which lines up with the value of no suffering, hence it’s terminal. They’re close, but not quite the same thing.
I think it makes sense that one might not work with terminal goals on a motivational/decision-making level, but it doesn’t seem possible (or at least likely) that someone wouldn’t have terminal values, in the sense of not having states of the world which they prefer over others. [These world-state preferences might not be completely stable or consistent, but if you prefer the world be one way rather than another, that’s a value.]
Motivators: Altruistic Actions for Non-Altruistic Reasons
Thanks! Fixed.
You are very kind, good sir.
Do me one more favour—share a thought you have in response to something I wrote. There is still much to be said, but there has been no discussion.
Hey everyone,
This is a new account for an old user. I’ve got a couple of substantial posts waiting in the wings and wanted to move to an account with a different username from the one I first signed up with years ago. (Giving up on a mere 62 karma.)
I’m planning a lengthy review of self-deception used for instrumental ends and a look into motivators vs. reasons, by which I mean something like: social approval is a motivator for donating, but helping people is the reason.
Those, and I need to post about a Less Wrong Australia Mega-Meetup which has been planned.
So pretty please, could I get the couple of karma points needed to post again?