I know it’s a long way, but if you’re eager for LW company it’d be super great to have you guys at our LW Australia Mega-Meetup weekend retreat next month. We’ve already got one person from Auckland considering it. :)
Either way, best of luck growing your communities!
And we’re in action!
http://lesswrong.com/lw/k23/meetup_lw_australia_megameetup/
I have updated towards your position.
The whole thing hinges on how much you trust people when they assure you that you can say potentially upsetting thing X to them. Generally, not very much. I would never trust a sticker or declaration to the extent that I stopped modelling someone’s response; it’s just an update on that model.
It was emphasised that people didn’t have to answer any question, but empathy for those being asked should have been equally emphasised.
On this occasion, askers were very hesitant to ask questions they thought would be too personal, but those being asked invariably responded without any hesitation or unease. Discovering that you could ask personal questions you were curious about with only the positive consequences of closeness and openness was a win.
But this does all include a good deal of judgment. Not an exercise for a group not high in empathy or generally unconcerned about others’ responses, nor for those who are easily pressured.
If ever you want to refer to an elaboration and justification of this position, see R. M. Hare’s two-level utilitarianism, expounded best in this paper: Ethical Theory and Utilitarianism (see pp. 30-36).
To argue in this way is entirely to neglect the importance for moral philosophy of a study of moral education. Let us suppose that a fully informed archangelic act-utilitarian is thinking about how to bring up his children. He will obviously not bring them up to practise on every occasion on which they are confronted with a moral question the kind of archangelic thinking that he himself is capable of [complete consequentialist reasoning]; if they are ordinary children, he knows that they will get it wrong. They will not have the time, or the information, or the self-mastery to avoid self-deception prompted by self-interest; this is the real, as opposed to the imagined, veil of ignorance which determines our moral principles.
So he will do two things. First, he will try to implant in them a set of good general principles. I advisedly use the word ‘implant’; these are not rules of thumb, but principles which they will not be able to break without the greatest repugnance, and whose breach by others will arouse in them the highest indignation. These will be the principles they will use in their ordinary level-1 moral thinking, especially in situations of stress. Secondly, since he is not always going to be with them, and since they will have to educate their children, and indeed continue to educate themselves, he will teach them, as far as they are able, to do the kind of thinking that he has been doing himself. This thinking will have three functions. First of all, it will be used when the good general principles conflict in particular cases. If the principles have been well chosen, this will happen rarely; but it will happen. Secondly, there will be cases (even rarer) in which, though there is no conflict between general principles, there is something highly unusual about the case which prompts the question whether the general principles are really fitted to deal with it. But thirdly, and much the most important, this level-2 thinking will be used to select the general principles to be taught both to this and to succeeding generations. The general principles may change, and should change (because the environment changes). And note that, if the educator were not (as we have supposed him to be) archangelic, we could not even assume that the best level-1 principles were imparted in the first place; perhaps they might be improved.
How will the selection be done? By using level-2 thinking to consider cases, both actual and hypothetical, which crucially illustrate, and help to adjudicate, disputes between rival general principles.
My understanding is that when Hare says rules or principles for level-1 he means it generically and is agnostic about what form they’d take. “Always be kind” is also a rule. For clarity, I’d substitute the word ‘algorithm’ for ‘rules’/‘principles’. Your level-2 algorithm is consequentialism, but then your level-1 algorithm is whatever happens to consequentially work best—be it inviolable deontological rules, character-based virtue ethics, or something else.
Level-1 is about rules which your habits and instincts can follow, but I wouldn’t say habits and instincts are ways to describe it. Here we’re talking about normative rules, not descriptive System 1/System 2 stuff.
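To make the two-level structure concrete, here’s a minimal sketch in Python of level-2 thinking selecting a level-1 algorithm. The candidate rules, situations, and payoffs are all invented for illustration, not anything Hare specifies:

```python
import random

random.seed(0)

# Candidate level-1 algorithms: the simple, implantable rules an agent
# would actually follow day to day. (Invented examples.)
CANDIDATE_RULES = {
    "never lie": lambda situation: "truth",
    "lie to spare feelings": lambda situation: (
        "white lie" if situation == "awkward" else "truth"
    ),
}

def outcome_utility(rule, situation):
    """Stand-in for predicting the consequences of following `rule` here."""
    action = rule(situation)
    # Made-up payoffs: honesty usually pays, but bluntness backfires in
    # awkward situations, so the best level-1 rule depends on the environment.
    if situation == "awkward":
        return {"truth": 0.3, "white lie": 0.8}[action]
    return {"truth": 1.0, "white lie": 0.6}[action]

def level_two_select(candidates, sampled_situations):
    """Level-2 thinking: consequentialist evaluation over (actual and
    hypothetical) cases, used to pick which level-1 rule to implant."""
    def expected_utility(name):
        rule = candidates[name]
        return sum(outcome_utility(rule, s) for s in sampled_situations)
    return max(candidates, key=expected_utility)

situations = random.choices(["ordinary", "awkward"], k=100)
print(level_two_select(CANDIDATE_RULES, situations))
```

If the environment changes (say, awkward situations become rare), re-running the level-2 selection can return a different level-1 rule, which matches Hare’s point that the general principles may, and should, change as the environment does.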
I feel like there’s not much of a distinction being made here between terminal values and terminal goals. I think they’re importantly different things.
A goal I set is a state of the world I am actively trying to bring about, whereas a value is something which . . . has value to me. The things I value dictate which world states I prefer, but whether for lack of resources or because of conflicts between values, I only pursue the world states resulting from a subset of my values.
So not everything I value ends up being a goal. This includes terminal values. For instance, I think it is true that I terminally value being a talented artist—greatly skilled in creative expression—since being so would make me happy in and of itself; but it’s not a goal of mine because I can’t prioritise it with the resources I have. Values like eliminating suffering and misery matter to me more, and get translated into corresponding goals to change the world via action.
I haven’t seen a definition provided, but if I had to provide one for ‘terminal goal’ it would be that it’s a goal whose attainment constitutes fulfilment of a terminal value. Possessing money is rarely a terminal value, and so accruing money isn’t a terminal goal, even when it is intermediate to achieving a world state desired for its own sake. Accomplishing the goal of having all the hungry people fed is the world state which lines up with the value of no suffering, hence it’s terminal. They’re close, but not quite the same thing.
I think it makes sense that one might not work with terminal goals on a motivational/decision-making level, but it doesn’t seem possible (or at least likely) that someone wouldn’t have terminal values, in the sense of having no states of the world which they prefer over others. [These world-state preferences might not be completely stable or consistent, but if you prefer the world be one way rather than another, that’s a value.]
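To pin down the structure of that distinction, here’s a toy sketch in Python; the values, costs, and budget are all made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class TerminalValue:
    """A world state preferred for its own sake."""
    desired_world_state: str
    importance: float  # how much this value matters to me

@dataclass
class Goal:
    """A world state I am actively trying to bring about."""
    target_world_state: str

def adopt_goals(values, resource_budget, cost_of):
    """Only a subset of values survives resource constraints
    and conflicts to become goals; the rest stay values."""
    goals = []
    for value in sorted(values, key=lambda v: v.importance, reverse=True):
        cost = cost_of(value)
        if cost <= resource_budget:
            goals.append(Goal(target_world_state=value.desired_world_state))
            resource_budget -= cost
    return goals

values = [
    TerminalValue("no one suffers from hunger", importance=0.9),
    TerminalValue("I am a talented artist", importance=0.4),
]
# With a tight budget, the artist value stays a terminal value
# but never becomes a goal.
print(adopt_goals(values, resource_budget=1.0, cost_of=lambda v: 0.8))
```

On this picture, a terminal goal is just the Goal produced from a TerminalValue, which is why the two are close but not identical.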
Thanks! Fixed.
You are very kind, good sir.
Do me one more favour—share a thought you have in response to something I wrote. There is still much to be said, but there has been no discussion.
I’m on a bench near the Botanisk Have Butik. The entrance to the park is at the corner of Gothersgade and Øster Voldgade.
No page on meetup.com, I’m afraid.
I’m surprised by this idea of treating SSC as a rationalist hub. I love Scott, Scott’s blog, and Scott’s writing. Still, it doesn’t seem like it is a “rationality blog” to me. Not directly at least. Scott is applying a good deal of epistemic rationality to his topics of interest, but the blog isn’t about epistemic rationality, and even less so about practical rationality. (I would say that Brienne’s and Nate’s ‘self-help’ posts are much closer to that.) By paying attention, one might extract the rationality principles Scott is using, but they’re not outlined.
There’s a separate claim that while Scott’s blog isn’t about rationality in the same way LW is, it has attracted the same audience, and therefore can be a rationality attractor/hub. This has some legitimacy, but I still don’t like it. LW has attracted a lot of people who like to debate interesting topics and ideas on the internet, with a small fraction who are interested in going out and doing things (or just staying in, but actually changing themselves). Scott’s blog, being about ideas, seems to also attract lots of people who simply like mental stimulation, but without a filter for those most interested in doing. I’d really like our rationality community hubs to select for those who want to take rationality seriously and implement it in their minds and actions.
On this count of selecting for (or at least being about) doing, the EA Forum is actually quite good.
Lastly, maybe I feel strong resistance to trying to open Scott’s blog up because it seems like it really is his personal blog about things he wants to write about—and just because he’s really successful and part of the community doesn’t mean we get to tell him now ‘open it up’/‘give it over’/co-opt it for the rest of the community.
I feel that Nate Soares’s post Rest in Motion is relevant here, and, by extension, my own response to that post.
This is a great post. In addition to the main points, your example around Guess-/Ask-/Tell-Cultures was useful for perspective taking in a way that somehow feels like it generalizes beyond the specific example for me.
Good post! I’m excited for your milestone. I can’t tell whether I have enough experience with mindfulness and acceptance to get what you’re pointing at, or whether I’m simply using my closest conceptual bucket, but I believe your experience is real (if not always your interpretation of it).
My sense is that “enlightenment” is a perceptual-emotional shift rather than any change of belief or judgment, and this makes the communication difficult, the same as communicating any other qualia to a person who hasn’t had them. It’s not unlike trying to communicate what a hypothetical novel color looks like to someone who hasn’t seen it.
Of course, if I could see ultraviolet colors (due to some novel CRISPR treatment or something), I could offer a good description of the mechanics producing my unique experience, i.e. “I can see a wavelength you can’t.” In the case of enlightenment, however, we don’t have commonly accepted and understood models like the wavelength of light. If we had such models for qualia too, I think Val could communicate in an understandable way what was going on in his mind, even if the mechanical description cannot convey the actual experience. (I’m reminded of the Mary’s Room thought experiment.)
In the case of Val’s Kensho, I don’t think I’ve ever occupied that mental state, but I’ve experienced enough variations in relevant dimensions of perception, emotion, and relation to reality that I get that he’s gone in a certain direction in a certain coordinate system of sorts. Understanding alone doesn’t put me in the same perceptual-mental state, but I feel like I could follow if I did the right things.
I think the advice to get used to using fake frames is on point as a path towards this, since it’s close to the skill of shifting one’s perceptual-emotional state. Rationalists focus on having a map which matches the territory and are therefore constantly drawing in new lines and editing old ones; Val’s pointing at the skill of reconsidering the ontology of the representation. What if roads, houses, and trees weren’t the basic units of a map? This thought maneuver requires a pulling back from one’s “object level models”, and I see that pulling back generalizing to pulling back entirely from models and being able to see “raw perception-emotion”. At that level, there are mental transformations possible which aren’t about beliefs or judgments. You don’t shift to consider death less bad, but your relationship to it is changed, even if it is still horrific.
“Okay” is such an underqualified word for what I think Val is trying to convey. At least if it’s the same thing I have a sense of.
I don’t think you need to approach meditation as a wager of vast resources for a gain obtained only at the end. My experience is that a modest amount of meditation, properly approached, has offered me substantial benefits. My recommendation is to spend a modest number of hours trying meditation out, and use the information obtained to judge whether or not it is worth further investment.
I have some detailed models of what meditation accomplishes and why, and I hope to write about them eventually. Till then, I’m happy to chat. I’d also recommend The Science of Enlightenment by Shinzen Young; it’s definitely heavy on the grand promises, but he offers more models of what’s going on than most texts.
Hey everyone,
This is a new account for an old user. I’ve got a couple of substantial posts waiting in the wings and wanted to move to an account with a different username from the one I first signed up with years ago (giving up a mere 62 karma).
I’m planning a lengthy review of self-deception used for instrumental ends, and a look into motivators vs. reasons, by which I mean something like: social approval is a motivator for donating, but helping people is the reason.
Those, and I need to post about a Less Wrong Australia Mega-Meetup which has been planned.
So pretty please, could I get the couple of karma points needed to post again?