What would the use be of thinking slower? Maybe for boring times?
No, though that might be useful for things like long space travel too.
I’m more thinking about the ability to perceive and act on longer timescales effectively. What Robin Hanson calls the Long View. We are not very good at noticing and consciously dealing with processes that are much slower than our attention span. We have to piece these together from episodic memory.
Sorry for the laaaaate reply. Curious whether you are still here.
As promised: The question was originally posed by Scott Adams on Twitter.
Thank you for offering your thoughts on the question with such depth! While I agree with your sentiment, your answer didn’t address the original question. But maybe the answer really is that the question is ill-posed.
(sorry for the late reply—I was switching working machines...)
I agree with this distinction. Thank you for pointing it out. One seems more immediate, the other more long-term.
What do you think about the magnitude of the effect?
New guns, e.g. those newly produced or sold.
Don’t trust any numbers Scott Adams gives. They are just directional. And they include self-perception. So someone who is actually 95th percentile may *feel* like he is just 75th.
Also he talks a lot about creating a stack of multiple skills. And stack doesn’t mean just having the skills but combining them in a productive way. Like robertskmiles: Being a YouTuber and being interested in AI Safety doesn’t automatically make you an AI Safety YouTuber. You have to do some actual work for that. And it doesn’t hurt to e.g. know enough economics to do A/B tests.
Pedantry like this _is_ a way to assert a little bit of independence/disagreement (or, less justifiably, dominance), and to open the concept of disagreement in a way that’s deniable, and start a subtle, unacknowledged negotiation which can be de-escalated easily if either party decides it’s not worth pursuing.
Great point! That is, provided you make this a conscious choice. But if you are not making it consciously, if you are just following a habit of nitpicking (for whatever deeper psychological reasons), then de-escalation will be harder because you don’t know where the conflict comes from.
Ozy, in their sequence on Dialectical Behavioral Therapy
I can’t find it here on LW. Can you point me to it?
Radical Acceptance says, “It’s okay to screw up. …”
I recently attended a meditation retreat organized by the Berlin LW group. Buddhist meditation is a lot about seeing yourself and your needs and actions as they are. Seeing pain as pain. Seeing feelings as feelings and distractions as distractions. In a way the thoroughness of this could be called radical. But it goes beyond acceptance. Acceptance relates to or alters your identity. But Buddhism goes farther: There is nothing to accept. Which part of you is doing the accepting?
Related to https://wiki.lesswrong.com/wiki/Litany_of_Gendlin
I would have liked some links to definitions of terms as they come along, e.g. the colors and meditative levels (the former I could google; the latter, less so).
The general pattern is
Systems in general work poorly or not at all.
Which also has lots of examples but shouldn’t be taken too seriously.
Well, can’t disagree with such an abstract approach. Must be true somewhere.
But I do. The world must look like that if you run a fast strategy. From where I am, with a slow strategy in the upper middle of the range, it looks mostly flat, the ends look far away, and the strategy is mostly to keep it that way.
As usual Scott Alexander explains it much better:
I tend to agree with this view. I think that is also one of the aspects implied (sic) by the implicit and explicit communication post: Maintaining a highly cohesive and committed team may be a higher value (for a military force) than avoiding the risk of loss of life, because in a real war many more lives would otherwise be lost (at least I guess that is the military’s reasoning).
I don’t think fortnightly will work. That’s why I left that out. Adding a tags rule without tags makes no sense either.
Ask a lesswronger.
That’s a bit difficult if there is no place to ask. I like the posts on LW 2.0 but I miss the open discussions.
I think MIT’s new AlterEgo headset still falls into the category “Devices and Gadgets” of When does technological enhancement feel natural and acceptable? But it’s still a pretty nice step forward.
The device, called AlterEgo, can transcribe words that wearers verbalise internally but do not say out loud, using electrodes attached to the skin.
“Our idea was: could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”
An interesting though somewhat bizarre prediction on the difficulty of building AI by Scott Adams in a recent Periscope session of his (paraphrased from memory):
“The perception that building human intelligence seems so difficult results from a perceptual distortion. Namely that human intelligence is something great when in fact we humans do not possess superior rationality. We only think we do. We just bounce around randomly and try to explain that as something awesome after the fact. Building artificial intelligence then is hard because we try to build something that doesn’t exist. On the other hand building e.g. a robot that moves around arbitrarily based on some complex inner mechanism and generates explanations why it does so would be easy and appear very intelligent.”
The thing is that this is a testable approach and prediction. I want to document it here partly because he claims that he has said that for some years now.
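The "easy" half of the prediction can be made concrete with a toy sketch (all names and the explanation list are hypothetical, just to illustrate the idea): an agent that picks its moves at random and only afterwards generates a plausible-sounding rationale, with the explanation chosen independently of the move.

```python
import random

# Hypothetical move set and stock rationales for the toy agent.
MOVES = ["north", "south", "east", "west"]
REASONS = [
    "to explore unfamiliar territory",
    "to avoid a perceived obstacle",
    "to conserve energy on the shorter path",
    "to return toward a remembered landmark",
]

def step(rng: random.Random) -> tuple[str, str]:
    """Pick a random move, then rationalize it after the fact.

    The reason is drawn independently of the move, i.e. it is a pure
    post-hoc explanation with no causal link to the behavior.
    """
    move = rng.choice(MOVES)
    reason = rng.choice(REASONS)
    return move, f"I moved {move} {reason}."

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        _, explanation = step(rng)
        print(explanation)
```

To an outside observer the printed explanations look goal-directed, which is exactly the effect Adams claims: the appearance of intelligence comes from the narration, not the mechanism.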
Does something like Open Threads exist in LW 2.0? When I create one how would anybody get to know about it?