I agree with this distinction. Thank you for pointing it out. One seems more immediate, the other more long-term.
What do you think about the magnitude of the effect?
New guns, e.g. those newly produced or sold.
Don’t trust any numbers Scott Adams gives. They are just directional. And they include self-perception. So someone who is actually 95th percentile may *feel* like he is just 75th.
Also he talks a lot about creating a stack of multiple skills. And stack doesn’t mean just having the skills but combining them in a productive way. Like robertskmiles: Being a YouTuber and being interested in AI Safety doesn’t automatically make you an AI Safety YouTuber. You have to do some actual work for that. And it doesn’t hurt to e.g. know enough economics to do A/B tests.
Pedantry like this _is_ a way to assert a little bit of independence/disagreement (or, less justifiably, dominance), to raise the possibility of disagreement in a deniable way, and to start a subtle, unacknowledged negotiation which can be de-escalated easily if either party decides it’s not worth pursuing.
Great point! That is, provided you make this a conscious choice. But if you are not making it consciously, if you are just following a habit of nitpicking (for whatever deeper psychological reasons), then de-escalation will be harder because you don’t know where the conflict comes from.
Ozy, in their sequence on Dialectical Behavioral Therapy
I can’t find it here on LW. Can you point me to it?
Radical Acceptance says, “It’s okay to screw up. …”
I recently attended a meditation retreat organized by the Berlin LW group. Buddhist meditation is a lot about seeing yourself and your needs and actions as they are. Seeing pain as pain. Seeing feelings as feelings and distractions as distractions. In a way the thoroughness of this could be called radical. But it goes beyond acceptance. Acceptance relates to or alters your identity. But Buddhism goes further: there is nothing to accept. Which part of you is doing the accepting?
Related to https://wiki.lesswrong.com/wiki/Litany_of_Gendlin
I would have liked some links to definitions of terms as they come along, e.g. the colors and meditative levels (the former I could google, the latter less so).
The general pattern is
Systems in general work poorly or not at all.
Which also has lots of examples but shouldn’t be taken too seriously.
Well, can’t disagree with such an abstract approach. Must be true somewhere.
But I do. The world must look like that if you run a fast strategy. From where I am, with a slow strategy in the upper middle of the range, it looks mostly flat, the ends look far away, and the strategy is mostly to keep it that way.
As usual Scott Alexander explains it much better:
I tend to agree with this view. I think that is also one of the aspects implied (sic) by the implicit and explicit communication post: maintaining a highly cohesive and committed team may be a higher value (for a military force) than avoiding the risk of loss of life, because in a real war many more lives will be lost (at least that is the reasoning of the military, I guess).
I don’t think fortnightly will work. That’s why I left that out. Adding a tags rule without tags makes no sense either.
Ask a lesswronger.
That’s a bit difficult if there is no place to ask. I like the posts on LW 2.0 but I miss the open discussions.
I think MIT’s new AlterEgo headset still falls into the category “Devices and Gadgets” of When does technological enhancement feel natural and acceptable? But it’s still a pretty nice step forward.
The device, called AlterEgo, can transcribe words that wearers verbalise internally but do not say out loud, using electrodes attached to the skin.
“Our idea was: could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”
An interesting though somewhat bizarre prediction on the difficulty of building AI by Scott Adams in a recent Periscope session of his (paraphrased from memory):
“The perception that building human intelligence seems so difficult results from a perceptual distortion. Namely that human intelligence is something great when in fact we humans do not possess superior rationality. We only think we do. We just bounce around randomly and try to explain that as something awesome after the fact. Building artificial intelligence then is hard because we try to build something that doesn’t exist. On the other hand building e.g. a robot that moves around arbitrarily based on some complex inner mechanism and generates explanations why it does so would be easy and appear very intelligent.”
The thing is that this is a testable approach and prediction. I want to document it here partly because he claims that he has said that for some years now.
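To make the claim concrete, here is a minimal toy sketch of the kind of “arbitrary mover that generates explanations after the fact” the paraphrase describes. Everything in it (the moves, the canned rationales) is my own invention for illustration, not anything Scott Adams specified:

```python
import random

# Toy agent: pick a move at random, then attach a plausible-sounding
# rationale after the fact. The point is that the "explanation" has no
# causal connection to the choice, yet reads as if it does.

MOVES = ["north", "south", "east", "west", "wait"]
RATIONALES = [
    "exploring unfamiliar territory",
    "avoiding a perceived obstacle",
    "conserving energy",
    "following a promising gradient",
]

def step():
    """Choose a random move, then invent an explanation for it."""
    move = random.choice(MOVES)
    reason = random.choice(RATIONALES)
    return f"I chose to go {move} because I am {reason}."

if __name__ == "__main__":
    for _ in range(3):
        print(step())
```

An outside observer who only sees the output would have a hard time telling this apart from an agent whose reasons actually drive its behavior, which is roughly the testable part of the prediction.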
Does something like Open Threads exist in LW 2.0? When I create one how would anybody get to know about it?
The only reason I persisted was because I was interested in the cryptography aspect and wanted to be a part of an up-and-coming technology.
And that is a reward that I guess a very high fraction of the people actually ‘investing’ in Bitcoin had. Those hackers, nerds, and tech enthusiasts didn’t need a high likelihood times payoff.
And maybe the true lesson to draw from this is not to look at an abstract payoff but at the social dynamic: are there enough people attracted to something?
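Just to make the likelihood-times-payoff framing concrete, here is a toy calculation. All the numbers are invented for illustration and are not estimates of Bitcoin’s actual odds at any point:

```python
# Hedged illustration: the monetary "likelihood times payoff" term can be
# unimpressive on its own, yet participating can still come out positive
# once you add the intrinsic reward of tinkering with the technology.

p_success = 0.01          # hypothetical probability the technology takes off
payoff = 50               # hypothetical monetary payoff multiple if it does
cost = 1.0                # normalized cost of participating
intrinsic_reward = 0.8    # hypothetical value of the tinkering and belonging itself

monetary_ev = p_success * payoff - cost        # about -0.5: negative on its own
total_value = monetary_ev + intrinsic_reward   # about +0.3: still worth doing
print(monetary_ev, total_value)
```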
I think this goes beyond math and is really a general pattern about learning by system 1 and system 2 interacting. It’s just more clearly visible with math because math is necessarily more precise. I once described it here (before knowing the system 1 and 2 terminology): http://wiki.c2.com/?FuzzyAndSymbolicLearning
That sounds very close to the meta rule of only being allowed to change a rule into a more precise rule. So you have the rule of not eating cookies and come across a very special cookie. Making an exception opens the door to arbitrary exceptions. But what about changing the rule to allow only cookies that you have never eaten before? That is clearly a rule that allows this special cookie and also future special cookies, and it satisfies the culinary curiosity without noticeably impacting the calories.
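A minimal sketch (my own hypothetical illustration, not from the original post) of what refining to a more precise rule could look like if you wrote the rules down as predicates. The key property is that the refined rule still decides every future case by itself, instead of leaving room for ad-hoc exceptions:

```python
# Hypothetical rule predicates for the cookie example above.

def old_rule(cookie, eaten_before):
    """Original blanket rule: never eat cookies."""
    return False

def refined_rule(cookie, eaten_before):
    """Refined rule: a cookie is allowed only if you have never eaten its kind before."""
    return cookie not in eaten_before

eaten_before = {"chocolate chip", "oatmeal"}
print(refined_rule("chocolate chip", eaten_before))  # False: ordinary cookies stay forbidden
print(refined_rule("matcha-yuzu", eaten_before))     # True: the genuinely novel cookie is allowed
```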
Reality has a surprising amount of detail (there was a post about this, explaining it with the example of constructing a simple wooden ladder, which I can’t find, but I bet there are lots of comparable descriptions out there). Or take a candle. I guess you have used one recently. Looks pretty simple, right? Just use some wax and a wick. It turns out that people have used candles for ages; they were frequently used in Rome, for example. But the easy-to-use candles of our time are pretty recent, recent as in the last century. Before that:
they didn’t have wicks that burned themselves away and you had to cut them all the time
there was no good wax. Most candles were made of fat with lots of residue that stank and smoked. Beeswax was much better but harder to get.
To fix these things you need much better raw materials and production processes...
See this article about candle history (German, but I guess Google Translate is good enough).
And you can look at any kind of thing we take for granted and it is basically not possible to grasp all of it. The classical example is I, Pencil: My Family Tree as told to Leonard E. Read. Most things depend on the presence of a whole environment, and take part in bringing it about. You could see it as a co-evolution of lots of inventions. Something just hinted at in the comment about roads being needed for wheels (and actually you benefit from having wheels when building roads...).
I think this is one of the main overlooked points when talking about the possibility of space travel, especially interstellar travel. Even if you assume AIs. But let’s not. As mentioned in another comment, we don’t really know what kind of coordination problems it comes with. Scaling isn’t automatic. Just look at Moore’s law. Sure, we continue to scale, but we pile technology on technology on technology to do so. And we can’t just invent the last one. And neither can a future AI. You need the whole stack (OK, granted, you might be able to simplify, but still). And it will keep growing and might become inherently unmanageable. Remember: the price of chip factories also continues to grow, and that might be the limiting factor. See e.g. McKinsey on Semiconductors 2013.
Added: I wonder whether this is a kind of niche need of our kind of folks. Or maybe it is the other way around and I project, because I also had trouble understanding other people, especially people my age. On the other hand, I could always relate well to older people (adults when I was young) and younger ones, especially children.