One thing I do with such questions is go meta, i.e. add some uncertainty for P(the tool actually has the right answer) when giving my responses.
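A minimal sketch of that adjustment via the law of total probability, assuming (my assumption, not stated above) that the only failure mode being tracked is the tool itself being wrong; the function name and numbers are illustrative:

```python
def adjusted_confidence(p_correct_given_tool_right: float,
                        p_tool_right: float,
                        p_correct_given_tool_wrong: float = 0.0) -> float:
    """Discount a tool-derived answer by the chance the tool itself is wrong.

    P(answer correct) = P(correct | tool right) * P(tool right)
                      + P(correct | tool wrong) * P(tool wrong)
    """
    return (p_correct_given_tool_right * p_tool_right
            + p_correct_given_tool_wrong * (1.0 - p_tool_right))

# If I'd report 95% confidence taking the tool at face value,
# but only give the tool a 90% chance of being right:
print(adjusted_confidence(0.95, 0.90))  # 0.855
```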
tryhard1000
Interesting! What’s the source for the second paragraph?
I really appreciate this taxonomy and am amazed it has not existed before. Just thought it noteworthy that evolution/”natural selection” was not mentioned as an example (seems best categorized under Process Selection?)
This was helpful to me in better understanding stream entry/identifying good teachers, thank you! Do you have any other suggestions on how non-enlightened individuals might distinguish good teachers from bad, other than your specific examples of meditation centers and Tweets?
I heard “seems like [x] is a crux” at my STEM-focused workplace last week; I’m not aware of the speaker using LW.
one of the most promising young EAs in the country!
Curious, on what metric is this measured?
As a STEM enthusiast, I suspect I would’ve much more quickly engaged with the Sequences had I first been recommended arbital.com as a gateway to it instead of “read the Sequences” directly.
In your mind, in what ways does “being in the state of kensho 24/7” differ from “enlightenment”?
Got it, thank you! The cases I’ve noticed have indeed been from (what I believe to be) non-canonical sequences.
[Question] Is it possible to bookmark a whole sequence on LW?
Is the idea that

1. the "belief" you're describing is a somehow-unupdated ranked calibration with E[Draymond rank | Ben Taylor's opinion] = E[Draymond rank] = ~50, whereas your model (which you consider separate from a true belief) predicts that the random variable "Draymond rank | Ben Taylor's opinion" has mode 22, which is clearly different from your prior of ~50;
2. your alief is that Draymond's rank is ~50, while your System 2-level belief is that Draymond's rank is ~22;

or some combination of the two?
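For what it's worth, the first reading is internally consistent: a distribution can keep its prior mean while its mode moves. A toy sketch with a made-up rank distribution over 1–100 (the spike and tail values are mine, purely illustrative, not anyone's actual view):

```python
# Toy posterior over Draymond's rank: a spike at 22 plus a diffuse
# uniform tail chosen so the overall mean still comes out to 50.
spike_rank, spike_mass = 22, 0.2
tail_ranks = range(15, 100)               # uniform tail, mean (15+99)/2 = 57
tail_mass_each = (1 - spike_mass) / len(tail_ranks)

posterior = {r: tail_mass_each for r in tail_ranks}
posterior[spike_rank] += spike_mass       # 0.2*22 + 0.8*57 = 50

mean = sum(r * p for r, p in posterior.items())
mode = max(posterior, key=posterior.get)
print(round(mean, 6), mode)  # 50.0 22
```

So "my expectation hasn't moved, but my model's modal guess is 22" describes a perfectly coherent state, which is why I'm asking which of the two readings you intend.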
Interesting, thank you for sharing! As someone also newer to this space, I’m curious about estimates for the proportion of people in leading technical positions similar to “lead AI scientist” at a big company who would actually be interested in this sort of serendipitous conversation. I was under the impression that many in the position “lead AI scientist” at a big company would be either too 1) wrapped up in thinking about their work/pressing problems or 2) uninterested in mundane small-talk topics to spend “a majority of the conversation talking about [OP’s] bike seat,” but this clearly provides evidence to the contrary.
Could you elaborate on the “i somehow still managed to get on the flight” part? How long to get through security (in which line), and how many min before departure did boarding close?
Seems very related to this post from the sequences on fitness of people of numerical ages correlating more with imagined emotional anguish resulting from such a death (at that age) than with experienced anguish actually following such a death. Maybe this is a more common phenomenon observable in other contexts too, but this was the only example that came to my mind.
[Question] How to tolerate boredom?
I’m confused about the extent to which the four simulacrum levels, as used on LessWrong, necessarily follow from one another sequentially (as their numerical order would suggest): is it intended that Levels 3/4 follow strictly from Levels 2/3 respectively, or might there be jumps between levels? This is part of a broader question about why the simulacrum levels are numbered in the first place.
An example to illustrate my confusion is the statement “You look great in that outfit!” I conceive of the aforementioned sequential, linear fashion as follows:
Level 1: (Speaker believes) Listener looks great in their outfit.
Level 2: Speaker wants Listener to behave as if (Speaker believes) Listener looks great in their outfit, for consequentialist reasons.
This might be intended as a white lie to please Listener.
Level 3: Speaker wants Listener to believe [Level 2 statement]. I.e., Speaker wants to signal that Speaker is the type of person (on the “team”) that would make the outfit-related statement for consequentialist benefits.
This may be because Speaker wants to signal their social competence in navigating Level 2 scenarios to Listener.
Level 4: Speaker wants Listener to behave as if [Level 3 statement]. I.e., for consequentialist reasons Speaker wants Listener to behave as if Speaker is the type of person to signal as such.
This may be because Speaker wants Listener to know that Speaker knows how to signal.
At this point, the statement has nothing to do with its semantic meaning; it is used purely as a signal about one’s signaling to one’s team, which completes the simulacrum: the statement is entirely detached from reality.
However, I can conceive of alternative intentions corresponding to Levels 3/4 that’d follow directly as perversions of Level 1 (rather than developing out of Levels 2/3 as intermediaries):
Level 3: Speaker wants Listener to believe Speaker is the type of person (on the “team”) that would make the outfit-related statement regarding their beliefs.
This may be because Speaker wants Listener to believe Speaker is nice/socially pleasant.
The key difference between this and the first Level 3 above is that in this case, the behavior being signaled is the sincere Level 1 statement of a belief, rather than the Level 2 use of the statement for consequentialist benefit.
Level 4: Speaker barely understands English but wants to induce oxytocin release in Listener, and has cached that someone else saying these words has achieved this desired effect.
Alternatively, Listener is a dog/hamster/pet that has somehow been classically conditioned to behave well when the statement “You look great in that outfit!” is said, maybe because Listener’s owner feeds it better when Listener’s owner is in a good mood upon hearing this statement. Speaker thus says this to Listener to induce Listener to behave.
In either case, the statement has no semantic bearing to reality and is solely used for its consequentialist effects on the world.
I imagine that both models would be useful in different contexts (especially the second Level 3 formulation), but I was confused about why the word “Levels,” which to me implies an intended step-by-step sequencing, is used when the intermediate levels don’t appear necessary. Is it purely an artifact of Baudrillard’s original text using “Levels” in this way? For what it’s worth, my second model seems to correspond more to the 2x2 grid formulation, in which Levels 2 and 3 are both adjacent to Level 1.
*Every button other than “(read more)” works as intended for me; i.e., none of the other buttons is broken. Apologies if that was unclear.
Just tried all the other buttons; no problems with any of them on my end!
I click “(read more)” and it does nothing. Reproducible on Windows 11/Chrome.
Any updates?