Got it, thank you! The cases I’ve noticed have indeed been from (what I believe to be) non-canonical sequences.
[Question] Is it possible to bookmark a whole sequence on LW?
Is the idea that
1. the “belief” you’re describing is an as-yet-unupdated rank calibration with E[Draymond rank | Ben Taylor’s opinion] = E[Draymond rank] = ~50, whereas your model (which you consider separate from a true belief) would predict that the random variable “Draymond rank | Ben Taylor’s opinion” has mode 22, which is clearly different from your prior of ~50;
2. your alief is that Draymond’s rank is ~50 while your System 2 level belief is that Draymond’s rank is ~22;
or some combination of the two?
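To make the first reading concrete, here is a minimal sketch; the rank range (1–100), flat prior, and Gaussian-shaped expert likelihood centered at 22 are all illustrative assumptions, not anything from the exchange:

```python
import numpy as np

# Hypothetical sketch: prior over Draymond's rank (1-100) is flat,
# so the prior mean is ~50. Conditioning on an expert opinion
# (modeled as a discretized Gaussian likelihood centered at 22,
# an assumption) shifts the posterior mode to 22 even though the
# prior was uninformative.
ranks = np.arange(1, 101)
prior = np.ones_like(ranks, dtype=float) / len(ranks)     # flat prior
likelihood = np.exp(-0.5 * ((ranks - 22) / 5.0) ** 2)     # expert says ~22
posterior = prior * likelihood
posterior /= posterior.sum()                              # normalize

prior_mean = (ranks * prior).sum()                        # 50.5
posterior_mode = ranks[posterior.argmax()]                # 22
```

The “belief” in reading 1 would then be something like reporting the prior mean even after seeing the opinion, while the “model” reports the posterior mode.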
Interesting, thank you for sharing! As someone also newer to this space, I’m curious about estimates of the proportion of people in leading technical positions similar to “lead AI scientist” at a big company who would actually be interested in this sort of serendipitous conversation. I was under the impression that many in such a position would be either 1) too wrapped up in thinking about their work and pressing problems or 2) too uninterested in mundane small-talk topics to spend “a majority of the conversation talking about [OP’s] bike seat,” but this clearly provides evidence to the contrary.
Could you elaborate on the “i somehow still managed to get on the flight” part? How long did it take to get through security (and in which line), and how many minutes before departure did boarding close?
Seems very related to this post from the Sequences, on how assessments of deaths at various numerical ages correlate more with the emotional anguish people imagine such a death would cause than with the anguish actually experienced following one. Maybe this is a more common phenomenon observable in other contexts too, but this was the only example that came to my mind.
[Question] How to tolerate boredom?
I’m confused about the extent to which the four simulacrum levels, as used on LessWrong, necessarily follow from one another in sequential fashion (as their numerical ordering would suggest): is it intended that Levels 3/4 follow strictly from Levels 2/3 respectively, or might there be jumps in levels? This is part of a broader question about why the simulacrum levels are numbered in the first place.
An example to illustrate my confusion is the statement “You look great in that outfit!” I conceive of the sequential, linear progression as follows:
Level 1: (Speaker believes) Listener looks great in their outfit.
Level 2: Speaker wants Listener to behave as if (Speaker believes) Listener looks great in their outfit, for consequentialist reasons.
This might be intended as a white lie to please Listener.
Level 3: Speaker wants Listener to believe [Level 2 statement]. I.e., Speaker wants to signal that Speaker is the type of person (on the “team”) that would make the outfit-related statement for consequentialist benefits.
This may be because Speaker wants to signal their social competence in navigating Level 2 scenarios to Listener.
Level 4: Speaker wants Listener to behave as if [Level 3 statement]. I.e., for consequentialist reasons Speaker wants Listener to behave as if Speaker is the type of person to signal as such.
This may be because Speaker wants Listener to know that Speaker knows how to signal.
At this point, the statement has nothing to do with its semantic meaning, since it’s used purely as a team-membership signal, completing the simulacrum: the statement is entirely detached from reality.
However, I can conceive of alternative intentions corresponding to Levels 3/4 that would follow directly as perversions of Level 1 (rather than developing out of Levels 2/3 as intermediaries):
Level 3: Speaker wants Listener to believe Speaker is the type of person (on the “team”) that would make the outfit-related statement regarding their beliefs.
This may be because Speaker wants Listener to believe Speaker is nice/socially pleasant.
The key difference between this and the first Level 3 above is that in this case, Speaker is signaling about sincere Level 1 statements directly, with no Level 2 consequentialist maneuvering as an intermediary.
Level 4: Speaker barely understands English but wants to induce oxytocin release in Listener, and has cached that someone else saying these words has achieved this desired effect.
Alternatively, Listener is a dog/hamster/pet that has somehow been classically conditioned to behave well when the statement “You look great in that outfit!” is said, maybe because Listener’s owner feeds it better when Listener’s owner is in a good mood upon hearing this statement. Speaker thus says this to Listener to induce Listener to behave.
In either case, the statement has no semantic bearing on reality and is used solely for its consequentialist effects on the world.
I imagine both models would be useful in different contexts (especially the second Level 3 formulation), but I was confused about why the word “Levels,” which to me implies an intended step-by-step sequencing, is used, given that the intermediate levels don’t appear necessary to me. Is it purely an artifact of Baudrillard’s original text using “Levels” in this way? For what it’s worth, my second model seems to correspond more to the 2x2 grid formulation, in which Levels 2 and 3 are both adjacent to Level 1.
*Every non-“(read more)” button works as intended for me; i.e., no button other than “(read more)” fails to respond. Apologies if that was unclear.
Just tried all other buttons, none on my end!
I click “(read more)” and nothing happens. Reproducible on Windows 11/Chrome.
I wonder if there’s any generalizable analogue to the quiet eye outside of sports contexts—it seems pretty important for the motor actions studied, but would we see the same results on tackling more intellectual research problems?
I actually find the font a bit hard to read: my System 1 brain took a noticeable split second (I’d estimate about 0.8 seconds) longer to process the words’ semantic meanings than it does with normal, all-lowercase text, or even with the titles of the other book covers at the Amazon link. This took long enough that I could see myself (i.e. my System 1) glossing over this book entirely when scrolling/looking through a page of books, being drawn to more immediately legible items.
Although the above might just be a quirk of my personal attention/processing style, I wonder if it’s worth experimenting with changes in font given this. I’d suspect my experience is due in part to the heavy font weight, since the title’s characters look less immediately distinguishable (and more blobby) than they would at lower weights. A few very narrow spaces between adjacent words also probably make it harder to distinguish words at a glance. As mentioned above, the topic of AI also isn’t immediately clear from the title, which I’d worry might lose domain-interested readers if the title isn’t understood semantically.
In your mind, in what ways does “being in the state of kensho 24/7” differ from “enlightenment”?