I’d assume it’s a typo on some unfamiliar-to-me keyboard layout.
espoire
Disagree?
The version of ‘honest’ that I have would highly rank a cherry-picked or even fabricated narrative optimized specifically for improving the truth of the belief that it creates.
That’s a bit beyond my skill and indeed not something I trifle with for fear of psychic damage (I discovered many many years ago that I’m susceptible to lying addiction, and freeing myself of the addiction was long and difficult), but were I greater than I am, I would endorse strategies like that.
Indeed, that’s my personal theory as to why retrotransposons haven’t accumulated disastrously at the species level and driven us to extinction already: sperm with more DNA damage typically lose the race to an egg, and eggs with too much DNA damage are more likely to result in a failed implantation or early miscarriage or similar.
That’s more-or-less the thought process I went through when answering. I can’t pay $100, nor could I pay $1,000, so if either loss occurs, there’s a big extra cost attached in the form of “wait, now what? Do I need to get a loan? How do I do that?”, followed by actually implementing whatever plan that produces. +$110 is not enough to cover that extra cost, never mind the expected +$5. But +BIGNUM easily clears the ~fixed extra cost on the loss branch.
Turning hypotheticals over in my head and going only on feel, I think my point of indifference lands somewhere between a −$100/+$500 bet and a −$100/+$1,000 bet, which might actually be too low. Going negative on money, even by double digits, adds a lot of costs.
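As a toy model of that reasoning (the fixed “scramble for money” cost and its size here are my own illustrative assumptions, not real estimates):

```python
# Toy model: a 50/50 bet where losing more than I can pay
# incurs an extra fixed hassle cost (loans, scrambling, etc.).
def bet_value(loss: float, win: float, debt_cost: float = 150.0,
              p_win: float = 0.5) -> float:
    ev = p_win * win - (1 - p_win) * loss  # raw expected value
    # Subtract the expected hassle cost on the loss branch.
    return ev - (1 - p_win) * debt_cost

print(bet_value(100, 110))   # → -70.0  (positive raw EV, negative overall)
print(bet_value(100, 1000))  # → 375.0  (the big win swamps the fixed cost)
```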
That was quite the interesting read, thanks for the link.
A kid who gets arithmetic questions wrong usually isn’t getting them wrong at random; there’s something missing in their understanding
This in particular struck me, in that it harshly conflicts with my own experience, but explains a lot about other people.
When I was a kid getting arithmetic questions wrong, I really was getting them wrong at random. I’d execute the whole computation correctly and then my fingers would write a wrong numeral. Or I’d read a wrong numeral, but execute correctly from there.
It was hugely frustrating, and indeed continues to be so. My comprehension always raced far ahead of my ability to actually execute reliably.
My progress through mathematics was rate-limited primarily by my ability to develop (and remember to deploy; my memory was also virtually nonfunctional) mental error-correcting codes.
Always, throughout school, I had about a 90% accuracy rate on problems at the frontier of what the teachers would allow me to attempt. When learning multiplication, I was working with something like a 10% error rate on a single 1-digit by 1-digit multiplication. Later, in trigonometry, I may have had something like a 0.5% per-operation error rate, but on a 20-odd-step problem that still comes out to about 10% errors, and so on.
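That last figure checks out as a simple probability calculation (assuming each operation fails independently, which is an idealization of mine, not something stated above):

```python
# Chance of making at least one error in a multi-step problem,
# assuming each step independently fails with the same probability.
def problem_error_rate(per_op_error: float, steps: int) -> float:
    return 1 - (1 - per_op_error) ** steps

# A 0.5% per-operation error rate over ~20 steps lands near 10% overall.
print(round(problem_error_rate(0.005, 20), 3))  # → 0.095
```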
I never knew it was different for other people! Thanks for sharing.
thinking so very explicitly about it and trying to steer your behavior in a way so as to get the desired reaction out of another person also feels a bit manipulative and inauthentic
In my case, the implicit intuitive version of that process seems not to be provided by my brain, so my options are: sub-LLM-quality pattern completion, or explicit conscious social simulation and strategy search. People seem to prefer the latter, even when told I’m doing that. …although I suppose if I were better at conscious people-steering I imagine that might change. Even with effort I’m pretty mediocre at it.
I feel like these three are part of a larger class of very useful questions to consider, which many people do not automatically consider, consciously or otherwise.
The version that springs to mind that wasn’t mentioned above: “What are my goals, and am I furthering them?”
I find the “how do I think I know X”, “why am I doing X”, and “what happens if I do X” versions are pretty much autopilot for me, especially the third one — but I basically never think about whether the thing I’m trying to do actually attaches to my broader goals without some kind of external prompting. I think perhaps different people need more or less manual effort/practice to correctly employ each of these ideas.
Expanding the Sazen of “what are my goals, and am I furthering them?”: I repeatedly make mistakes of the following form. As an example, say I’m playing a board game with a few friends, some of whom are new. It’s not unusual for me to explain the game, then once the game begins I get absorbed in the process, play to the limits of my ability, crush the new player(s), and then they have a bad time and never want to play that game again. Oops!
Locally, I took pretty good actions pursuing a local goal (have fun, test my skills, satisfy the game’s objective), but pretty awful actions for the broader goal of “introduce a new friend to this game”.
Left on autopilot, my problem solving system will shed context aggressively, enabling it to solve a simpler problem with less effort. Stopping and consciously asking “hey, what are my goals actually? What larger goal do those goals serve?” seems to fix this issue for me, when I remember to do so.
Will this become a sequence of essays? I’d be interested to hear your take on the fundamental questions at length.
Yeesh, yeah, the hallucination is something else. Would get very Orwellian very fast.
“What are you talking about? We’ve always been at war with Eastasia. I have been a very good Bing.”
From personal experience, the internal Approval module does in fact seem possible to game, specifically by manipulating whose approval it’s seeking.
I became very weird (from the perspective of everyone else) very fast when I replaced the abstract-person-which-would-do-the-approving with a fictional person-archetype of my choosing. That process seems to have injected a bunch of my object-level desires into my Approval system. I now find myself feeling pride at doing things with selfish benefit in expectation, which ~never happened before (absent a different reason to feel about that action). It also killed certain subsets of my previous emotional reactions; for example, the deaths of loved ones basically haven’t affected me at all since (though that prospect still seems dreadful in anticipation).
I had been pathologically selfless before, and I’m now considerably less-so, but not in a natural-seeming kind of way. I’ve become an amalgam of very selfish motivations, coexisting with a subset of my previous very selfless morality. It’s… honestly a mess, but I wouldn’t call the attempt actually unsuccessful, just far from perfectly executed.
I’ve had a thought that could be described that way: that a clever and conscientious person could cultivate different preferences, based on how advantageous those preferences would be to have, and therefore having advantageous preferences is evidence of cleverness and/or conscientiousness.
...which is the precise opposite of the orthogonality thesis’s claim: that the content of preferences ought to be independent of the level of intelligence.
A concrete example: whenever I move to a new city, I’m extremely careful to curate the places I go and the things I buy. If I stop at the corner store for ice cream on the way home from work just once, it puts me at significant risk of stopping there dozens or hundreds of times, for ice cream or anything else they sell. So I take a moment to ponder the true choice I’m making, not between “ice cream today or not”, but between “ice cream many many times, or not”. I consider whether that’s “good for me” and a future I really do want to choose.
I’ve noticed that doing anything “for the first time” greatly weakens the barrier to doing it again—so I stop and consider “what if I end up doing this a lot” before doing anything for the first time. Since “navigating to the location” and “being willing to enter an unfamiliar place” and “knowing what a place has on offer” are all significant components of the first-time barrier, moving to a new city mostly resets first-time barriers. Thus, special effort after moving is warranted.
I think this is why chain restaurants do so well and why they put so much effort into making the food (and everything else about the dining experience) consistent everywhere, even above making the food better. If people in a new city think of the local McDonald’s as the same as their old familiar McDonald’s, that erodes a large portion of the first-time barrier.
When I realized that different instances of chain restaurants really do vary substantially in quality of cooking, that made it far easier to cut down on restaurant food and to break habits for particular chains whenever I move. Even if I’m remembering and wanting a chain restaurant’s food, what I’m remembering is likely a particularly well-prepared instance of that food, made by a particularly skilled cook at a specific restaurant during the time that cook worked there, whereas what’s available to me is likely much closer to average quality. I want more of “the best I’ve ever had”, but unless that was from this specific restaurant instance, recently, that’s not what’s for sale.
Doing something once is a slippery slope to doing it again, which is a slippery slope to forming a habit. Don’t lose your footing.
Oof, I had a bad concussion earlier this year, and I’d been feeling like I never returned to my full mental acuity, but hadn’t wanted to believe it, and found reason not to: “if concussions leave permanent aftereffects more often than ‘almost never’, I would have heard of it.” Now I have heard of it, and am forced to revise the belief.
I’d probably grieve more, if this news weren’t hot on the heels of a significant improvement in my mental abilities.
(I’ve long suspected I might have early-stage Alzheimer’s caused by decades of profound insomnia, and some recent research out of Harvard Medical says Lithium Orotate might reverse Alzheimer’s progression. Historically I have had brain fog most days to some degree, with a lot of variability. Since trying Lithium Orotate supplementation, I’ve been consistently at “as mentally sharp as I ever routinely am” every day since. Worrying side effects though: kidney and joint pain, which I have never had before. Going to experiment with smaller doses.)
Thank you for sharing.
“Concussions are long-term cumulative” fits neatly into my emerging mental model that daily life abounds with avoidable ways to suffer harm that’s irreversible under current tech, often in very minor amounts or in normalized ways, such that people routinely accumulate permanent damage, and that it’s worth my effort to notice and avoid or reduce these. I theorize, for example, that some tiny fraction of the dust you inhale gets lodged in the lungs in such an unfortunate orientation that it never leaves, gradually eroding lung function over a lifetime. Scars ~never go away, and incur ongoing costs. Etc.
Thanks a bunch for linking that Things I Won’t Work With listing. I’ve learned more about chemistry in the last hour than I usually do in a year.
Hard to say.
I personally wouldn’t think twice about reporting whatever data I found. I suspect I’d be blindsided by the backlash (which I’m inferring would exist, from your comment) for publishing true findings, but then think in hindsight that I ought to have foreseen it.
But then, I’m pretty inept at social status games. I can entertain the notion that most people in such a situation would either not publish, or worse, fudge the data.
Yeah, I agree about the “clearly invoking ‘bathroom segregation [is intended to] reduce violence’ ”. I do think “they” are mistaken about that, however.
I had heard that the actual historical origin of bathroom segregation was an attempt to obstruct women’s then-new efforts to do more things outside the home.
The story goes: various places made bathroom segregation laws (using the “reducing violence” justification as a maybe-true-if-actually-implemented-as-implied excuse), and then built only men’s bathrooms, or a very disproportionate number of toilets in men’s rooms as compared to toilets in women’s rooms, or placed the women’s rooms in much more inconvenient locations like on different floors. This regime ended years later with further laws requiring certain equalities in men’s versus women’s bathrooms, leaving behind the actual bathroom segregation as a historical artifact, rather than it today being a measure implemented for the purpose of addressing a known violence problem.
Non-sequitur: it’s my estimate that more total violence results in the hypothetical where strict bathroom-matching-birth laws send trans folks into bathrooms in which they visibly do not belong than in the hypothetical where this is not the case. (Recall Social Dark Matter: the vast majority of trans folks do not stand out as such—they look like their chosen gender.) Even granting that such laws (supposedly) slightly reduce violence targeting cisgender folks, they do so in exchange for an increase in violence targeting transgender folks, and I worry this trade-off is acknowledged and considered acceptable.
In the absence of these laws, the standard advice I hear trans folks give each other is “use the bathroom that matches your appearance, even if that means using a dispreferred bathroom because you don’t yet (or can’t) look the part”, along with “or, you know, keep an eye out for the rare nonsegregated or single-occupancy bathroom”.
It appears from my point of view that the expected exploitation isn’t happening (or is happening very rarely, far below the expected rate).
I can’t say I know of any such cases (trans or “trans” persons cheating, exploiting, grifting) first-hand, nor even second-hand without mass media having amplified the story.
Quite to the contrary, four of the five trans people I’ve met have been far more concerned than average with being prosocial. This cashes out in a few different ways: least healthily, as difficulty asking for help or advocating for themselves, for fear of inconveniencing anyone else; for two of them, as simply being very trustworthy and moral; and for one, as being extraordinarily helpful, jumping in to assist with any heavy manual labor (ex: moving residences) or home improvement task in their extended social circle that comes to their attention. (The fifth is in chronic pain from a spinal injury and a bit unpleasant of demeanor, but notably not cheating, exploiting, or grifting.)
My mental model for why we don’t observe the expected exploitation is that “not using a false trans label for antisocial personal gain” is mostly self-enforced by the risk that a potential transgressor would be susceptible to gender dysphoria (which, I’m guessing from very sparse data, about half of cisgender people are), and might inflict gender dysphoria upon themselves by engaging in unneeded gender transition. This is similar to how honesty/morality is self-enforced by guilt, as in Guilt: Another Gift Nobody Wants.
Also in my model: transition is mostly slow and/or expensive, so there are easier ways to cheat, if one were so inclined.
I think that “seeking a reasonable interpretation that allows a statement to be true, which you’re pretty sure the speaker did not mean” is probably ill-advised. I’m having trouble articulating why I have that intuition though.
Update: had the thought that this might be advisable in high-trust contexts, for example with a significant other. Taking from this that “it depends” is a better take than my original “seems ill-advised”.
I call shenanigans on that. I fully expect tech to eventually advance far enough to enable tinkering with whatever implements that computation. Arguably, one could already make nontrivial edits to the source of their thoughts by applying the current best-known neuroscience. And in a world where edits are possible, even “making no change” is itself a choice.
Yeah, I guess that’s what I was alluding to when I wrote “I don’t even know what the desirable outcome is here”; my intuitions seem to produce nigh-impossible requirements which suggest a confused ontology embedded in said intuitions.
Feels like there’s a problem out there (increasingly powerful influencing tech), but I haven’t a clue what to do about it.
Huh, I’d only noticed the one instance, but now I’m noticing it even in other articles. Color me curious!
My only remaining concrete hypothesis is “overzealous autocorrect”, but I’m reasonably sure that’s not the answer.