Sunny from QAD
Throw in some dice with that lampshade and I’ll jog right over to sign the peace treaty.
LAMPSHADE + DICE = MILD DASH + PEACE
I calibrated my “strong upvote” on this post. It sounds silly now, but
Scientists can also be unstupid. Someone else has already thought of your alternative interpretation.
was a revelation for me.
Further, I would generally say that the types of people who make attacks are cunning but unimaginative.
Why? Up until this point I thought the crux of your argument was that people who would commit an attack are generally unintelligent. Why would they have cunning specifically but no imagination?
Thanks for the feedback. The true improvement steps certainly can be enjoyed—in fact, that’s what I like about being a programmer! The effect of everything-feeling-like-a-tradeoff should be strongest when looking at the available solutions to well-known problems such as primality testing or sorting algorithms, where many people before us have already expended a lot of energy pushing the boundary outwards. History forgets the solutions that got dominated, and we are left with a trade-off solution set.
Your last sentence makes me think you read:
Throwing solutions out is easy (and is sometimes done subconsciously when you’re working in your domain of expertise …)
as referring to the moment that one discovers a new solution and throws out the ones that it dominates. I see why you read it that way. I’ll edit the wording to make it clear that I mean throwing out solutions that are just obviously very poor. (Initially I had another sentence here where I said that deliberately comparing these solutions to the good ones could help mitigate the everything-is-a-tradeoff feeling, but then I noticed that you’ve already said the same thing! Though I’ll note that you can still appreciate the gap between the bad ones and the good ones long after you’ve made your choice, which doesn’t result in slowness.)
Without giving away too much “insider info”, at my current job I’m working on some software that models the interaction of some different 1-dimensional surfaces. I represent these surfaces using splines, which are piece-wise polynomial functions. When I need to know the y-position of a surface at a given x-position, I evaluate the appropriate polynomial at that x-position to find it.
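To make the setup concrete, here is a generic sketch of evaluating one of these piecewise polynomials at an x-position. This is an illustrative stand-in of my own (including the function and argument names), not the actual work code:

```python
import bisect

def eval_spline(knots, coeffs, x):
    """Evaluate a piecewise polynomial (spline) at x.

    knots: sorted x-positions [x0, x1, ..., xn] delimiting the pieces.
    coeffs: one coefficient list per piece, highest degree first,
            for the polynomial in (x - knots[i]) on [knots[i], knots[i+1]].
    """
    # Find which piece x falls into.
    i = bisect.bisect_right(knots, x) - 1
    i = max(0, min(i, len(coeffs) - 1))  # clamp to a valid piece
    t = x - knots[i]
    # Horner's rule: fewer multiplications than computing powers directly.
    y = 0.0
    for c in coeffs[i]:
        y = y * t + c
    return y
```

The `bisect` lookup keeps finding the right piece cheap even when there are many knots.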
Although if a single solution was by far the best under every metric, there wouldn’t be any tradeoffs.
Yes, I debated mentioning this in the post. If a single solution was the best under every metric, then that solution would quickly fade into the background, I think. When I get dressed in the morning, I don’t celebrate the fact that putting on socks before shoes is both easier and more comfortable than the reverse!
A more sensible way to code this would be [...]
I haven’t tested it, but that involves extra multiplications for computing the `a`s and for multiplying numbers together to get the factorial values. Maybe I’ll try it today and see if it really is as fast. (The function gets called ~1.7 million times, so even a small difference will make the faster code worth keeping.)
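For illustration, here is the shape of the tradeoff I have in mind, using the exponential series as a hypothetical stand-in (the real function isn't shown in this thread). One version recomputes powers and factorials for every term; the other updates each term incrementally:

```python
import math

def exp_series_naive(x, n_terms=10):
    # Recompute the power and factorial for every term: the extra
    # multiplications are hidden inside x**n and math.factorial(n).
    return sum(x**n / math.factorial(n) for n in range(n_terms))

def exp_series_incremental(x, n_terms=10):
    # Derive each term from the previous one: one multiply and one
    # divide per term, no powers or factorials recomputed.
    total, term = 0.0, 1.0
    for n in range(1, n_terms + 1):
        total += term
        term *= x / n
    return total
```

Both sum the same terms; the only question is how much arithmetic each one spends per term, which is exactly the kind of difference that matters at millions of calls.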
They are good days indeed! As for your second point, I whole-heartedly agree. I was trying to allude to this in my last two sentences, especially the part in parentheses:
Throwing out obviously-poor solutions is easy (and is sometimes done subconsciously when you’re working in your domain of expertise, or done by other people before a decision gets to you), but weighing the remaining options usually takes some consideration. So, our subjective experience is that we spend most of our time and energy thinking about trade-offs.
But I think I should have dedicated more time to it. Originally I had a sentence in there that was something like “when I get dressed in the morning, I don’t celebrate the fact that putting my pants on forwards instead of backwards is both easier and more comfortable—I just do it that way automatically!”. You’ve expressed it better than I have, so I’m going to add your paragraph into the main post (credited to you of course).
From the post:
It reminds them of an experience they might want to forget. Further, it requires them to deal with a topic they may be completely sick and tired of.
From the comment above me (emphasis mine):
Apologies communicate knowledge of harmful behavior, ideally in a way that lets the victim understand and get closure on the incident. They help in reducing attribution bias (where people assume you’re a jerk, rather than a fallible human).
I’ll note that this means that an apology can turn an experience one wants to forget into a completely tolerable one. If someone shows up late to a bunch of meetings and acts disrespectful while they’re there, I’ll be annoyed at them and find our interactions unpleasant in the future, even if they don’t act out anymore. But if they then say “Sorry about last week, I was having a rough time and I let my emotions get the best of me. I’m not going to act like that in the future” then the experience of “this person was a jerk, which I find unpleasant” is retroactively transformed into “this person was going through something, which happens to all of us”.
I’m putting this in a separate comment from my reply to Dagon, though it’s a similar thought to Dagon’s first paragraph. From the post:
If you apologise, it should be because it helps prevent or mend a rift with the other person. You should be extremely cautious about apologising because that’s what you think a nice person would do, as those are precisely the situations where you are likely to end up apologising with no benefit to anyone.
I don’t think there are many scenarios where an apology wouldn’t help mend a rift with someone. Unless maybe you mean giving multiple apologies for the same action? To me, a first apology would serve a person very well in 99% of cases. (I can think of maybe one case from my personal life where I wouldn’t be interested in an apology.) Of course, other people might be different.
Also, consider the balance of outcomes. An unnecessary apology is an inconvenience to me; if someone has sinned against me so badly that an apology doesn’t do anything, the difference between an additional inconvenience and no additional inconvenience is nothing. But if I consider an apology necessary, the difference between making it and skipping it is big, and sometimes it’s huge. So, while I agree with the shape of what you’re saying (“some apologies are worse than nothing”) I wouldn’t advise anyone to be “extremely cautious about apologizing”. Quite the contrary, I’d advise extreme caution about not apologizing—that needs to be saved for when you’re sure the situation is unsalvageable.
If anyone scrolled down without taking the survey and is now looking at this comment, please take the survey! All of us need to take it the moment we see the post, lest johnswentworth suffer the effects of a sampling bias.
Whoa! I wasn’t expecting so much of a difference. Did you use `for i in range(...)` for your loop? That range() call creates a range object that gets iterated lazily, which I imagine isn’t too fast.
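A quick way to settle this kind of hunch is to time both loop styles directly. A minimal sketch (not the benchmark from this thread, and note that in CPython the range-based loop is usually the faster of the two, since its iteration bookkeeping happens in C rather than in bytecode):

```python
import timeit

def sum_with_range(n):
    total = 0
    for i in range(n):  # lazy range object drives the loop
        total += i
    return total

def sum_with_while(n):
    total = 0
    i = 0
    while i < n:  # manual index bookkeeping in bytecode
        total += i
        i += 1
    return total

# Same result either way; only the speed differs.
n = 100_000
t_range = timeit.timeit(lambda: sum_with_range(n), number=10)
t_while = timeit.timeit(lambda: sum_with_while(n), number=10)
```

Timing the real loop body this way beats guessing about what the interpreter does.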
I liked this a lot, thank you! My understanding of Bayes’ Theorem has always been a little shaky, and I think that this shored things up for me.
Glad to hear it! I can never remember exactly how it works unless I picture a bar chart like this, so I thought I’d share it.
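For anyone who wants numbers to go with the picture, the same update can be written as a tiny calculation (the example figures are made up for illustration, not taken from the post):

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    # Total probability of the evidence, summed over both hypotheses.
    p_e = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
    return p_evidence_given_h * prior / p_e

# Made-up example: 1% base rate, a test with 90% sensitivity
# and a 5% false-positive rate.
p = posterior(0.01, 0.90, 0.05)  # roughly 0.15
```

The bar-chart picture is exactly this computation: the two bars are the two terms of `p_e`, and the posterior is the evidence-compatible slice of the prior bar divided by both slices together.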
One thing that I think would improve this post would be to have used a practical example.
I’m happy to oblige! I’ve edited the post to include an example.
I might suggest doing all of the chocolate days in a row, since it might be the case that consistently eating chocolate before bed has an effect that inconsistently eating chocolate does not have.
Then again, that might open the experiment up to a skew—maybe it will be rainy for a week that’s contained entirely in the no-chocolate timezone and that will throw off the results.
I welcome any thoughts on this.
Edit (after a conversation below): By the way, I love that you’ve thought to a) do this experiment and b) pre-register it publicly! When I saw this in my feed I thought “aha, that person is being a good scientist”.
This comment reads a bit like you think I was attacking the poster. Did I come off that way?
Edit: In particular, my comment was a response to:
All feedback is appreciated!
and it’s possible you saw in my comment the phrase:
[...] and that will throw off the results.
but didn’t notice this (or didn’t parse it the way I intended it to be parsed):
maybe it will be rainy for a week that’s contained entirely in the no-chocolate timezone and that will throw off the results.
No worries! I think I also need to work on my tone, as I sometimes point out tiny little details that I think could be improved without pointing out my overall positive feelings. I’ve done this pretty extremely here, so I’m going to go back and edit my original comment so that it more accurately reflects how I feel. Thanks!
The ball-on-a-hill model of reputation
This is a model I came up with in middle school to explain why it felt like I was treated differently from others even when I acted the same. I invented it long before I fully understood what models were (which only occurred sometime in the last year) and as such it’s something of a “baby’s first model” (ha ha) for me. As you’d expect for something authored by a middle schooler regarding their problems, it places minimal blame on myself. However, even nowadays I think there’s some truth to it.
Here’s the model. Your reputation is a ball on a hill. The valley on one side of the hill corresponds to being revered, and the valley on the other side corresponds to being despised. The ball begins on top of the hill. If you do something that others see as “good” then the ball gets nudged to the good side, and if you do something that others see as “bad” then it gets nudged to the other side.
Here’s where the hill comes in. Once your reputation has been nudged one way or the other, it begins to affect how others interpret your actions. If you apologize for something you did wrong and your reputation is positive, you’re “being the bigger person and owning up to your mistakes”; if you do the same when your reputation is negative, you’re “trying to cover your ass”. Once your action has been interpreted according to your current reputation, it is then fed back into the calculation as an update: the rep/+ person who apologized gets a boost, and the rep/- person who apologized gets shoved down even further.
Hence, “once the ball is sufficiently far down the hill, it begins to roll on its own”. You can take nothing but neutral actions and your reputation will become a more extreme version of what it already is (assuming it was far-from-center to begin with). This applies to positive reputation as well as negative! I have had the experience of my reputation rolling down the positive side of the hill—it was great.
There are also other factors that can affect the starting position of the ball, e.g. if you’re attractive or if somebody gives you a positively-phrased introduction then you start on the positive side, but if you’re ugly or if your current audience has heard bad rumors about you then you start on the negative side.
I’d be curious if anyone else has had this experience and feels this is an accurate model, and I’d be very curious if anyone thinks there is a significant hole in it.
Yes, do it for posterity!
(In particular, people can have different kinds of reputation in different domains)
That’s true. I didn’t notice this as I was writing, but my entire post frames “reputation” as being representable as a number. I think this might have been more or less true for the situations I had in mind, all of which were non-work social groups with no particular aim.
Here’s another thought. For other types of reputations that can still be modeled as a ball on a hill, it might be useful to parameterize the slope on each side of the hill.
“Social reputation” (the vague stuff that I think I was perceiving in the situations that inspired this model) is one where the rep/+ side is pretty shallow, but the rep/- side is pretty steep. It’s not too hard to screw up and lose a good standing — in particular, if the social group gets it in their head that you were “faking it” and that you’re “not actually a good/kind/confident/funny person” — but once you’re down the well, it’s very hard to climb out.
“Academic reputation”, on the other hand, seems like it might be the reverse. I can imagine that if someone is considered a genius, and then they miss the mark on a few problems in a row, it wouldn’t do much to their standing, whereas if the local idiot suddenly pops out and solves an outstanding problem, everyone might change their minds about them. (This is based on minimal experience.)
Of course, it also depends on the group.
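To make the slope parameterization concrete, here's a toy simulation. Every number here, and the update rule itself, is an invented stand-in of mine rather than anything established above:

```python
def step(position, action_quality, slope_pos=0.2, slope_neg=0.8):
    """One update of the ball-on-a-hill reputation model.

    position: current reputation (+ means revered, - means despised).
    action_quality: how the action looks in a vacuum (+ good, - bad).
    slope_pos / slope_neg: how strongly an already-tilted reputation
        amplifies itself on each side (steeper negative slope means
        it's harder to climb back out of a bad reputation).
    """
    # The action is interpreted through the current reputation...
    slope = slope_pos if position > 0 else slope_neg
    interpreted = action_quality + position * slope
    # ...and the interpretation is fed back in as the update.
    return position + 0.1 * interpreted

# A run of purely neutral actions from a slightly negative start:
pos = -0.5
for _ in range(20):
    pos = step(pos, action_quality=0.0)
# With the steeper negative slope, the ball keeps rolling on its own.
```

With these made-up slopes, a slightly negative reputation snowballs much faster under neutral actions than a slightly positive one drifts upward, which matches the asymmetry described for social reputation.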
I’m curious — do you have any types of reputation in mind that you wouldn’t model like this, or any particular extra parts that you would add to it?
critically different, scenario—namely, one in which your accuser knows, indeed, that you did not apprehend the consequences of your action… but believes that you should have known, and that the fact of your ignorance itself constitutes a blameworthy act of negligence.
Ah yes. My phrasing was weak, but this is what I meant by:
that my doing X was careless
I admit, my memories of these situations are hazy. They’re from my childhood, and nowadays it doesn’t really happen because the filter I place in front of my friend group doesn’t allow this sort of person through (e.g. the kind who actually fails to exhibit information empathy, not the kind who enforces the “ignorance of the law is no excuse” norm). The specific person I have in mind is the sort who might semi-consciously decide to enforce that norm, but then take it to an unwarranted extreme, blaming others for things they couldn’t possibly have known not to do. Then again, they are also somebody I may be biased towards finding faults in. It’s possible this has rarely/never actually happened to me, but I figured the term is still a good one to throw out there.
Another degree of freedom comes from the number of different topics there are where aliens are claimed to have meddled. I admit I can’t think of too many off the top of my head (crop circles? maybe 9/11?) but I’d be willing to bet there are at least 5 popular, serious claims to be had. That would bring the odds up to 1⁄60, or p = .017.
Maybe another degree can be squeezed out by saying that there are topics in pseudoscience besides aliens, and topics to house crazy coincidences besides pseudoscience. I’m not sure how this could be counted, though.
One last thought: it’s not necessary to get p up to anything like 1. Coincidences do happen, after all.