I find that sort of feedback more palatable when they start with something like “This is not related to your main point but...”
I am more OK with talking about tangents when the commenter understands that it’s a tangent.
I wonder if there’s a good way to call out this sort of feedback? I might start trying something like
That’s a reasonable point; I have some quibbles with it, but I think it’s not very relevant to my core thesis, so I don’t plan on responding in detail.
(Perhaps that comes across as rude? I’m not sure.)
I realize I got to this thread a bit late but here are two things you can do:
Pull-up negatives. Use your legs to jump up to the top of a pull-up position and then lower yourself as slowly as possible.
Banded pull-ups. This might be tricky to set up in a doorway but if you can, tie a resistance band at a height where you can kneel on it while doing pull-ups and the band will help push you up.
When the NYT article came out, some people discussed the hypothesis that perhaps the article was originally going to be favorable, but the editors at NYT got mad when Scott deleted his blog so they forced Cade to turn it into a hit piece. This interview pretty much demonstrates that it was always going to be a hit piece (and, as a corollary, Cade lied to people saying it was going to be positive to get them to do interviews).
So yes this changed my view from “probably acted unethically but maybe it wasn’t his fault” to “definitely acted unethically”.
people have repeatedly told me that a surprisingly high fraction of applicants for programming jobs can’t do fizzbuzz
I’ve heard it argued that this isn’t representative of the programming population. Rather, people who suck at programming (and thus can’t get jobs) apply to way more positions than people who are good at programming.
I have no idea if it’s true, but it sounds plausible.
On the note of wearing helmets, wearing a helmet while walking is plausibly as beneficial as wearing one while cycling[1]. So if you weren’t so concerned about not looking silly[2], you’d wear a helmet while walking.
[1] I’ve heard people claim that this is true. I haven’t looked into it myself but I find the claim plausible because there’s a clear mechanism—wearing a helmet should reduce head injuries if you get hit by a car, and deaths while walking are approximately as frequent as deaths while cycling.
[2] I’m using the proverbial “you” in the same way as Mark Xu.
Just last week I wrote a post reviewing the evidence on caffeine cycling and caffeine habituation. My conclusion was that the evidence was thin and it’s hard to say anything with confidence.[1]
My weakly held beliefs are:
Taking caffeine daily is better than not taking it at all, but worse than cycling.
Taking caffeine once every 3 days is a reasonable default. A large % of people can take it more often than that, and a large % will need to take it less.
I take caffeine 3 days a week and am currently running a self-experiment (described in my linked post). I’m in the experimental phase now; I already did a 9-day withdrawal period, and my test results over that period (weakly) suggest that I wasn’t habituated previously, because my performance didn’t improve during the withdrawal period (it actually got worse, p=0.4 on a regression test).
[1] Gavin Leech’s post that you linked cited a paper on brain receptors in mice which I was unaware of; I will edit my post to include it. Based on reading the abstract, it looks like that study suggests a weaker habituation effect than the studies I looked at (receptor density in mice increased by 20–25%, which naively suggests a 20–25% reduction in the benefit of caffeine, whereas other studies suggest a 30–100% reduction; but I’m guessing you can’t just directly extrapolate from receptor counts to efficacy like that). Gavin also cited Rogers et al. (2013), which I previously skipped over because I thought it wasn’t relevant; on second thought, it does look relevant, and I will give it a closer look.
The contextualizer/decoupler punch is an outstanding joke.
Based on your explanation in this comment, it seems to me that St. Petersburg-like prospects don’t actually invalidate utilitarian ethics as it would have been understood by e.g. Bentham, but they do contradict the existence of a real-valued utility function. It can still be true that welfare is the only thing that matters, and that the value of welfare aggregates linearly. It’s not clear how to choose when a decision has multiple options with infinite expected utility (or an option that has infinite positive EV plus infinite negative EV), but I don’t think these theorems imply that there cannot be any decision criterion that’s consistent with the principles of utilitarianism. (At the same time, I don’t know what the decision criterion would actually be.) Perhaps you could have a version of Bentham-esque utilitarianism that uses a real-valued utility function for finite values, and uses some other decision procedure for infinite values.
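For concreteness, the standard St. Petersburg prospect shows how an option’s expected utility can diverge even though every individual outcome is finite:

```latex
% Win 2^n utility with probability 2^{-n}, for n = 1, 2, 3, ...
\mathbb{E}[U] \;=\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty
```

Any two prospects of this form have “equal” (infinite) expected utility, so expected-utility comparison alone can’t rank them — which is why some further decision criterion would be needed.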
Ok, fair point, I was going too far in assuming that the sort of engineering necessary was physically impossible.
I think the evidence against (most) miracles is stronger because they violate the laws of physics. Although I think the same could be said for a few UAPs—if a UAP moves in a way that is physically impossible as far as we know, that’s strong evidence against it being aliens, because aliens still have to follow the laws of physics.
How would a tic-tac accelerate at 700g with no visible propulsion, even positing the existence of super-advanced technology? The best I can think of off the top of my head is that it’s using an extremely strong magnet to manipulate its position relative to earth’s magnetic field. But that would require an absurd amount of energy, so it would probably need to be powered by a tiny cold fusion reactor (which may be physically impossible). It would also need to avoid emitting noticeable amounts of heat: even with some sort of hyper-insulating shell, it would need internal parts that don’t evaporate under that much heat, and it would still have to hide the massive amount of heat generated by friction with the air.
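To put rough numbers on the energy problem (every figure here is an assumption — the craft’s mass and burn duration aren’t known):

```python
# Back-of-envelope power requirement for 700g acceleration,
# assuming a ~1000 kg craft, one second into the burn.
mass_kg = 1000.0
accel = 700 * 9.81                  # 700g in m/s^2
force_n = mass_kg * accel           # F = m * a
speed_after_1s = accel * 1.0        # v = a * t, with t = 1 s
power_w = force_n * speed_after_1s  # P = F * v
print(f"{power_w:.2e} W")           # → 4.72e+10 W, i.e. tens of gigawatts
```

Tens of gigawatts is on the order of dozens of large power plants, which is the sense in which the energy requirement is “absurd” — and that’s before accounting for the waste heat.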
To add more on “what we don’t see”: if some UAPs are aliens, why have they been on earth for decades, but they haven’t done anything yet other than fly around? Why have they never landed (or, if they’ve landed, why did they only land at secret military bases)? My prior is that if intelligent aliens visited earth, they would do one of two things:
They arrive in force, and their presence quickly becomes undeniable.
Their scouts arrive and fly around for only a short time.
It seems a lot less likely that they’d arrive, fly around for decades, get spotted several times, but only ever in the distance.
It’s weird that the US has such a low price to income ratio and thus such a high rental yield. In an efficient market, real estate investors should flock to countries with high rental yields, buying up housing until rental yields equalize. Why hasn’t this happened yet?
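A quick toy comparison of what “high rental yield” means (all numbers are made up for illustration, not real market data):

```python
# Gross rental yield = annual rent / purchase price.
def gross_rental_yield(price, monthly_rent):
    return 12 * monthly_rent / price

high_yield_market = gross_rental_yield(300_000, 2_000)  # e.g. a US metro
low_yield_market = gross_rental_yield(600_000, 1_500)   # e.g. a high price-to-income country
print(high_yield_market, low_yield_market)  # → 0.08 0.03
```

On these toy numbers an investor earns 8% gross in the first market versus 3% in the second, which is the arbitrage the comment is asking about: why doesn’t capital flow until those yields converge?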
I was concerned about the competing standards problem, but your comment solves the issue.
If you disagree but can’t succinctly explain, I would suggest doing one of these things:
Write a long comment explaining your disagreement
Write a short comment stating your specific points of disagreement, with a disclaimer that you don’t have time to fully justify your beliefs
Your comment is being downvoted (I suspect) because it does neither of these; instead, it indirectly insults the author without providing any information as to why you disagree. IMO this sort of comment doesn’t really contribute anything: all I know is that you disagree, and I have no idea what’s going on inside your head, so I’m not learning anything from it.
Perhaps it’s worth distinguishing between two types of “I don’t know”:
I don’t know because I haven’t put any thought into it. (This is the type of “I don’t know” that teachers rightly discourage.)
I don’t know because I have considered several hypotheses, and none of them explain my observations. (For example, my mental model of heat conduction predicts that the close side of the plate should be hotter, not the far side, so that explanation fails.)
Perhaps teachers should encourage students to replace “I don’t know” with “my mental model predicts A, but I observe B”, which communicates that the student is thinking correctly about the problem.
One concern I have with this method is that it’s greedy optimization. The next character with the highest probability-of-curation might still overly constrain future characters and end up missing global maxima.
I’m not sure of the best algorithm to resolve this. Here’s one idea: once the draft post is fully written, randomly sample characters to improve by creating a new set of 256 markets for whether the post can be improved by changing the Nth character.
The problem with step 2 is you’ll probably get stuck in a local maximum. One workaround would be to change a bunch of characters at random to “jump” to a different region of the optimization space, then create a new set of markets to optimize the now-randomized post text.
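The whole scheme — a greedy per-character pass, then random perturbation to “jump” out of a local maximum, then re-optimizing — can be sketched with a toy scoring function standing in for the markets’ probability-of-curation (everything here is illustrative; a real curation score wouldn’t be a simple per-character sum, which is exactly why local maxima are a concern):

```python
import random

TARGET = "hello world"  # toy stand-in for "the post curators would accept"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(text):
    # Toy proxy for P(curation): fraction of characters matching TARGET.
    return sum(a == b for a, b in zip(text, TARGET)) / len(TARGET)

def greedy_pass(text):
    # Analogue of "one market per character": for each position, try every
    # candidate character and keep whichever scores best.
    chars = list(text)
    for i in range(len(chars)):
        chars[i] = max(ALPHABET,
                       key=lambda c: score("".join(chars[:i] + [c] + chars[i+1:])))
    return "".join(chars)

def randomize(text, k, rng):
    # "Jump" to a different region of the space by scrambling k positions.
    chars = list(text)
    for i in rng.sample(range(len(chars)), k):
        chars[i] = rng.choice(ALPHABET)
    return "".join(chars)

rng = random.Random(0)
draft = "xxxxxxxxxxx"
draft = greedy_pass(draft)        # greedy optimization
draft = randomize(draft, 3, rng)  # escape a (toy) local maximum
draft = greedy_pass(draft)        # re-optimize the perturbed draft
print(draft, score(draft))        # → hello world 1.0
```

Because this toy score is separable by position, the greedy pass always succeeds; with a realistic, non-separable score, the randomize-and-reoptimize loop would need to run many times and keep the best draft seen so far.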
Thanks for the reply. If I’m understanding correctly, leaving aside the various complications you bring up, are you describing a potential slow growth curve that (to a rough approximation) looks like:
economic value of AI grows 2x per year (you said >3x, but 2x is easier b/c it lines up with the “GDP doubles in 1 year” criterion)
GDP first doubles in 1 year in (say) 2033
that means AI takes GDP from (roughly) $100T to $200T in 2033
extrapolating backward, AI is worth $9B this year, and will be worth $18B next year
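The backward extrapolation in the last bullet can be sanity-checked with quick arithmetic (taking the ~$100T and $9B figures above as given):

```python
import math

gdp_added_2033 = 100e12  # AI takes GDP from ~$100T to ~$200T in 2033
value_now = 9e9          # claimed economic value of AI this year

# With 2x/year growth, how many doublings separate $9B from $100T?
doublings = math.log2(gdp_added_2033 / value_now)
print(round(doublings, 1))  # → 13.4
```

So roughly 13–14 annual doublings get you from $9B to $100T, consistent with the story where GDP first doubles in one year around 2033.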
This story sounds plausible to me, and it basically fits the slow-takeoff operationalization.
Fortified milks usually don’t contain much iron. The soymilk in my fridge (Silk unsweetened) has 120% RDA of B12 but only 6% RDA of iron.
Have there been any great discoveries made by someone who wasn’t particularly smart?
This seems worth knowing if you’re considering pursuing a career with a low chance of high impact. Is there any hope for relatively ordinary people (like the average LW reader) to make great discoveries?