Thank you. If I’m following correctly, then what I’ve been taking is 200 mg of magnesium itself, embedded in ~1.4 grams of magnesium glycinate. If so, then I was misinterpreting the label.
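For concreteness, here’s the back-of-the-envelope stoichiometry behind that figure. It’s only a sketch: it assumes the chelate is pure magnesium bisglycinate, Mg(C2H4NO2)2; buffered products that blend in magnesium oxide would change the numbers.

```python
# Rough stoichiometry, assuming pure magnesium bisglycinate, Mg(C2H4NO2)2.
MG = 24.305                   # molar mass of magnesium (g/mol)
GLYCINATE = 74.06             # molar mass of the glycinate anion, C2H4NO2- (g/mol)
CHELATE = MG + 2 * GLYCINATE  # ~172.4 g/mol, i.e. ~14% elemental magnesium by mass

elemental_dose = 200  # mg of elemental magnesium, per the label
chelate_mass = elemental_dose * CHELATE / MG          # ~1419 mg
glycinate_mass = elemental_dose * 2 * GLYCINATE / MG  # ~1219 mg

print(f"chelate: ~{chelate_mass:.0f} mg, of which glycinate: ~{glycinate_mass:.0f} mg")
```

So a 200 mg elemental dose rides in roughly 1.4 g of chelate, about 1.2 g of which is glycinate.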
That’s a good point, although it still leaves a few hypotheses on the table:
The improvement was due to the magnesium, not the glycine.
I had a severe glycine deficiency.
I misinterpreted the label. (i.e. 200 mg is actually the mass of the magnesium itself, so the amount of glycine is unknown.)
If I’m interpreting the label correctly, then it’s 200 mg of magnesium glycinate per day. The active ingredient is labeled as “Magnesium (as Magnesium Glycinate) 200 mg”.
A couple of months ago, one of my doctors prescribed magnesium glycinate to alleviate visual migraines, but she said that it might also improve sleep. Happily, it helped with both. (...and several other conditions!)
The doctor implied that the primary benefit of magnesium glycinate over other magnesium supplements was its gentler gastrointestinal effects, so I assumed that the improvement in my sleep was caused by the magnesium, not the glycinate. This essay caused me to question that assumption.
This essay is much more interesting than the title indicates! (I hope you take that as a 90% compliment, 10% criticism. I’m rather bad at promoting my own work, so I’m not trying to condescend.)
This link is broken.
Other commenters have added reasonable caveats for situations in which it’s okay for the conversation stack to grow tall, but I’ll add a buttress to your main point: when debating something important outside of rationalist circles, the stack size should be limited to 1, with 2 being a privilege. Only debate one point at a time, and choose the point that’s easiest to verify or falsify.
On YouTube, Reddit, Substack, and face-to-face, I’ve found that debating multiple points before the first point has been resolved usually results in continuously shifting goalposts, or in the interlocutor ignoring whatever I’ve said about one point to focus exclusively on the point where my response seems weakest.
I’ll emphasize that I consider this to be especially important when debating non-rationalists, because their pride encourages them to change subjects to show that they’re right about “the thing that actually matters”, and because the soldier mindset discourages them from acknowledging your strongest points.
This standard might sound harsh, and it does kill some conversations quickly, but I think that’s preferable to writing a comprehensive reply that accounts for all of their explicit and implicit claims, only for none of them to be acknowledged. In both cases, nobody is convinced of anything—but in the first case, at least I didn’t waste an hour or more.
When debating with rationalists, I’m much more lenient about discussing two or three points concurrently, because I trust that 1) they’ll give credit where it’s due, and 2) even if they appear to have changed the subject, they’re investigating a crux, and they plan to backtrack to the original point before long.
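To make the stack metaphor concrete, here’s a toy sketch (the class and its names are mine, purely illustrative):

```python
class DebateStack:
    """Toy model of the 'one point at a time' rule: a new point may be
    raised only while the stack is below its depth limit, and the current
    point must be resolved before backtracking to the one beneath it."""

    def __init__(self, max_depth=1):
        # 1 for debates outside rationalist circles; 2-3 with trusted interlocutors.
        self.max_depth = max_depth
        self.open_points = []

    def raise_point(self, point):
        if len(self.open_points) >= self.max_depth:
            raise RuntimeError(f"Resolve '{self.open_points[-1]}' before raising '{point}'.")
        self.open_points.append(point)

    def resolve_point(self):
        return self.open_points.pop()  # backtrack to the point beneath, if any
```

With max_depth=1, any tangent has to wait until the current point is settled.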
In this post, Scott Alexander makes a good case for correcting people, but he also provides a few guidelines for determining what counts as nitpicking and what counts as rigor. Here are his caveats:
Let someone be a little wrong if their impact is small and they are not in a mood to debate.
Allow oversimplifications, figures of speech, and misnomers for pedagogical and artistic purposes, unless you can explain why an argument hinges on their choice of terms.
If someone dismissively accuses you of nitpicking, instead of explaining why your distinction isn’t relevant, then they’re bullying.
I’ve tried to summarize his post, but it’s worth reading in its entirety.
I guess that I underestimated his influence among rationalists. Thanks.
Thanks, but why is he “a disaster for rationalist kind”? Does he have significant influence among rationalists (I’d find that surprising), or has he made the general public more tolerant of untruthfulness and, therefore, harder for rationalists to appeal to? (True, but too weak to justify “disaster”.)
I doubt that these jobs will keep the unemployment rate anywhere near the historical average, but I am confident that these jobs will survive.
I have two counterpoints to your claim about economic consumption:
In this case, the absolute measure of humans’ consumption matters more than the relative measure. Unless the overwhelming majority of humans will live in far worse circumstances than they do today, billions of humans is a large-enough customer base to warrant marketing research.
On a per-capita basis, AIs consume far less than humans do. That’s precisely what makes them cheaper to employ. Even if AI agents outnumber humans by a factor of 1,000, I doubt that their total consumption would exceed humans’. (AIs might purchase things to increase their own productivity, but that’s capital investment, not consumption. If they really were consuming in vast amounts—e.g. collecting paperclips just for the heck of it—then I’d question whether we’ve really solved the alignment problem.)
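Spelling out the implicit arithmetic (every number below is invented for illustration):

```python
# Illustrative only; all figures are made up.
humans = 8e9
ais = 1_000 * humans          # the 1,000x scenario above
human_consumption = 20_000    # $/year per person (hypothetical)
ai_consumption = 10           # $/year per agent (hypothetical)

# Total AI consumption exceeds total human consumption only if each agent
# consumes more than 1/1,000 of what a human does.
print(ais * ai_consumption > humans * human_consumption)  # False at these numbers
print(f"break-even per-agent consumption: ${human_consumption / 1_000:.0f}/year")
```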
Would you please explain your comment to those of us who are unfamiliar with him?
I do not understand the real point of this post.
Ooh, I think that the weight-lifting analogy is very apt. Thanks for the insight.
Many artists prefer to use live models, instead of images, as their references. If that weren’t true, then live modelling would have died with the advent of the Internet (if not the camera), but it hasn’t. I’m not sure why artists have this preference, but they demonstrably do.
The author did use comparative advantage as an example, although I do think that the example’s implications should have been more explicit.
I think it’s worth considering the ways in which adults could be employable in a world with AGI. I can think of a few examples of adults being paid to be humans:
Marketing research studies
Figure modelling
Medical challenge studies
Athletic competitions*
Food criticism*
Also, I expect that a few careers will remain extremely resistant to AI adoption, regardless of how sophisticated the AI becomes, due to taboos against AIs being in positions of great authority over humans:
Pastors, priests, etc.
Politicians
Primary childcare provider (i.e. parenting)
Police officers (maybe)
I’d be happy for someone to expand these lists.
*Added during an edit.
Suppose that you disagree with 80% of the people around you about a particular belief, but you’re correct. If the belief is complicated, with lots of supporting premises and independent lines of evidence, then it’s difficult to think about rigorously, so you’re likely to rely on heuristics.
In this case, there are at least two heuristics that will push your belief toward falsehood:
Social desirability bias. Unless you’re unusually contrarian, you’ll face psychological pressure to agree with the people around you.
Availability bias. Because people tend to voice arguments that support their own conclusions, you’ll be exposed to opposing and supporting arguments at a ratio of roughly 4:1. And because people treat claims that are easy to recall as more likely to be true, you’re likely to give the opposing side more credit than it’s due.
In both cases, but especially the second, counteracting the biases requires you to expend effort to generate supporting arguments.
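A toy simulation of that exposure ratio (the population size and seed are arbitrary):

```python
import random

random.seed(0)
N_NEIGHBORS = 10_000
P_OPPOSED = 0.80  # 80% of the people around you disagree with you

# Assume each person voices one argument for their own side.
voiced = ["opposing" if random.random() < P_OPPOSED else "supporting"
          for _ in range(N_NEIGHBORS)]
ratio = voiced.count("opposing") / voiced.count("supporting")
print(f"exposure ratio: {ratio:.1f} : 1")  # ~4 : 1

# If ease of recall tracks exposure, an availability-weighted tally favors
# the opposing side even when its arguments are no stronger.
```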
Christians are an ingroup? Tell that to any Christian living outside of the American South. Ingroup/outgroup statuses are context- and scope-dependent.
Over the last 10-20 years, Christians (particularly fundamentalists) have had very little involvement with cutting-edge AI, both on the technical side and the business side. In this sense, they’re an outgroup of the people who are likely to control ASI.
I second HoVY’s points. The other point that should have made it into the title, in my opinion, is that medical interventions should target problems, not indicators of problems. (Your application of the idiom “don’t shoot the messenger” was especially clever, and I think you could have based the title around it.)
I was also impressed by your disentanglement of two additional means by which glycine prevents or lessens fevers, and your identification of this as a Gettier case, but I doubt that it belongs in the title.