I think it turned out pretty well.
Well, that remains to be seen.
I guess whether > 3 mg/kg is a “lot” compared to other food types is relative to the number of food types the study considered.
I haven’t dug up the France study to see how many foods they looked at that didn’t make the >3 mg/kg cut, but the first study I clicked on after searching Google Scholar just now is a German study that found a median of 160 mg/kg for “cocoa powder” and 39 for “chocolate”. Of the 1,431 food samples they tested, “77.8% had an aluminium concentration of less than 10 mg kg-1. Of the samples, 17.5% had aluminium concentrations between 10 and 100 mg kg-1. In only 4.6% of the samples, aluminium concentrations greater than 100 mg kg-1 were found.” Looking at the histogram in Figure 1, we can place chocolate’s median aluminum level of 39 mg/kg in the top 13.7% or higher, and cocoa powder’s median of 160 in the top 4.6% or higher.
I’m well aware of the irony that in my above post I suggested substituting cocoa powder for chocolate.
In particular, the study notes that “Table 4 shows that the PTWI for aluminium can be reached only by consumption of large amounts of chocolate [42–44].” (PTWI = provisional tolerable weekly intake used by the Joint FAO/WHO Expert Committee on Food Additives).
Are there plenty of other foods with as much aluminum as chocolate? Sure. Am I cutting chocolate out of my own diet anytime soon? No. But since the original poster is planning to take up chocolate consumption specifically for brain/intelligence-related reasons, I figured it was a relevant consideration.
edit: It’s kind of an odd list of foodstuffs the German study considered. The introduction implies but doesn’t state that they selected foods that they expected to have at least some aluminum content based on prior research. I also can’t account for the huge discrepancies between the French and German studies in terms of mg/kg aluminum levels detected.
That’s the point of the article: agriculture allowed the Earth to support a vastly larger human population than it could have otherwise, but at a cost.
Personally I’m more optimistic than the author of the article I linked that the median quality of life of a human on Planet Earth will ultimately exceed the median quality of life of a human on an Earth where agriculture had never been developed—in fact I think there’s a good chance that that’s already the case. But I don’t think it’s completely obvious, for reasons the author describes in detail.
I’m very sorry to hear about your dog. It’s a very difficult thing to go through even without any predisposition towards depression.
This is probably an idiosyncratic thing that only helps me, but I find remembering that time is a dimension just like space helps a little bit. In the little slice of time I inhabit, a pet or person who has passed on is gone. From a higher-dimensional perspective, they haven’t gone anywhere. If someone were to be capable of observing from a higher dimension, they could see the deceased just as I remember them in life. So in the same way that someone whose children are living far from home can remind themselves that their children are in another place, likewise your dog is living happily in another time. English doesn’t quite have a tense that conveys the sentiment I want to convey, but I think you get the idea. Don’t know if that line of thought does anything for you—I find it a small but useful comfort.
Re actually doing exercise/positive self-talk when you’re down, setting up little conditionals that I make into automatic habits by following them robotically has sometimes worked for me. “IF notice self getting anxious—THEN take five minute walk outside”. Obviously setting up those in the first place and following through on them the first n times only works when in an OK mood, but once they become habits they’re easier to follow through on in more difficult states of mind. I’ve also found the Negative Self-Talk/Positive Thinking table at the bottom of the page here to be useful.
But hard things are hard no matter what. Sounds like you’re doing the right thing now by making the most of the time you have together. Best of luck to you.
I keep a daily journal. Beginning of day: Two things that I’m grateful for. End of day: Two things that went well that day, two things that could have gone better. Each “thing” is usually only a sentence or few long. I find that going back through the end-of-day sentences every so often is useful for doing 80-20 analyses to find out what seems to be bringing me the most happiness / dissatisfaction (at least as judged by my end-of-day assessments).
I’ve taken the survey.
I find that negative visualization in conjunction with Mark Williams’ guided meditation “Exploring Difficulties” is useful for getting me in that stoic mindset of being more okay with a worst-case scenario. (Or at least, I hope so—I guess I’ll see how well it worked if the worst-case scenario ever comes to pass.)
One could bury Wikipedia, the Internet Archive, or a bunch of other items suggested by the Long Now Foundation.
Since no one’s yet included the links to the Long Now Foundation’s blog posts in which they discuss suggestions for such items and other projects that are attempts in this direction, here they are:
http://blog.longnow.org/02010/04/06/manual-for-civilization/
http://blog.longnow.org/category/manual-for-civilization/manual-book-lists/
Networks of the Brain by Olaf Sporns certainly doesn’t cover all of computational neuroscience, but is a good accessible introduction to using the tools of network theory to gain a better understanding of brain function at many different levels.
I use Autohotkey on Windows for that purpose.
No; my script only contains the handful of unicode characters I commonly use, and is so idiosyncratic to me that it wouldn’t be of much use to anyone else (mine includes autoreplacements for directories, email addresses I commonly type, etc.). But it’s easy enough to make your own with whatever characters you use—the syntax is simply
::text-to-replace::desired-replacement
::alpha::α
::em::—
etc.
This seems like a slippery slope. Minorities tend to have shorter life expectancies than whites, at least in the U.S. and U.K. Do their votes then count for less?
Relevant: From OpenAI’s “Training Verifiers To Solve Math Word Problems”: “We also note that it is important to allow the model to generate the full natural language solution before outputting a final answer. If we instead finetune a 6B model to directly output the final answer without any intermediate steps, performance drops drastically from 20.6% to 5.2%.” Also the “exploration” linked in the post, as well as my own little exploration restricted to modulo operations on many-digit numbers (via step-by-step long division!), on which LMs do very poorly without generating intermediate steps. (But see also Hendryks et al: “We also experiment with using step-by-step solutions. We find that having models generate their own step-by-step solutions before producing an answer actually degrades accuracy. We qualitatively assess these generated solutions and find that while many steps remain illogical, they are often related to the question. Finally, we show that step-by-step solutions can still provide benefits today. We find that providing partial ground truth step-by-step solutions can improve performance, and that providing models with step-by-step solutions at training time also increases accuracy.”)
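To make concrete what I mean by “step-by-step long division” for modulo, here’s a minimal Python sketch (my own illustration, not code from any of the cited papers) of the intermediate-remainder trace one can prompt a model to emit instead of asking for the answer directly:

```python
def mod_with_steps(n: str, m: int):
    """Compute int(n) % m the way long division does: fold in one digit
    at a time, tracking the running remainder. The list of intermediate
    remainders is the 'scratchpad' a model could be asked to produce."""
    r = 0
    steps = []
    for digit in n:
        r = (r * 10 + int(digit)) % m
        steps.append(r)
    return r, steps

remainder, trace = mod_with_steps("123456789123456789", 7)
# Each entry of `trace` is one easy single-digit step; in my experiments,
# models that emit the whole trace do far better than those asked to
# output `remainder` in one shot.
```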
The number of times I’ve read an abstract and later found that the results provided extraordinarily poor evidence for its claims (or alternatively, extraordinarily good evidence—hard to predict which I’ll find if I haven’t read anything by the authors before...) makes this system suspect. This seems partially conceded in the fictive dialogue (“You don’t even have to dig into the methodology a lot”), but it helps to look at it at least a little. I knew a senior academic whose system was as follows: read the abstract (to see if the topic of the paper is of any interest at all) but don’t believe any claims in it; then skim the methodology and results and update based on that. This makes a bit more sense to me.
> Really, though, shouldn’t we be able to do something to protect the elderly or other vulnerable people without causing everyone else six months of financial hardship and lost relationships?

> “Six months...” the man squirms. “I might need you to do this for a year or two.”
Not exactly a fair description of what the public health measures have been. What country has been in lockdown for “a year or two” (besides China)?
> The harms caused by COVID suppression were larger than the harms of COVID itself for most people.
Possibly, but I doubt the same can be said for the net hedon loss. The great-uncle who died of COVID may have been quite old, but he still probably had a few years ahead of him: an expected 11 if he was 75, or 6 if he was 85. Those are years his family misses out on spending with him as well. The 10% of those infected who are still experiencing symptoms after 12 weeks (not depression: most frequently fatigue, cough, headache, loss of taste, loss of smell, myalgia), most of whom are likely to still be experiencing these issues for another 12 weeks or more, are not mentioned, nor is the impact of this on their own lives and livelihoods.
Most importantly, this really seems to strawman our poor bureaucrat, as he doesn’t even mention the actual point of these measures: to serve as a stopgap until herd immunity, ideally primarily by vaccination so as to mitigate the above harms + further harms caused by hospital overload. Meanwhile, vaccination is the primary thing that our actual public health bureaucrats have been hammering on for the past year. I get the feeling that this isn’t discussed in this post because it doesn’t fit the narrative.
(edited to add context to initial quote)
Thank you for such a comprehensive rundown! I’ve bookmarked this as I expect/hope to be in a situation in the future when this comes in handy.
I hate to say it, but the images are not coming through for me, as perhaps you’ve already noticed!
Given that the early data I’ve seen suggests that efficacy of 3 doses vs. omicron is similar to that of 2 doses vs. delta—probably a bit lower, but at least in the same universe—I’ve been using it largely as is, multiplying the final output by 2 to 3 based on what I’ve seen about the household transmission rate of Omicron relative to Delta. I know some other boosted people who have used it in a similar fashion. There’s so much uncertainty in the model assumptions that its best use in my view is to get a very broad-strokes, order-of-magnitude idea of the risk, which has been extremely useful for friends and relatives who have just wanted a baseline idea of whether the risk of getting COVID when participating in a particular activity is more like .01% or .1% or 1% or 10%. (Note: I doubt that said friends and relatives would have been able to use it in this way without my help, since it requires a little math and they’re not math types.) So I guess my main recommendations would be:
- Don’t get rid of it even if you aren’t confident in the Omicron data—if you can produce results that are probably in the right order of magnitude, it’s still useful! If you aren’t up for a full Omicron overhaul, but you think there’s some back-of-the-envelope adjustment that could give results that are probably the correct order of magnitude, I think applying that—with suitable caveats about accuracy—would be preferable to taking the site down or leaving it as is.
- It’s easy to forget how many people are not math people whatsoever. Best practice in risk communication is often considered to be communicating numbers as percentages, as well as contextualized frequencies—not just ‘X-in-a-million’, but something like “X out of Y people (for context, Y is roughly the number of people living in Z)”—as there are a lot of people who don’t really understand percentages and need a little context to understand frequencies. In my ideal world the output would make the chance of getting COVID from this specific activity clear as a percentage and as a contextualized frequency, as well as the chance of getting COVID from this activity in a year under the assumption that you do this activity every N weeks, where N can be entered by the user.
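For the year-long aggregation in particular, the arithmetic I have in mind is just compounding independent per-event risks (the function name and numbers here are my own illustration, not part of the site):

```python
def annual_risk(per_event_risk: float, weeks_between: float) -> float:
    """Chance of at least one infection over a year, assuming each
    repetition of the activity is an independent event carrying the
    same per-event risk, repeated every `weeks_between` weeks."""
    events_per_year = 52 / weeks_between
    return 1 - (1 - per_event_risk) ** events_per_year

# A 0.1% per-event risk repeated weekly compounds to roughly a 5% annual risk.
```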
UK’s ONS has a nice comparison with controls which shows a clear difference, see Fig 1. (Note that this release uses laboratory-confirmed COVID-19 only, unlike some of their other releases.)
> The way that data set is presented is infuriating – there are tables that list raw counts without reference to the sample size (maybe it’s an estimated raw number for the whole country, in which case they’re quite small)
This is the UK Office for National Statistics—their usual is to report estimated numbers for the whole country. Easy to miss but it’s in thousands—scroll to the far right of each table with raw numbers and you’ll see that stated near the top. So Table 1 estimates 1,332,000 UK residents with Long COVID, which is in line with the 2% figure stated in Table 4 if we assume that it’s talking about the whole country.
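As a quick sanity check (the ~67 million population figure is my own round number for the UK, not from the report):

```python
# 1,332,000 estimated Long COVID cases out of roughly 67 million UK residents
share = 1_332_000 / 67_000_000
print(f"{share:.1%}")  # prints "2.0%"
```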
> I presume this is listing their health conditions before Covid since it makes no sense the other way, but am still somewhat confused.
Footnote 7 says “Health/disability status is self-reported by study participants rather than clinically diagnosed. From February 2021 study participants were asked to exclude any symptoms related to COVID-19 when reporting their health/disability status. However, in practice it may be difficult for some participants to separate long COVID symptoms from unrelated exacerbation of pre-existing conditions, so these results should be treated with caution.”
> What’s even stranger is this is now people who had Covid over 12 weeks ago, instead of the general population, and the estimate has gone down – 2.06% to 1.46%.
The title of the table can be parsed different ways, but I’m pretty sure that what this table is showing is, “Of people living in private households with self-reported long COVID, what proportion of them say that they first had COVID at least 12 weeks previously” (1.46%). We can see from Footnote 1 that the definition of Long Covid for this study was “Would you describe yourself as having ‘long COVID’, that is, you are still experiencing symptoms more than 4 weeks after you first had COVID-19, that are not explained by something else?” So presumably the remaining 98.54% of people with self-reported long COVID said that they first had COVID at least 4 weeks but less than 12 weeks previously.
The table with the 2.06% is saying, “Of all people in the country, what percentage of them have long COVID of any duration”, i.e., 4 weeks or longer. So I don’t think there’s a contradiction here.
> Matt Bell was referencing the UK data set above so I have no idea how he can get 2.8%, and in fact my reading of the link says he has it somewhat lower than that but still strangely high.
I also tried and failed to figure out how he gets this number.
There is a separate study by the Office for National Statistics with controls (a later one than the one Matt Bell mentions, with different methodology) that I found useful—report is here—though annoyingly it doesn’t break the data down by individual symptoms. Figures 1 and 2 also illustrative with respect to duration of symptoms. The report is pretty comprehensive but the data tables are here, Tables 1-4 show comparisons to controls.
Bottom line is summarized by the points at the top, reproduced below; note that only “Approach 3” uses self-reported long COVID:
“Approach 1: Prevalence of any symptom at a point in time after infection. Among study participants with COVID-19, 5.0% reported any of 12 common symptoms 12 to 16 weeks after infection; however, prevalence was 3.4% in a control group of participants without a positive test for COVID-19, demonstrating the relative commonness of these symptoms in the population at any given time.
Approach 2: Prevalence of continuous symptoms after infection. Among study participants with COVID-19, 3.0% experienced any of 12 common symptoms for a continuous period of at least 12 weeks from infection, compared with 0.5% in the control group; this estimate of 3.0% is based on a similar approach to the one we published in April 2021 (13.7%), but is substantially lower because of a combination of longer study follow-up time and updated statistical methodology. The corresponding prevalence estimate when considering only participants who were symptomatic at the acute phase of infection was 6.7%.
Approach 3: Prevalence of self-reported long COVID. An estimated 11.7% of study participants with COVID-19 would describe themselves as experiencing long COVID (based on self-classification rather than reporting one of the 12 common symptoms) 12 weeks after infection, and may therefore meet the clinical case definition of post-COVID-19 syndrome, falling to 7.5% when considering long COVID that resulted in limitation to day-to-day activities; these percentages increased to 17.7% and 11.8% respectively when considering only participants who were symptomatic at the acute phase of infection.”
I once TA’d a statistics class in which the chocolate/Nobel Prize thing was used as the prototypical example for why correlation doesn’t equal causation. Scientific American describes some problems with the study and plausible alternative explanations.
On the other hand, the cocoa may have some health benefits with respect to all-cause mortality, and the flavonoids likely have cognitive and other health benefits.
On the other other hand, the sugar surely doesn’t, and chocolate has a lot of aluminum and maybe lead—the latter is definitely not good for your brain, and the former might not be.
In any case, if you’re going to take it up for health reasons, a spoonful of unsweetened cocoa in your oatmeal or coffee every morning is probably better than a Cadbury’s egg.