Economist.
Sherrinford
I think even that signature tagline version does not work so well, as people who do not know it would possibly not understand that you are referring to a specific organization. It would at least need to be
“Anna from
CFAR—a center for …”
Sorry if I was not clear enough but what you write is what I meant as well.
I agree that our metabolism is adapted to eating a mixed diet, but that mostly means you should not blindly delete animal products from your diet. It is theoretically possible to replace animal products with other things, given that we live in a technologically different society. Of course you can say “we do not know what to replace on the micro level”, or make Chesterton’s Fence arguments, but then it is a bit unclear to what kind of diet we should “return”. You can make the adapted-metabolism argument about any selective diet. Maybe we have to eat offal because our ancestors did, or eat chicken soup because my aunt did, because these things contain very important things we do not fully understand. Or maybe our ancestors had to eat these things because they were efficient ways to get protein and fat into their bodies, and we already consume enough of that, and too much of the bad things they also contain that we do not fully understand. So the adaptation argument alone is not enough.
Not to mention that of all of the hunter gatherer tribes ever studied, there has never been a single vegetarian group discovered. Not. A. Single. One.
I think this does not prove as much as the “Not. A. Single. One.” part seems to try to hammer home to the reader. It merely shows that people that evolve under conditions of scarcity and extremely low technology do not get a strong evolutionary benefit from excluding animals from their diet. But do vegans in general assume the opposite? Additionally, India might be a relevant case study here, because vegetarianism seems to have been common there for a long time.
There’s the sniff test. A large percentage of male vegan influencers look pale and sickly. (I’m not going to name names, but if you follow the space at all, you’ll know who I’m talking about, because it could refer to so very many of them.) Of course, you can build muscle and be fit as a vegan, but it is much harder, and we know that muscle mass is a significant predictor of all sorts of positive health outcomes.
I do not “follow the space” of male vegan influencers, so I cannot judge it. However, I would like to ask for a comment on what vegan strongman Patrik Baboumian, “face of a campaign by the animal rights organization PETA”, said in an interview (Google translation):
He had long since become a figurehead of the vegetarian movement. He felt the pressure: “I was afraid of failure if I also eliminated dairy products. They were the most important source of protein for me as a vegetarian strength athlete.”
But things turned out differently. As a vegan, Baboumian suddenly had to eat even less, he says, “because my metabolism became more efficient.” Animal protein acidifies the metabolism because it is rich in sulfur-containing amino acids, “and when these are metabolized, a lot of acid is produced.”
With plant proteins, things were different: “My acid-base balance was suddenly balanced, and that had a very positive effect. For example, all the inflammatory processes that automatically arise during strenuous exercise healed much better. So I was able to train more effectively with less protein and fewer calories.”
this regime seems less effectively restrictive of practical freedoms than, for example, the current regime in the United Kingdom under the Online Safety Act. They literally want you see ID before you can access the settings on your home computer Nvidia GPU. Or Wikipedia.
This seems to be a strong claim considering that it is not supported by sources or explanations. ChatGPT says that the claim as stated is not true “as far as what the law actually requires. But there are partial truths and concerning ambiguities which make it reasonable people are worrying.” (Long ChatGPT version here. I’d be grateful for corrections.)
I asked ChatGPT whether the claim “under the uk Online Safety Act, they literally want you see ID before you can access the settings on your home computer Nvidia GPU or Wikipedia” is true. (In the original claim that I wanted to verify, it was “computer Nvidia GPU. Or Wikipedia.” I put that together to make a whole sentence of it.) It answered that the claim as stated is not true “as far as what the law actually requires. But there are partial truths and concerning ambiguities which make it reasonable people are worrying.” Below, I quote the detail part of the answer. I did not verify anything of that answer.
What the UK Online Safety Act does say
Here are some important parts:
The Online Safety Act 2023 creates duties for online platforms (websites, social media, services that let users post content, etc.) to protect users — particularly children — from illegal content or harmful material.
Ofcom, the UK regulator, is empowered to designate certain services as “Category 1 services” (among others). These are large/risky platforms under the law’s framework.
If a platform is designated Category 1, it may have to put in place more stringent safety requirements. One of those could include verifying the identities of contributors or users in some contexts. For example, Wikipedia has contested that some duties may require that many volunteer editors be identity-verified.
The law also includes requirements for platforms to use “robust” age verification for content that is age-restricted (for example adult content) and to prevent children from accessing harmful material.
What seems to be fueling the confusion
Several sources of misinterpretation or exaggeration:
Category 1 worries — Because Category 1 services could end up with strong duties, including possibly identity verification of contributors/editors, people are concerned that sites like Wikipedia may be forced to require IDs. That concern is real.
Age verification for restricted content — Law requires platforms serving adult content or content harmful to minors to verify that users are over a given age. That sometimes involves photo ID, facial estimates, etc. But that’s for content access, not for hardware settings or personal tools.
Misinformation / mis-wording — Some social media posts have exaggerated (“you’ll need ID for everything”) or conflated “platform content moderation settings” with “hardware/software settings.” E.g. someone claimed that teens under 18 can’t access Nvidia’s GPU control panel unless identity-verified. I did not find credible source backing that particular example.
Conclusion: Is the claim true?
No: the law does not impose showing ID for configuring or accessing hardware/software settings on your personal home computer, such as Nvidia GPU control panel.
It might impose identity verification under certain conditions for online services, especially if they are designated “Category 1” and have user-generated content, or serve age-filtered content. But even then, the law is not yet fully implemented in many respects, and any identity-verification requirement would have to be legally justified and proportionate.
So the claim is false in the literal sense, but has a kernel of truth (concerning online services and identity verification duties) that can lead to confusion.
I do not think that such a theoretically possible effort is comparable to site moderators summarizing and publishing the information in an argument.
This is another comment where I do not understand the downvoting.
Why do people downvote such a comment, exactly?
I confirm that my understanding of top author was close to what Said describes here.
I am surprised that user data is analyzed that way, and then also that it is published here when someone has left or declared intention to do so.
Having read the post “Does Trump’s AI Action plan have what it takes to win?” by Peter Wildeford, I realize that I do not understand what the word “winning” means here. I searched the White House document for the word and found it almost exclusively in the introduction. What is that race? What does it mean to win it? What happens next?
The reference to the space race in the introduction does not help (“Just like we won the space race, it is imperative that the United States and its allies win this race.”). According to Wikipedia, the Soviets “achieved the first successful satellite launch, Sputnik 1, on October 4, 1957. It gained momentum when the USSR sent the first human, Yuri Gagarin, into space with the orbital flight of Vostok 1 on April 12, 1961. These were followed by a string of other firsts achieved by the Soviets over the next few years.” Then the US was the first country to land someone on the moon. So it won the moon race, but that did not mean the space race ended decisively. There were other space “firsts”, and being first was mostly symbolic. Maybe there are better comparisons? In the case of nuclear weapons, being first to build them was important, but making that an end point to other countries’ nuclear programmes would have required very unscrupulous behavior; therefore the “race” was conditional on the war against the Axis, or maybe even conditional on the war against the Nazis. The race was mainly ended by winning the war.
So what does it mean to win the AI race? Peter Wildeford writes: “I do expect some geopolitical ‘winner takes all’ or ‘winner takes most’ dynamics to achieving AGI, so in that sense the racing is very accurate. Whoever has a lead in developing AGI will have a significant say in shaping the post-AGI society, and it’s important for that to be shaped with freedom and American values, as opposed to authoritarianism.” What does it mean to “have a significant say in shaping the post-AGI society”? Is it like being the first country to have a nuclear bomb and then ending other countries’ efforts? Or is it like being the first country to have a nuclear bomb and then not doing that? Or is it like being the country that has Apple and Meta and Alphabet and Microsoft? What does this “significant say” mean, concretely?
PW writes that “1. The Plan shows refreshing optimism” because “Historically, scientific progress has brought much wealth and opportunity to all of humanity. If AI becomes capable of automating this scientific progress and innovating across many domains, it is genuinely plausible we could enter into a true Golden Age. If done right, this would create a world where everyone is fully free and empowered to self-determine and self-actuate, without any barriers to living the lives they want to live.” I do not see the plan’s recipe for that, though maybe I am just overlooking it. How does this work if “3. The Plan acknowledges AI’s transformative potential but not its unique challenges” and “The problem is that the Plan focuses solely on the familiar risks from AI and ignores far more pressing future AGI problems.”? In the context of the whole post, the section under Heading 8, “8. Retraining might not be enough to handle AGI-driven disemployment”, reads as though PW sees a severe risk of social catastrophe, while at the same time thinking that we should consider it somewhat more without letting that reduce our optimism. All in all, the post reads like “let’s make sure we can win this race by really speeding up a lot! And then maybe we should also think a bit about whether we are moving in the right direction.”
As a side note, with respect to the renewable-energy part, I don’t understand why pointing out that climate change is an important problem should be called a “crusade for climate change awareness”.
When someone makes a list of claims and some of the words are clickable, I expect the link to lead to some evidence for the claim, or at least a very clear example if the claim is based on common knowledge. Instead, the claim “Europe’s war against air conditioning continues to be truly absurd.” does not lead to anything that would illustrate a “war” (not even a metaphorical one) against air conditioning, and it does not show anything about “Europe’s” current policy at all. It just leads to a tweet by Rob Wiblin, who retweets a statement by the French nationalist M. Le Pen about air conditioning (a mix of some policy intention and claims about French “leaders” and “elites”) and says: “European countries with hot summers should have AC in most buildings, and we should install solar panels that supply the necessary electricity just fine on hot and sunny days. It’s crazy AC is uncommon in the UK — doubly so in France.” It is unclear what is controversial in the first sentence; the second sentence may be based on an accurate description of the situation, but the tweet does not show that.
I think the old norms according to which politics content was seen as potentially problematic and therefore should at least be based on good epistemics had their advantages. But maybe I misremember those times.
I won’t discuss tpoasiwid (“the purpose of a system is what it does”) here, but I note that your claim is completely different from alleging that (1) there is a cult of pain that (2) is rooted in ethics that developed in Malthusian times and (3) now drives policy choices. If everything that is relevant is tpoasiwid, then we do not need to claim anything about motivations driving policies.
Thanks. The French example sounds like a regulatory definitions problem? I do not know the motivation for the Geneva one. I do not see how this substantiates the cultural scepticism point, and there seem to be many explanations that are more likely than a “cult of pain”. Your point about Zurich demonstrates that innovations and changes in buildings are often complex due to institutions, laws and market environments.
If a “cult of pain” or a positive attitude towards suffering were the driver behind European policies, I would expect to see policy documents approving e.g. of death during heatwaves. Instead, EU documents usually emphasize this as a severe problem and a motivation to promote climate adaptation policy (see e.g. this one by the EEA).
I agree that thinking about positive-sum situations as zero-sum is bad, but one should be cautious about assuming other people’s motivations. You make the strong claim that the policies you list as examples are motivated by a cult of pain rooted in a moral heuristic from Malthusian times. This seems strange because there are more recent developments that should have made a stronger, or at least an equal, impression on moral intuitions, like the suffering during the industrial revolution, or carbon emissions and climate change. The “cult of pain” explanation does not seem like a straightforward explanation for what you see as irrational collective/societal behavior.
Your question about “Germans silently suffering in their overheated apartments with no air conditioning” seems to amount to asking why they have no AC units. Possible answers are: because of the typical problems in housing markets, because of imperfect regulation, because of high electricity prices, or because heat waves were perceived as less of a problem a while ago. Who has said he or she does not own an AC unit in order to do “repentance for the carbon footprint of their holiday in Turkey the other year”?
Of course there are people who “believe in degrowth”, but it is not a dominant attitude. The European Commission, for example, framed the European Green Deal as a “growth strategy that protects the climate”.
Would you please provide some references for these claims? For Germany, my assessment is the following:
The permit requirements do not seem to be against AC in particular (perplexity link), but arise from all kinds of reasons like monument protection. You may find this annoying, excessive or wrong, but if some people have a preference for conserving old buildings, that is certainly different from a “cult of pain”.
As part of policies to increase energy efficiency, you may get subsidies for installing an AC unit (depending on the use case), here is a website by Bosch explaining the cases.
Side note: In Germany, electricity is expensive; however, you can use your rooftop photovoltaics electricity for your electricity consumption including AC (which is cheaper than electricity from the grid and often coincides with times of high temperatures).
Which consumer advice speaks against AC? The Verbraucherzentrale (German “consumer advice centers”: associations that provide advisory services under a government mandate) gives advice on what to take into account when buying an AC. They add cost-benefit advice by noting that a fan can be much cheaper, due to high electricity prices (here, here).
I don’t see how the “Cultural scepticism” point could be verifiable, and in particular how to distinguish it from a lack of knowledge about AC units.
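As a rough illustration of the fan-versus-AC cost-benefit point above: a back-of-the-envelope comparison, where all numbers (a grid price of 0.35 €/kWh, a 45 W fan, an 800 W portable AC unit) are my own illustrative assumptions, not figures from the Verbraucherzentrale:

```python
# Illustrative running-cost comparison: fan vs. portable AC unit.
# All figures are assumptions for illustration, not measured values.
PRICE_EUR_PER_KWH = 0.35   # assumed German household grid price
FAN_WATTS = 45             # assumed fan power draw
AC_WATTS = 800             # assumed portable AC power draw

def cost_per_hour(watts: float, price: float = PRICE_EUR_PER_KWH) -> float:
    """Electricity cost in EUR of running a device for one hour."""
    return watts / 1000 * price

fan_cost = cost_per_hour(FAN_WATTS)
ac_cost = cost_per_hour(AC_WATTS)
print(f"fan: {fan_cost:.3f} EUR/h, AC: {ac_cost:.3f} EUR/h, "
      f"ratio: {ac_cost / fan_cost:.1f}x")
```

Under these assumed numbers the AC costs roughly 0.28 EUR per hour against about 0.016 EUR for the fan, i.e. around eighteen times as much, which is the kind of gap that makes the “a fan can be much cheaper” advice unsurprising at high electricity prices.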
Thank you for writing this statement on communication strategy and also for writing this book. Even without knowing the specific content in detail, I consider such a book to be very important.
Some time ago, it seemed to me that relevant parts of the AI risk community were ignoring the need for many other people to understand their concerns, instead thinking they could “simply” create a superintelligent and quasi-omnipotent AI that would then save the world before someone else invents paperclip maximizers. This seemed to presuppose a specific worldview that I didn’t think was very likely (one in which political action is unnecessary, while technical AI safety still has a reasonably good chance of success). (I asked the forum whether there were good intro texts for specific target groups to convince them of the relevance of AI risk but the only answer I received made me search somewhere else.) However, there is good outreach and a lot of policy work being done, and the discussion of communication strategies and policy strategies seems extremely necessary.
One of your arguments is that you have a “whole spiel about how it’s possible to speak on these issues with a voice of authority”, referring to the Nobel laureates etc who warn against AI risk, and “if someone is dismissive, you can be like “What do you think you know that the Nobel laureates and the lab heads and the most cited researchers don’t? Where do you get your confidence?”” With respect to the Californian law proposal, you write: “If people really believed that everyone was gonna die from this stuff, why would they be putting forth a bill that asks for annual reporting requirements? Why, that’d practically be fishy. People can often tell when you’re being fishy.” Sometimes, however, it seems suspicious when people appeal to authorities when asked to explain their ideas. Referring to Nobel laureates can be an introduction to your argument or you can refer to them later on, but to be convincing, you need to be able to actually explain the issue. Of course, you can use the authority argument in a supportive way, but that will not be enough, also because policymakers and everybody interested in policy debates receive contradictory claims about AI all the time.
Acting and talking “as if it’s an obvious serious threat” may be helpful to signal your seriousness. However, a very strong way of signaling that you are convinced of an issue is gluing yourself to a street or starting a hunger strike, and, though it is hard to say what a counterfactual world would look like, it seems that these actions did not meaningfully increase support for climate policy. It is hard to say how minds change (so people write whole books about it). But it seems that the effectiveness of a signal strongly depends on context and on how well people have been prepared for what you are going to tell them. (I assume that is why EY wrote the Sequences.)
If your discussion partner is already convinced of the relevance of the topic, then of course you should not be like “Sorry if I am even bothering you about such nonsense”. By contrast, if the audience perceives your asks to be too radical relative to their prior, they may be deterred, and you may not even have their attention to explain. In particular, it seems to me that talking about certain radical ways to stop AI risks, which has happened in the past, might be actively dangerous and off-putting. Of course, demanding extremely radical action shows the audience that you are very convinced of what you say. Yet they may also think you are a crank, because serious people would not do that. You at least need to have enough time to make your point about the Nobel laureates.
Yes, the Overton window may have shifted and be shifting, but sometimes the Overton window shifts back again. In 2019, Greta Thunberg’s “How dare you” speech was possible (and had impact); nowadays it is not. This is possibly why “Most elected officials declined to comment” on your book and only “gave private praise”.
“If people really believed that everyone was gonna die from this stuff, why would they be putting forth a bill that asks for annual reporting requirements?” This is another parallel to climate activists. If you are really serious about climate change, wouldn’t you demand stopping most carbon emissions instead of agreeing that subsidies are paid to solar power? Maybe. But you can demand one thing and also support the other one. Some people or political groups will not agree that issue X is important and they will not agree to radical action against X, but still be okay with some weak action against X; that seems like normal politics.
Having too many links may be confusing, but some more may be better than just Amazon.
I really welcome the announcement that CFAR is restarting. When I attended a workshop, I liked the participants, the lecturers, the atmosphere, and the impact of committing time to work on problems that participants had previously procrastinated. That said, a bunch of thoughts and questions:
I am not sure whether there is really some specific “rationality magic” about these workshops. The CFAR technique collection contains cool techniques, but it does not really feel that different from what you might do in a time-management/micro-habits/GTD/whatever workshop, combined with some things that seem like group coaching, psychological process consulting, or things that at least feel a little woo.
There might be a specific group dynamic going on in these workshops that has to do with the commitment atmosphere, self-expectations, selection effects, and the payment of $5000. This may get some people to become productive or whatever, but I assume it can also be unhealthy for others (note that not all unhealthy developments are on the level of psychosis or mania).
I attended a free workshop in Prague in 2022, so maybe some of the effects were different there. Nonetheless, I would like to know what insights you generated with those workshops (assuming that was evaluated systematically); I think they were held to generate data.
It seems positive that “circling” is not mentioned as a “CFAR classic”.