Economist.
Sherrinford
[Question] Which intro-to-AI-risk text would you recommend to...
Well, to be clear, I am not at all an expert on AI alignment. My impression from reading about the topic is that I find the arguments for the impossibility of alignment plausible, while I have not yet found any text telling me why alignment should be easy. But maybe I’ll find that in your sequence, once it consists of more posts?
Okay, but does the Utopia option rest on more than a vague hope that alignment is possible? Is there a description, understandable for non-experts, of how to get there?
“My personal rough guess would be 25% x-risk conditional on making AGI, and median AGI by 2040, which sharply increase the probability of death from AI to well above natural causes.”
Could you please link to any plausible depiction of what the other 75% look like? I am always a bit puzzled by this and would like to know more.
Interesting perspective, and a bit disappointing.
What are the properties that make Substack so successful? At first glance, Substack blogs seem less structured than e.g. blogs on wordpress.com. On Substack, the “Archive” of a blog is just a long list. Distributing new articles via email does not seem like a spectacular feature, and in any case it should be possible on other blog platforms as well. What am I missing?
So it seems this is an argument you would endorse. If so, would you add some numbers for the costs of vaccination vs non-vaccination?
How does your theory explain cross-country differences in vaccination rates?
“It doesn’t feel hard for me to understand.”
I don’t see how your explanations relate to the explanations of the people who gave their answers to this (small) survey. So you have a thesis and some anecdotes and personal impressions, but how do you justify the certainty?
Also,
“The establishment lost a lot of credibility for saying that it was okay to demonstrate unmasked in the BLM protests right after lockdown and then afterward telling people that wearing a mask outdoors is very important.”
Who is this “establishment”, seemingly speaking with one voice and understood by “those people” to be a coherent unit? Do you suggest those people do not get vaccinated because former POTUS Trump, who afaik is proud of having been responsible for fast vaccine development, eroded trust in what institutions say by lying frequently?
Purely subjective personal questions are questions where others cannot reliably check whether you resolved them in an “unfair” way. So reputation also does not work, or at least it takes a lot of time.
I edited the text of my first comment, using the words from Daniel’s comment. Maybe it’s easier to understand now.
Whether reputation works may depend on the questions asked. Suppose I ask whether I will enjoy my trip to Miami, a question that may attract people who don’t even know me but have been there, and the outcome of which cannot be verified. If I can resolve such questions in a way that [edited:] allows me to cash in with my alt accounts, it will take a long time until people can get their suspicions probabilistically confirmed.
Were you surprised by the thought that the incentives related to ownership of the product are a reason for franchising? What was your previous idea of why there is franchising?
“There are three theories I know about for why big corporations pay more.”
Note that all three theories, not only the second one, require that larger firms are more productive and make more money per employee.
“If working at a major corporation is a major life cost, and working in management a bigger one, and these come with higher pay, then a lot of income inequality in developed countries does not represent a gap in desired life outcomes, and it might be more unfair if that part of the gap was closed.”
If large corporations are so bad that people should be discouraged from working there, closing the gap by taxing higher incomes would still be good, as it discourages rent-seeking and rat races.
Suggestion for an alternative model, simpler than gerrymandering: prob(punishment) depends positively on the severity of a norm violation, but there is no threshold where it becomes 1 or 0. Even though you draw a two-dimensional diagram, your model seems to have only one dimension, and so there is some randomness capturing the things left out of it.
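For illustration, a minimal sketch of one such functional form (my own choice of curve, purely hypothetical, not taken from your post): let the punishment probability rise smoothly with severity $s$, e.g. logistically,

$$P(\text{punishment} \mid s) = \frac{1}{1 + e^{-k(s - s_0)}},$$

where $k > 0$ sets how sharply the probability rises and $s_0$ is the severity at which punishment is a coin flip. For every finite $s$ this probability lies strictly between 0 and 1, so more severe violations are punished more often, but no severity level makes punishment certain or impossible.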
To determine which kind of voting makes sense, you should decide whether you want to reward posts that many people weakly agree with or whether you want to reward polarizing posts. In the former kind of system, “the incentive to downvote everything you don’t like to −1” is not a bug, it’s a feature. I’d also prefer a system with 3 extra buttons to an asymmetric one.
I’m glad that you like the draft! I’d like to point out two things, however:
You already evaluated the political content of the post by curating it. To any outside visitor to this site, from curious people lost in hyperspace to journalists or scientists, the stance that most governments are “Lying Liars With No Ability To Plan or Physically Reason” and that “we” are at “war” against the WHO, CDC and FDA will look like the political line of LessWrong, with all that this implies, in particular because you made an exception from the curation criteria.
A curation is (also) intended to make sure that the curated post will continue to get traffic.
Thank you for your reply, Ruby.
What would being explicit and upfront about this category of curated content look like?
To me it seems like that would require something like a disclaimer box at the top of the post:
“Note: LessWrong usually curates posts that embody the virtue of scholarship. This implies balanced, fact-based arguments in which the authors make their line of reasoning transparent, understandable and open to discussion. It excludes referring to the author’s authority as a substitute for an argument. It avoids the use of unnecessarily aggressive rhetoric, in particular based on false statements. This is particularly important in the context of politics discussions, not because these discussions need different rules of analysis on a theoretical level, but because experience suggests that the discussion of politics may be prone to inducing behavior like the disregard of rules of discussion and truth-seeking for only one side of the debate. It is important for LessWrong not to cultivate bias. However, for the present post the mods make an explicit exception and curate it because they want to increase its visibility. They think it is the best summary advice content available on the topic of covid-19. Even though the advice is not verifiable based on the post alone, the mods either believe its statements to be true because they read other texts by the same author that they found convincing, or because they trust the author for other reasons. Moreover, the mods do not endorse the political claims and the obviously false generalizations made in the post.”
This would obviously seem strange, but it is my impression of the reactions to discussions under these posts.
That’s a misleading rephrase. The author said that they have detailed their sources and reasoning extensively elsewhere in their other writing, which I’ll add isn’t hard to find if you just click on the author’s profile. This post doesn’t repeat the reasoning and sources since it’s more of a summary post.
So effectively, you say: These are not just claims, but you have to search for sources and other justifications somewhere in the author’s writings. This puts the burden completely on people who would dispute the claims or are skeptical about them. However, in his other writing, the author also makes several claims that are just claims without sources, in particular when they are claims about what some perceived other people (?) / “everyone” / the media (?) / “we” / [I can’t always say who he is referring to] thinks, says or does:
“Naturally, the public-facing articles all seem to quote the 83%, and ignore the 95% and 99%.”, “because again everyone is on the ‘make the vaccines look unsafe’ team”, “The second we is also everyone collectively, inside the belief system of those who hold this religious model, which I think is roughly half the country”. There are also other misleading or exaggerated claims like “Certainly our vaccine policy has given little or no thought to getting doses for the third world”. Asking for sources or explanations of claims leads to non-answers.
And no, I do not claim that I read all the posts or that I am representing all of Zvi’s posts or all of his answers to comments here. I read several of them, found that they contain useful assessments of the situation along with claims without sources, misrepresentations and rhetoric, and gave up on reading the rest because all of this makes it impossible to say what is true and what isn’t.
Do you mean it’s disconcerting that this post was curated, or that the contents of the post are more broadly disconcerting just for appearing on LessWrong?
The former.
With respect to the original “my current model” post, someone who was enthusiastic about the content suggested that you need to
have good context for ~all the high-level generalizations and institutional criticisms Zvi is bringing in, and why one might hold such views, from reading previous Zvi-posts, reading lots of discussion of COVID-19 over the last few months, and generally being exposed to lots of rationalist and tech-contrarian-libertarian arguments over the years, such that it doesn’t feel super confusing or novel as a package
This possibly also applies here. And that is strange for a showpiece text; it basically signals that exemplary posts are those that are immune to criticism because of the authority of the writer and because others know that the writer is right. Additionally, I do not see how the pitchfork rhetoric is justified, but I assume that at some degree of being an insider of the ‘rationalist community’ you just think that this is normal or justified (that is just my impression, of course).
This post explicitly says that its aim is not to explain what it states. Instead, the author says that people can check sources etc. “elsewhere”. Among the large number of claims and “principles” are, effectively, a call to “war” against US and international institutions, and a nonsensical claim about “governments most places”. And when curating the post, you tell people to “check claims for themselves”. We have discussed these or similar points with respect to previous covid-19 posts, so these norms on LessWrong are not surprising anymore, but they are disconcerting.
This post and the comments are a very interesting read. There is one thing that I find confusing, however. My impression is that in the text and the comments, children are only discussed as a means of fulfilling the parents’ “reproductive goals” and traded off against the opportunity cost of saving humanity (though it is also discussed that this dichotomy is false, because by saving humanity you also save your children).

Probably I am overlooking something, but what I don’t see is any mention of whether expectations about AI timelines, to the extent that you cannot influence them, affect (or should affect) people’s decisions about having children. A relevant number of people seem to expect AGI in something like 8 years and a low probability of alignment. I am a bit confused about the “animal” arguments, but they sound a bit like saying “Okay, even if you believe the world will end in 8 years, if you are in the age span where your hormones tell you that you want children, you should do that”. As someone who is just an interested (and worried) reader with regard to this topic, I wonder whether people in AI alignment just postpone or give up on having children because they expect disaster.