Economist.
Sherrinford
I appreciate that you posted a response to my question. However, I assume there is some misunderstanding here.
Zvi notes that he will not “be engaging with any of the arguments against this, of any quality” (which suggests that there are also good or relevant arguments). Zvi includes the statement that “AI is going to kill everyone”, and notes that he “strongly disagrees”.
As I asked for “arguments related to or a more detailed discussion” of these issues, you mention some people you call “random idiots” and state that their arguments are “batshit insane”. It thus seems like a waste of time trying to find arguments relevant to my question based on these keywords.
So I wonder: was your answer actually meant to be helpful?
So you think that looking up “random idiots” helps me find “arguments related to or a more detailed discussion about this disagreement”?
In Fertility Rate Roundup #1, Zvi wrote
“This post assumes the perspective that more people having more children is good, actually. I will not be engaging with any of the arguments against this, of any quality, whether they be ‘AI or climate change is going to kill everyone’ or ‘people are bad actually,’ other than to state here that I strongly disagree.”
Does anyone of you have an idea where I can find arguments related to or a more detailed discussion about this disagreement (with respect to AI or maybe other global catastrophic risks; this is not a question about how bad climate change is)?
Expecting that, how do you prepare?
It is an interesting question how justified this stereotype is, given that many regulations aim at creating a single market and reducing trade barriers.
Comparing EU growth to US growth is hard for several reasons, for instance demography, but also the EU's decarbonization efforts.
I know the internal European discourse, which is why I think depicting politicians in Europe as being mostly impervious to “pro-growth ideas” seems like a strawman. It is mainstream in the EU to try to find ways to achieve higher economic growth rates. Everybody is talking about deregulation, but there are very different ideas about what kinds of policies would lead to higher growth rates.
are not completely impervious to pro-growth ideas
Depicting “eurocrats” as mostly impervious to “pro-growth ideas” seems like a strawman.
This stuff is scary: I’ve seen degrowthers
It is unclear how closely such degrowthers are related to the beyond-growth conference used as an example in the previous sentence.
European parliament even hosted a degrowth conference.
The linked abstract does not contain the word “degrowth”. The title is “Beyond growth: Pathways towards sustainable prosperity in the EU”, the abstract is relatively unclear but—among other things—seems to criticize GDP as a measure, and talk positively of “research and innovation”. The executive summary of the study that can be found there seems to talk positively of delivering “greener and more sustainable growth through technological or social innovations” and of “decoupling of economic growth from increased emissions of carbon dioxide”. So in general, this seems to be about limiting the growth of the usage of natural resources in order to stay within sustainable levels.
Europe has become known as a hub of degrowth.
It is unclear what this claim is supposed to mean. The characters “europ” do not appear in the Conclusions of the linked article. It is also not clear what the fact that some authors of papers covering “degrowth” come from Europe (whatever that means in the specific paper) is supposed to prove.
In the last few weeks, I saw some posts or comments arguing why it would be in the self-interest of an extremely powerful AI to leave some power or habitat or whatever to humans. This seems to be an attempt to answer the broader question “why should AI do things that we want even though we are powerless?” But it skips the complicated question “What do we actually want an AI to do?” If we can answer that second question, then maybe the whole “please don’t do things that we really do not want” quest becomes easier to solve.
Right; my point was just that the hypothetical superintelligence does not need to trade with humans if it can force them; therefore trade-related arguments are not relevant. However, it is of course likely that such a superintelligence would neither want to trade nor care enough about the production of humans to force them to do anything.
I just wanted to add that I proposed this because many other possible terms (like “smooth”) might have positive connotations.
With respect to the horses, I did not check Eliezer’s claim. However, the exact numbers of the horse population do not really seem to matter for Eliezer’s point or for mine. The same is true for the rebound of the Native American population.
exponential / explosive
Thanks for helping. In the end, I deleted the post and started from scratch and then it worked.
Sorry, but where/how would I do that?
When I write a post and select text, a menu appears where I can select text appearance properties etc. However, in my latest post, this menu does not appear when I edit the post and select text. Any idea why that could be the case?
That would be great, but maybe it is covered much more in your bubble than in large newspapers etc? Moreover, if this is covered like the OpenAI-internal fight last year, the typical news outlet comment will be: “crazy sci-fi cult paranoid people are making noise about this totally sensible change in the institutional structure of this very productive firm!”
Does this question require that there is only one big filter per species?