What does “relevant models” mean?
Even if we assume the human species is typical, it doesn’t follow that current Capitalist civilization, with all its misincentives (the ones we’re seeing drive the development of AI), is typical. And there’s no reason to assume this economic system would be shared by a society elsewhere.
“do you really want to give up the one shot we have at making a better world for biological life?” is a misleading argument because, as you know, humanity may well not create an AGI that makes the world better for life (biological or otherwise).
“it is exceedingly unlikely that we will destroy life on earth” is a valid objection, though, if it is true.
Yes, I only have what I consider to be an educated suspicion about where current human civilization might fall in the range of possible civilizations. However, in terms of felicific calculus (https://en.wikipedia.org/wiki/Felicific_calculus), weak evidence still counts. If it is all we have to go by, we should still go by it, especially given the gravity of the potential consequences. Lack of strong evidence is not an argument for the status quo; that would be an example of status quo bias (https://en.wikipedia.org/wiki/Status_quo_bias).
Your second line is an emotional appeal.
“Making existential choices on such a basis is always a bad idea. What is needed is better information.” Whichever choice you make, it is being made with weak data. Strong data would be ideal, but going with the option the weak data argues against is worse than going with the option it favors. Of course, if there is a way to get better information, we should do that first if we have time.
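To make the expected-value point concrete, a toy calculation with made-up numbers (purely illustrative, not an actual estimate): suppose the weak evidence puts the probability that continued civilization is net-negative at 0.6, with outcomes of equal magnitude V either way. Then

E[value of continuing] = 0.6 × (−V) + 0.4 × (+V) = −0.2V < 0

so a decision made against what the weak evidence says has lower expected value than one made with it, even though stronger evidence would obviously be preferable.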
“Would you commit suicide if you thought that it was 60% likely that your life would be of negative value?” Not necessarily. However, if I had exhausted all potentially better alternatives, like investigating further, then in principle yes, as I’m a utilitarian. That said, this question has a false premise: I control the impacts of my life, and can make them positive. Not so with civilization.
What are the most cost-effective alignment organizations to donate to? I’m aware of MIRI and https://futureoflife.org/ .
I see, well I’m not sure what to do then. I inherited a lot of money and I want to give most of it to alignment groups.
I’m new to alignment (I’ve been casually reading for a couple of months). I’m drawn to the topic by long-termist arguments; I’m a moral utilitarian, so it seems highly important to me. However, I have a feeling I misunderstood your post. Is this the kind of motive/draw you meant?
edit: reposted this comment as a ‘question’ here https://www.lesswrong.com/posts/eQqk4X8HpcYyjYhP6/could-ai-be-used-to-engineer-a-sociopolitical-situation
[Question] Could AI be used to engineer a sociopolitical situation where humans can solve the problems surrounding AGI?
[Question] What moral systems (e.g. utilitarianism) are common among LessWrong users?
[Question] (Cryonics) Can I be frozen before being near death?
What kind of professional could I discuss this with?
No, I think the same argument could apply to the extinction of humans only; it just seemed less plausible to me that this would happen than that all life on earth would be wiped out.
In fact, I have doubts about whether it is possible to steer AGI in a direction which ends life on earth but does not radically transform the rest of the reachable universe too. But if it is possible, that would be a potential argument for it.