The general way I’ve thought of the DA is that it’s probably correct reasoning, but it’s not the only relevant evidence. Even if DA gives us a billion-to-one prior against being in the first billionth of humanity, we could easily find strong enough evidence to overcome that prior. (cf https://www.lesswrong.com/posts/JD7fwtRQ27yc8NoqS/strong-evidence-is-common)
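To make that arithmetic concrete, here's a rough sketch of the odds update. The 30-bit likelihood ratio is purely an illustrative assumption (the linked post argues evidence of that strength is routine, e.g. learning someone's name):

```python
import math

# Doomsday-style prior odds against being in the first billionth of humanity.
prior_odds = 1e9  # 1e9 : 1 against

# Bits of evidence needed just to bring those odds back to even:
bits_needed = math.log2(prior_odds)
print(f"bits needed: {bits_needed:.1f}")  # ~29.9 bits

# Hypothetical observation worth 30 bits (an assumed figure, not a claim
# about any particular piece of evidence we actually have):
likelihood_ratio = 2 ** 30
posterior_odds = prior_odds / likelihood_ratio
print(f"posterior odds against: {posterior_odds:.2f} : 1")  # under 1 : 1
```

So a single ~30-bit observation would flip billion-to-one prior odds to roughly even, which is the sense in which the DA prior, even if the reasoning is sound, is not decisive on its own.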
What could that evidence be? An alien supercivilization?
It mostly comes down to one’s outlook on x-risk. If we align an AI, then we’re probably good for the future.
But aren’t the chances of alignment small? Like less than 10 percent? Also, an AI could settle the DA once and for all.
I don’t particularly think that we currently have strong evidence against the doomsday argument being accurate.