1. AGI is happening soon. Significant probability of it happening in less than 5 years.
[Snip]
We have no remaining obstacle in mind that we expect would take more than 6 months to overcome once efforts are invested to take it down.
Forget about what the social consensus is. If you have a technical understanding of current AIs, do you truly believe there are any major obstacles left? The kind of problems that AGI companies reliably could not tear down with their resources? If you do, say so in the comments, but please do not state what those obstacles are.
“AGI” here is undefined, and so is “significant probability”. When I see declarations in this format, I downgrade my view of the epistemics involved. Reading stuff like this makes me fantasize about not-yet-invented trading instruments that would let me take your money without the counterparty risk of social betting.
Strong-upvoted because I doubt the implications will be adequately appreciated across LW/EA. Some cause ideas are astronomically noisy, sometimes almost deliberately so, in the service of “finding” the “highest-potential” areas.
Above some (unknown) point, the odds that the claimants are somehow confused or exaggerating should rise faster than the further increments in ostensible value. No doubt they will claim to have insight strong enough to pull the estimate back up to the top of the EV curve. That doesn’t seem credible, even though I expect the optimal effort to invest in those causes is greater than zero, and even though their individual arguments are often hard to argue against.
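To see why the EV can peak rather than keep climbing, here is a toy model of my own (the threshold $v_0$ and decay rate $\alpha$ are illustrative assumptions, not anything the comment specifies): let $v$ be the ostensible value of a cause, and suppose the credibility of the claim decays as a power law above some threshold:

$$\Pr(\text{sound}\mid v)=\min\!\left(1,\,(v_0/v)^{\alpha}\right),\qquad \alpha>1,$$

$$\mathrm{EV}(v)=v\cdot\Pr(\text{sound}\mid v)=\begin{cases}v, & v\le v_0,\\[2pt] v_0^{\alpha}\,v^{\,1-\alpha}, & v>v_0.\end{cases}$$

For any $\alpha>1$, expected value rises up to $v_0$ and then falls, so ever-larger ostensible values buy less expected value, not more, unless the claimed insight genuinely lowers $\alpha$.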