We really need the update! I was going to share this with someone who just now has been hit with the full emotional force of realizing what’s going on with AI… but even this right at the beginning doesn’t seem so applicable anymore:
> Some combination of ‘we run out of training data and ways to improve the systems, and AI systems max out at not that much more powerful than current ones’ and ‘turns out there are regulatory and other barriers that prevent AI from impacting that much of life or the economy that much’ could mean that things during our lifetimes turn out to be not that strange. These are definitely world types my model says you should consider plausible.
Maybe still plausible, but imo at this point less likely than even "all the politicians and world leaders are going to wake up and implement a halfway sensible solution".