So we will have nice, specific things: the prevention of Alzheimer's, say, or a safer, more reliable descendant of CRISPR that cures most genetic disease in existing people. We will also need to have some conversation, because the human economy will be obsolete, and with it states' incentives to care about their people.
I feel like the fundamental problem with this is that while scientific and technological progress can be advanced intentionally, I can't think of an actual example of large-scale social change happening in a planned way. Yes, the thoughts of philosophers and economists have some influence on it, but it almost never takes the shape they originally envisioned. I don't think Karl Marx would have been super happy with the USSR. And very often the causal arrow points the other way: philosophers and economists express and give shape to a sentiment that already exists, formless, in the zeitgeist, because circumstances have changed and caused a corresponding cultural shift. There is a feedback loop there, but generally speaking, the idea that we can have intentional "conversations" about these things and steer them very meaningfully seems more wishful thinking than reality to me.
It generally goes that Scientist Invents Thing, unleashes it into the world, and then everything inevitably and chaotically slides towards the natural equilibrium point of the new regime.
I broadly agree with this, though I'll note two things:
1. Limited steering ability doesn't equal zero steering ability. While there's an argument to be made that people overestimate how much you can do with pure social engineering, I still think there can be multiple equilibrium points, even if a lot of what happens is ultimately controlled by incentives.
2. AIs probably have a much easier time coordinating on what to do, and, importantly, can route around a lot of the bottlenecks that exist in human societies simply by copying, merging, and scaling. So, assuming alignment is achieved, it's very possible for a single human to effect large-scale social change by controlling the economy and military and working their way out from there.