It might be interesting to spend some time trying to construct systems that wouldn’t kill themselves because of the dynamics you describe, or that would kill themselves only extremely slowly (maybe it’s good to think of this in terms of how much stuff you can get done or how much tech development you can do without killing yourself, so that we don’t consider just slowing down the pace of everything uniformly a win).[1] In particular, I think it is interesting to consider configurations of the following form:
Some options of what to have for the initial “living” core:
an individual human
a small group including you and your most careful + [philosophically competent] + [generally competent] friends
current humanity, but with everyone below 160 IQ lifted to 160 IQ
Some options of what artifacts to grant them:
all the usual technologies available in 2026, and textbooks explaining in detail how to make them
cures for the 10,000 or so most important diseases and kinds of aging, plus the understanding and methods needed to create new cures if necessary
a device that lets you save a state of a human and then construct another copy with that state later (e.g. the individual human could save a copy at age 20 and have a policy of constructing a new young clone each time the previous one gets old)
lots of various resources in easily usable form
Some options for “semi-living” components to set up:
various choices of initial governance conventions: some sort of democracy, maintaining and enforcing some sort of system of principles
various choices of initial epistemic infrastructure: a forum, a prediction market system, educational institutions and practices
some convention for resolving disputes
some attempt to tie actions more closely to the best available thinking; maybe implementing stronger conservatism around new technologies and other major actions
some practices meant to help with not going crazy, not getting depressed, not becoming a radical negative utilitarian, and other mental-health stuff
some mechanisms for guarding against the system being subverted
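To make the search space above a bit more concrete, here is a hypothetical sketch of those configurations as a product of option sets (all names and the structure itself are illustrative choices of mine, not anything the lists above commit to):

```python
from dataclasses import dataclass

# Illustrative option sets, paraphrasing the lists above.
CORES = ["individual human", "small careful group", "uplifted humanity"]
ARTIFACTS = ["2026 tech + textbooks", "cures for top diseases/aging",
             "human state-save/restore device", "abundant usable resources"]
SEMI_LIVING = ["governance conventions", "epistemic infrastructure",
               "dispute-resolution convention", "tech conservatism",
               "mental-health practices", "anti-subversion mechanisms"]

@dataclass(frozen=True)
class Configuration:
    core: str
    artifacts: tuple   # some subset of ARTIFACTS
    components: tuple  # some subset of SEMI_LIVING

# For example, pair each choice of living core with every artifact
# and every semi-living component:
configs = [Configuration(core, tuple(ARTIFACTS), tuple(SEMI_LIVING))
           for core in CORES]
print(len(configs))  # 3
```

The interesting question is then which points in this (much larger, once you take subsets and parameters seriously) space don't kill themselves for a long time.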
My guess is that we can identify an initial configuration in this space such that the system probably doesn’t kill itself for a long time (like let’s say for at least a thousand years’ worth of getting stuff done or technological development at the 2025 pace[2]).
For now I mean "construct" conceptually, in the mathematician's sense, not in practice. Though constructing such a system in practice is of course also very interesting and important.
Note that this is a decent amount of development/[doing stuff] despite corresponding to only 1000 years: plausibly more than the sum total of development/[doing stuff] in our galaxy's history so far, given how much things have sped up.
Also see Yudkowsky's world.
Thanks! These are indeed interesting things to think about, and I will!