Hrm… If you’re trying to optimize the external environment relative to present day humans, rather than what we may become, I’m not sure that will work.
What I mean is this: the kinds of improved “basic rules” we want are in large part complicated criteria over “surface abstractions”, and lack lower-level simplicity. In other words, the rules may end up being sufficiently complex that they effectively require intelligence.
Given that, if we DON’T make the interface in some sense personlike, we might end up with the horror of living in a world that’s effectively controlled by an alien mind, albeit one that’s a bit more friendly to us, for its own reasons. Sort of living in a “buddy Cthulhu” world, if you take my point.
You want to improve the basic rules, but would the improvements, taken as a whole, be sufficiently simple that we, as mostly (mentally) unmodified humans, would be able to take those rules into account and optimize within that environment as easily as we do with, say, gravity, EM, etc.?
If we want it to be intuitive and predictable, at least at the point where we’re still cognitively more or less the same as we are now, it might be better for it to at least seem like a person, since we’ve got all sorts of wiring in us that makes it easier for us to reason about people.
I understand why we may not want it to be an actual person, or to even seem like one. But let’s not go all happy death spiral on this. I think there may be a possible downside to keeping it too unpersonlike.
As for the issue of optimizing the external environment before people’s minds, and the tricky questions that raises: when thinking about that sort of thing, I simply start with what kinds of changes I’d want to make in myself, given the opportunity (and a framework/knowledge/etc. that helps me make sure the results would be what I really wanted, rather than basically slapping myself with a monkey’s paw or whatever).