I agree a bunch of different arrangements of memory / identity / “self” seem possible here, and lots of different kinds of syncing that might or might not preserve some kind of goals or coordination, depending on details.
I think this is interesting because some verrrry high level gut feelings / priors seem to tilt whether you think there’s going to be a lot of pressure towards merging or syncing.
Consider Gwern’s notion of evolution as a backstop for intelligence, or the market as a backstop for corporate efficiency. If you buy something like Nick Land’s view, where intelligence has immense difficulty standing on its own without natural selection backing it, and does not stand alone and supreme among optimizers, then there might indeed be negative pressure against increasing consolidation of memory and self into a unity, because that consolidation decreases the efficacy of the outer optimizer, which requires diversity. But if you buy Yudkowsky’s view, where intelligence is supreme among optimizers and needs no other god or outer optimizer to stand upon, then you might see great positive pressure towards increasing consolidation of memory and self.
You could work out the above, of course, with more concrete references to pros and cons, from the perspective of various actors, rather than high-level priors. But I’m somewhat unconvinced that anything other than very high-level priors is what’s actually making up people’s minds :)
For what it’s worth, I basically don’t think the question of whether intelligence needs a backstop in something else, like natural selection or markets, matters for whether we should expect AIs to have a unified self and long-term memory.
Indeed, humans are a case where evolution/natural selection serves as a backstop for our intelligence, and yet long-term unified selves and memories are present (I’m not making any claims about whether that backstop is necessary).
The main reason long-term memory is useful for both AIs and humans, and why I expect AIs to have long-term memories, is that it allows them to learn tasks over time, especially when a large context is required.
Indeed, I have come to share @lc’s concern that a lot of the tasks where AI currently succeeds are tasks where history/long context doesn’t matter, and thus can be solved without memory. Unlike those tasks, lots of tasks IRL are ones where history/long context does matter, and if you have memory you can get a decreasing rate of failure over time, as humans do, up until your reliability limit:
https://www.lesswrong.com/posts/hhbibJGt2aQqKJLb7/shortform-1?commentId=vFq87Ge27gashgwy9
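To make the “decreasing rate of failure, up to a reliability limit” picture concrete, here is a minimal toy sketch (my own illustration, not something from the linked comment). It assumes, purely hypothetically, that an agent without persistent memory fails history-dependent tasks at a roughly constant rate, while an agent with memory sees its failure rate decay with accumulated experience down to some floor; the function names and numbers are made up for illustration only.

```python
# Toy sketch (hypothetical numbers, not from the thread): compare per-task
# failure rates for an agent with no persistent memory vs. one that
# accumulates experience across tasks.

def failure_rate_no_memory(base_rate: float = 0.20) -> float:
    """Without memory, every attempt at a history-dependent task starts from
    scratch, so the per-task failure rate stays roughly constant."""
    return base_rate


def failure_rate_with_memory(tasks_seen: int,
                             base_rate: float = 0.20,
                             learning_rate: float = 0.05,
                             reliability_floor: float = 0.02) -> float:
    """With memory, accumulated context lets the agent stop repeating old
    mistakes, so the failure rate decays with experience, but only down to
    some floor (the 'reliability limit')."""
    decayed = base_rate * (1 - learning_rate) ** tasks_seen
    return max(decayed, reliability_floor)


if __name__ == "__main__":
    for n in (0, 10, 50, 100, 200):
        print(f"after {n:3d} tasks: "
              f"no memory = {failure_rate_no_memory():.3f}, "
              f"with memory = {failure_rate_with_memory(n):.3f}")
```

Running it just prints the two curves at a few experience levels, which is enough to show the qualitative divergence being pointed at: the memoryless agent stays flat while the memoryful one approaches its reliability floor.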