DE-FACTO UPLOADING
Imagine for a moment you have a powerful AI that is aligned with your particular interests.
In areas where the AI is uncertain of your wants, it can query you about your preferences in a given situation. But these queries will be “expensive,” in the sense that you are a meat computer that runs slowly and is difficult to copy. So to carry out your interests at any kind of scale and speed, the AI will need to develop an increasingly robust model of your preferences.
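The query-when-uncertain dynamic above can be sketched as a toy active-learning loop. This is purely illustrative, not any real system: the `PreferenceModel` class, the uncertainty rule (just "ask until you've seen enough consistent answers"), and the stand-in `human` function are all hypothetical names I've made up for the sketch.

```python
class PreferenceModel:
    """Toy sketch: learn a human's context-dependent preferences,
    querying the slow, expensive human only while uncertain."""

    def __init__(self, query_threshold=3):
        # How many observations per context before we trust our model.
        self.query_threshold = query_threshold
        self.observations = {}  # context -> list of observed choices
        self.queries_made = 0

    def decide(self, context, human):
        seen = self.observations.setdefault(context, [])
        if len(seen) < self.query_threshold:
            # Uncertain: pay the cost of querying the human directly.
            choice = human(context)
            self.queries_made += 1
            seen.append(choice)
            return choice
        # Confident: act from the learned model (majority choice so far).
        return max(set(seen), key=seen.count)


# A stand-in "human" whose preferences depend on context.
def human(context):
    return "tea" if context == "morning" else "coffee"


model = PreferenceModel(query_threshold=3)
decisions = [model.decide("morning", human) for _ in range(10)]
# Only the first few decisions actually query the human; the rest
# come from the AI's internal model of the human's preferences.
```

The point of the sketch is the asymptotic behavior: as the model accumulates context, the fraction of actions requiring a live human query goes to zero, and everything runs off the internal model instead.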
Human values are context-dependent (see shard theory and other posts on this topic), so accurately modeling one’s preferences across a broad range of environments will require capturing a large portion of one’s memories and experiences, since those are what shape how one responds to any given stimulus.
In the limit, this internal “model” in the AI will be an upload. So my current model is that we just get brain uploading by default if we create aligned AGI.