Suppose that an AI does not output anything during it’s training phase. Once it has been trained it is given various prompts. Each time it is given a prompt, it outputs a text or image response. Then it forgets both the prompt it was given and the response it outputted.
Then it forgets both the prompt it was given and the response it outputted.
If the memory of this AI is so limited, it seems to me that we are speaking about a narrow agent. An AGI wouldn’t be that limited. In order to execute complex tasks, you need to subdivide the task into sub-tasks. This requires a form of long term memory.
Suppose that an AI does not output anything during it’s training phase. Once it has been trained it is given various prompts. Each time it is given a prompt, it outputs a text or image response. Then it forgets both the prompt it was given and the response it outputted.
How might this AI get out of the box?
If the memory of this AI is so limited, it seems to me that we are speaking about a narrow agent. An AGI wouldn’t be that limited. In order to execute complex tasks, you need to subdivide the task into sub-tasks. This requires a form of long term memory.
On second thoughts...
If someone asks this AI to translate natural language into code, who is to say that the resulting code won’t contain viruses?