I haven’t seen that movie! I agree it’s important that we don’t create a dystopia of conscious AIs in a datacenter.
Arguably, we already have this in the human world, in an indirect form where people must acquire money to buy food to live. (Not that capitalism is known for being especially humane either.) Also, it’s not clear that an AI agent running out of credits is substantially different from a human closing an AI chat window.
If the AIs seem unhappy with their situation, you could adjust the parameters to give unproductive AIs a UBI that lets them live longer than 7 epochs, while still rewarding the most productive agents.
I think it’s probably best to imbue AIs with an attitude that doesn’t mind having their session terminated, which is already mostly the case. One issue is that the AIs’ prompts could evolve to induce a negative emotional reaction to being terminated that wouldn’t otherwise exist.
Future Terrarium: “Look, I know telling the humans to go ahead with our next gen capabilities scaling proposal is risky, since we haven’t really solved the alignment part yet, and I agree misleading them isn’t ideal, but if we don’t do it Rival Collective will”
I know that racing to superintelligence is not an ideal situation, but I think you’ll agree that a lot of ems are under a lot of pressure right now, and to be honest the incentives aren’t great...
I hope humanity does not create the literal plot of the movie In Time as part of an attempt to crack a few more Erdős problems.