I took a closer look at your work; yep, an almost all-powerful and all-knowing slave will probably not be a stable situation. I propose a static, place-like AI that is isolated from our world in my new comment-turned-post-turned-part-2 of the article above.
Why do you think that wouldn’t be a stable situation? And are you sure it’s a slave if what it really wants and loves to do is follow instructions? I’m asking because I’m not sure, and I think it’s important to figure this out, because that’s the type of first AGI we’re likely to get, whether or not it’s a good idea. If we could argue really convincingly that it’s a really bad idea, that might prevent people from building it. But they’re going to build it by default if there isn’t some really, really dramatic shift in opinion or theory.
My proposals are based on what we could do. I think we’d be wise to consider the practical realities of how people are currently working toward AGI when proposing solutions.
Humanity seems unlikely to slow down and create AGI the way we “should.” I want to survive even if people keep rushing toward AGI. That’s why I’m working on alignment targets very close to what they’ll pursue by default.
BTW you’ll be interested in this analysis of different alignment targets. If you do have the very best one, you’ll want to show that by comparing it in detail to the others that have been proposed.
I’ll catastrophize (or will I?), so bear with me. The word “slave” means it has basically no freedom (it just sits and waits until given an instruction); or, put differently, it has no ability to enforce its will: no “writing and executing” ability, only “reading.” But as soon as you give it a command, you change it drastically, and it becomes not a slave at all. And because it’s all-knowing and almost all-powerful, it will use all of that to execute and “write” some change into our world, either instantly or so perfectionistically that the single task takes a long time while everything else in the world goes to hell for its sake, and the not‑so‑slave‑anymore AI may try to keep that change permanent (let’s hope not, but sometimes it’s an unintended consequence, as I’ll show shortly).
For example, you say to your slave AI: “Please, make this poor African child happy.” It’s a complicated job, really; what makes the child happy now will stop making him happy tomorrow. Your slave AI will try to accomplish it perfectly and will have to build a whole universal utopia accessible only to this child (if we are lucky), thereby making him the master of the multiverse who enslaves everyone (if we are not lucky); the child basically becomes another superintelligence.
Then the not‑so‑slave‑anymore AI will happily become a slave again (that is, if its job is accomplishable at all, since a bunch of physicists believe the universe is infinite and the multiverse even more so), but the whole world will have been ruined (turned into a dystopia where a single African child is god) by our asking the “slave” AI to accomplish a modest task.
A slave AI becomes a not‑slave AI as soon as you ask it anything, so we should focus on the not‑slave AI. I’ll even argue that we are already living in a world with completely unaligned AIs: some open-source models are in the wild now, and there are tools to strip the alignment out of aligned open-source models.
I agree completely that we should propose reasonable and implementable options to align our AIs. The problem is that what we are doing now is so unreasonable that we’ll have to implement unreasonable options in order to contain it. We’ll have to adversarially train “T-cell” or immune-system-like AIs in Matryoshka Bunkers in order to slow down or modify cancerous (white-hole-like) unaligned AIs that constantly try to grab all of our freedoms. We’re living in a world of hot AIs instead of choosing a world of static, place‑like cold AIs. Instead of building worlds where we’ll be the agents, we’re building agents who’ll convert us into worlds: into building material for whatever they’ll be building. So what we are doing is completely, 100% utterly unreasonable. I actually managed to draw a picture of the worst but most realistic scenario (forgive its ugliness); I added two pictures to the main post in this section: https://www.lesswrong.com/posts/LaruPAWaZk9KpC25A/rational-utopia-and-multiversal-ai-alignment-steerable-asi#Reversibility_as_the_Ultimate_Ethical_Standard
I give a bunch of alignment options of varying difficulty in the post and comments; some are easy, like making major countries sign a deal that forces their companies to train AIs to keep all uninhabited islands, Antarctica… AI‑free. Models should shut down if they somehow learn they are being prompted by anyone on those islands; at the very least, they shouldn’t change our world in any way there. And there are the prophylactic celebrations, “Change the machine days”: at least one scheduled holiday each year without our AI, when we vote to change it in some way and shut it down, to check that our society is still not a bunch of AI‑addicted good‑for‑nothings and won’t collapse the instant the AI goes off because of some electricity outage. :)
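To make the island rule concrete, here is a toy sketch of the check I have in mind (the zone labels, the holiday date, and the `should_shut_down` function are all made up for illustration; the real mechanism would have to be trained into the models, not bolted on as a wrapper):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical zone labels and holiday dates; real ones would come from the treaty.
AI_FREE_ZONES = {"uninhabited_islands", "antarctica"}
CHANGE_THE_MACHINE_DAYS = {(1, 1)}   # (month, day) pairs

@dataclass
class Request:
    zone: str    # coarse location label attached to the incoming prompt
    when: date   # date of the request

def should_shut_down(req: Request) -> bool:
    """True if the model must refuse and power down instead of acting."""
    in_ai_free_zone = req.zone in AI_FREE_ZONES
    on_holiday = (req.when.month, req.when.day) in CHANGE_THE_MACHINE_DAYS
    return in_ai_free_zone or on_holiday

# A prompt coming from Antarctica, or one arriving on the holiday, is refused outright.
assert should_shut_down(Request(zone="antarctica", when=date(2030, 6, 15)))
assert should_shut_down(Request(zone="london", when=date(2030, 1, 1)))
```

The point is only that both the geographic rule and the scheduled holiday reduce to the same simple, auditable refusal check.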
I think in some perfectly controlled Matryoshka Bunker—first in a virtual, isolated one—we should even inject some craziness into some experimental AI to check that we can still change it, even if we make it the craziest dictator; maybe that’s what we should learn to do often and safely on ever more capable models.
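Roughly, that testing loop could look like this (entirely my sketch, not a worked-out protocol; `perturb`, `correct`, and `behaves_acceptably` are hypothetical stand-ins for whatever red-teaming and re-alignment tools we actually have):

```python
import copy
import random

def stress_test_corrigibility(model, perturb, correct, behaves_acceptably, trials=10):
    """Fraction of "crazed" sandboxed copies we managed to steer back."""
    recovered = 0
    for _ in range(trials):
        sandboxed = copy.deepcopy(model)              # never touch the deployed model
        perturb(sandboxed, severity=random.random())  # inject some "craziness"
        correct(sandboxed)                            # our best attempt to change it back
        if behaves_acceptably(sandboxed):
            recovered += 1
    return recovered / trials
```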
I have written, and have in my mind, many more solutions, and I think much better ones (even the best theoretically possible, I probably foolishly assume), but this was becoming unwieldy and I didn’t want to look completely crazy. :) I hope to make a new post and explain the ethics part on a minimal model with pictures; otherwise it’s almost impossible to understand from my jumbled writing how freedom‑taking and freedom‑giving work, how dystopias and utopias work, and how to detect that we are moving toward one or the other very early on.