I think that working towards friendly AI will in most cases lead to negative utility scenarios that vastly outweigh the negative utility of an attempt at creating a simple transformer that turns the universe into an inanimate state.
I think it’s too early to decide this. There are many questions whose answers will become clearer before we have to make a choice one way or another. If eventually it becomes clear that building an antinatalist AI is the right thing to do, I think the best way to accomplish it would be through an organization that’s like SIAI but isn’t too attached to the idea of FAI and just wants to do whatever is best.
Now you can either try to build an organization like that from scratch, or try to push SIAI in that direction (i.e., make it more strategic and less attached to a specific plan). Of course, being lazy, I’m more tempted to do the latter, but your mileage may vary. :)
If eventually it becomes clear that building an antinatalist AI is the right thing to do, I think the best way to accomplish it would be through an organization that’s like SIAI but isn’t too attached to the idea of FAI and just wants to do whatever is best.
Yes.
I, for one, am ultimately concerned with doing whatever’s best. I’m not wedded to doing FAI, and am certainly not wedded to doing 9-researchers-in-a-basement FAI.
I, for one, am ultimately concerned with doing whatever’s best. I’m not wedded to doing FAI, and am certainly not wedded to doing 9-researchers-in-a-basement FAI.
Well, that’s great. Still, there are quite a few problems.
How do I know
… that SI does not increase existential risk by solving problems that can be used to build AGI earlier?
… that you won’t launch a half-baked friendly AI that will turn the world into a hell?
… that you don’t implement some strategies that will do really bad things to some people, e.g. myself?
Every time I see a video of one of you people I think, “Wow, those seem like really nice people. I am probably wrong. They are going to do the right thing.”
But seriously, is that enough? Can I trust a few people with the power to shape the whole universe? Can I trust them enough to actually give them money? Can I trust them enough with my life until the end of the universe?
You can’t even tell me what “best” or “right” or “winning” stands for. How do I know that it can be or will be defined in a way that those labels will apply to me as well?
I have no idea what your plans are for the day when time runs out. I just hope that you are not going to hope for the best and run some not quite friendly AI that does really crappy things. I hope you consider the possibility of rather blowing everything up than risking even worse outcomes.
Let me rephrase my earlier suggestion to be clearer: an organization that’s like SIAI but isn’t too attached to a specific kind of FAI design (which may be too complex and prone to fail in particularly horrible ways), and just wants to do whatever is best.
Do you think SingInst is too attached to a specific kind of FAI design? This isn’t my impression. (Also, at this point, it might be useful to unpack “SingInst” into particular people constituting it.)
Do you think SingInst is too attached to a specific kind of FAI design?
XiXiDu seems to think so. I guess I’m less certain but I didn’t want to question that particular premise in my response to him.
It does confuse me that Eliezer set his focus so early on CEV. I think “it’s too early to decide this” applies to CEV just as well as XiXiDu’s anti-natalist AI. Why not explore and keep all the plausible options open until the many strategically important questions become clearer? Why did it fall to someone outside SIAI (me, in particular) to write about the normative and meta-philosophical approaches to FAI? (Note that the former covers XiXiDu’s idea as a special case.) Also concerning is that many criticisms have been directed at CEV but Eliezer seems to ignore most of them.
Also, at this point, it might be useful to unpack “SingInst” into particular people constituting it.
I’d be surprised if there weren’t people within SingInst who disagree with the focus on CEV, but if so, they seem reluctant to disagree in public so it’s hard to tell who exactly, or how much say they have in what SingInst actually does.
I guess this could all be due to PR considerations. Maybe Eliezer just wanted to focus public attention on CEV because it’s the politically least objectionable FAI approach, and isn’t really terribly attached to the idea when it comes to actually building an FAI. But you can see how an outsider might get that impression...
… that you won’t launch a half-baked friendly AI that will turn the world into a hell?
Hell no.
Can I trust a few people with the power to shape the whole universe? Can I trust them enough to actually give them money?
This is an open problem. See “How can we be sure a Friendly AI development team will be altruistic?” on my list of open problems.
I hope you consider the possibility of rather blowing everything up than risking even worse outcomes.
Blowing everything up would be pretty bad. Bad enough not to encourage the possibility.
“Would you murder a child, if it’s the right thing to do?”
an organization that’s like SIAI but isn’t too attached to the idea of FAI and just wants to do whatever is best.
If FAI is by definition a machine that does whatever is best, this distinction doesn’t seem meaningful.
I always thought CEV was half-baked as a technical solution, but as a PR tactic it is...genius.
Yeah, I thought it was explicitly intended more as a political manifesto than a philosophical treatise. I have no idea why so many smart people, like lukeprog, seem to be interpreting it not only as a philosophical basis but as outlining a technical solution.