I get that your argument is essentially as follows:
1.) Solving the problem of what values to put into an AI, even assuming the other technical issues are solved, is impossibly difficult in real life.
2.) To demonstrate that impossible difficulty, here’s a much kinder version of reality in which the problem is still impossible.
I don’t think you accomplished 2, and it requires me to already accept that 1 is true, which I think it probably isn’t; I suspect most people would agree with me on this point, at least in principle.
Which of these four things do you disagree with?
I don’t disagree with any of them. I doubt there’s a convincing argument that could get me to disagree with any of those as presented.
What I am not convinced of is that, given that all those assumptions are true, certain doom necessarily follows, or that there is no possible humanly tractable scheme that avoids doom in whatever time we have left.
I’m not clever enough to figure out what the solution is, mind you, nor am I especially confident that someone else necessarily will. Please don’t confuse me for someone who doesn’t often worry about these things.
What I am not convinced of is that, given that all those assumptions are true, certain doom necessarily follows, or that there is no possible humanly tractable scheme that avoids doom in whatever time we have left.
OK, cool. I mean, “just not building the AI” is a good way to avoid doom, and that still seems at least possible, so we’re maybe on the same page there.
And I think you got what I was trying to say: solving 1 and/or 2 can’t be done iteratively or by patching together a huge list of desiderata. We have to solve philosophy somehow, without superintelligent help. As I say, that looks like the harder part to me.
Please don’t confuse me for someone who doesn’t often worry about these things.
I promise I’ll try not to!