You are arguing that it is tractable to achieve predictably positive long-term effects using something that is known to be imperfect (heuristic ethics). For that to make sense, you would have to justify why small imperfections cannot grow into large problems. It’s like saying that because you believe there is only a small flaw in your computer security, nobody could ever break in and steal all of your data. That wouldn’t be true even if you knew what the flaw was, and with heuristic ethics you don’t even know that.
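To make the compounding worry concrete, here’s a toy calculation (the numbers are purely illustrative assumptions, not estimates of anything real): a small per-decision flaw rate does not stay small once the heuristic is applied many times.

```python
# Toy model (made-up numbers): a heuristic that errs with a small
# probability on each decision, applied independently many times.
def cumulative_failure(p_error: float, n_decisions: int) -> float:
    """Probability of at least one error across n independent decisions."""
    return 1 - (1 - p_error) ** n_decisions

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} decisions: {cumulative_failure(0.001, n):.3%} chance of >= 1 error")
# A 0.1% per-decision flaw gives ~9.5% by 100 decisions, ~63% by 1,000,
# and ~99.995% by 10,000: small imperfections don't stay small at scale.
```

And with heuristic ethics, we don’t even know what the per-decision flaw rate is, so we can’t run this arithmetic in our favour.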
That doesn’t follow. It’s more like saying “password systems help protect accounts” even though you know those systems are imperfect. Sure, people keep reusing the same passwords and choosing guessable ones, but that doesn’t mean that dropping passwords entirely and taking people at their word for who they are would be superior (in most systems that need accounts).
The minimal standard is “using this system / heuristic is better than not using it”, not “this system / heuristic is flawless and solves all problems ever”.
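To put toy numbers on that minimal standard (the rates below are made-up assumptions, not real attack statistics): even a leaky control can cut risk well below the no-control baseline.

```python
# Toy comparison (all rates are illustrative assumptions, not measured data):
# an imperfect control can still beat no control by a wide margin.
def p_compromise(p_attempt_succeeds: float, n_attempts: int) -> float:
    """Probability of compromise given n independent attack attempts."""
    return 1 - (1 - p_attempt_succeeds) ** n_attempts

n = 20                                 # hypothetical number of attack attempts
no_password = p_compromise(1.0, n)     # anyone claiming to be you gets in
weak_password = p_compromise(0.02, n)  # assume 2% of attempts guess correctly
print(f"no password:   {no_password:.0%}")    # 100%
print(f"weak password: {weak_password:.0%}")  # ~33%
```

Imperfect, clearly, but also clearly better than certain compromise, and that’s the only bar the heuristic is being held to.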
In a general discussion of ethics your replies are very sensible. When discussing AI safety, and in particular P(doom), they are not. Your analogy does not work: it is effectively saying that trying to prevent AI from killing us all by blocking its internet access with a password is better than not using a password. But an AI that is a threat to us will not be stopped by a password, and neither will it be stopped by an imperfect heuristic. If we don’t have 100% certainty, we should not build it.
Sorry, I’d like to understand you, but I don’t yet. What claim do you think I’m making that seems totally misguided?