This is totally misguided. If heuristics worked 100% of the time they wouldn’t be rules of thumb, they’d be rules of nature. We only have to be wrong once for AI to kill us.
Sorry, I’d like to understand you but I don’t yet; what claim do you think I’m making that seems totally misguided, please?
You are arguing that it is tractable to produce predictable, positive long-term effects using something that is known to be imperfect (heuristic ethics). For that to make sense, you would have to justify why small imperfections cannot possibly grow into large problems. It’s like saying that because you believe there is only a small flaw in your computer security, nobody could ever break in and steal all of your data. That wouldn’t be true even if you knew what the flaw was — and with heuristic ethics, you don’t even know that.