As I said on SL4, where you also posted this, your point 3 is simply wrong:
Most states in human history, including most now existing, are pretty much the definition of unfriendly. That there has never been a case yet where people’s rule-of-thumb attempts to “describe good, safe human life using a system of rules” haven’t led to the death, imprisonment and in many cases torture of many, many people, seems to me one of the stronger arguments against a rule-based system.
But the creation of state laws was, at least partly, an attempt to create a friendly state. And we should use the millions of human-days of work spent on the same goal: creating a goal system for a nonhuman object.
Friendliness is a characterization of goal systems, not of states of the world.
I know. My point was that were the OP's statement true (that states are an attempt to create Friendly AI), which I don't think it is, it would be a blatantly, obviously failed attempt.
We should learn from this attempt.
This sums up my thoughts:

CIA Superior: What did we learn, Palmer?
CIA Officer: I don't know, sir.
CIA Superior: I don't fuckin' know either. I guess we learned not to do it again.
CIA Officer: Yes, sir.

-- Burn After Reading
Agreed. If we could define a friendly AI now, then by point 3 we would also already be able to define a perfectly functional and just state (even if putting it into practice hadn’t happened yet).