I wonder if we systematically underestimate the level of rationality of major governments. Historically, they haven’t done that badly. From an article about RAND:
This is not really a relevant example in the context of Vladimir_Nesov’s comment. Certain government-funded groups (often, interestingly, within the military) have on occasion shown decent levels of rationality.
The suggestion he was replying to, however—to “develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that”—requires rational government policy-making and law-making, not merely rare pockets of rationality within government-funded institutions. Rationality of that kind is essentially non-existent in modern democracies.
It’s not adequate to “get governments to mandate that [Friendliness] be implemented in any AI”, because Friendliness is not a robot-building standard—see the rest of my comment. The statement about government rationality was more tangential, about governments doing anything at all concerning such a strange topic, and wasn’t meant to imply that this particular decision would be rational.
“Something like that” could be for a government funded group to implement an FAI, which, judging from my example, seems within the realm of feasibility (conditioning on FAI being feasible at all).