But I do have the sense that, at least in the case of party slogans, it is about what the priorities are and who executes them and the detailed implementation is usually a separate question.
I don’t understand why this helps. Who executes a priority, and what exactly a priority is, seem strongly correlated with the space of detailed implementations of a policy. Look at what happened with Drexlerian nanotech: the term got hijacked by people who relabeled their pre-existing work as nanotech in order to obtain resources from the US government that were earmarked for “nanotech”. Why wouldn’t something similar happen for AI not-kill-everyoneism? People argue over what exactly the priority is (“the AI must have Chinese characteristics” vs. “the AI must be rewarded for having Chinese characteristics and obeying the law”) and who executes it (curious, brilliant people who can work on the core of the problem vs. bureaucratic clout-chasers). So what if the detailed implementation is a separate question? The front has already collapsed.
I admit that I don’t see what has made you excited about this idea, and understand if you don’t want to spend the effort conveying it at the moment. I also admit to being confused: I realized that part of where the nanotech-AI analogy might fail is in the pressures US vs. Chinese politicians face, and how the battle over priorities is fought. Another area it might fail is that I don’t know in what context “sloganeering” is done. Who is the audience for this? How does the existence of a dictator like Xi affect things? I’ve not really thought about it.
Another area it might fail is that I don’t know in what context “sloganeering” is done. Who is the audience for this? How does the existence of a dictator like Xi affect things?
This is the crux of the matter, I think: the slogans to which I am pointing are those used inside the Communist Party of China for the purpose of coordinating the party members and bureaucrats, who are the audience. Xi has introduced several of the slogans in current use, and has tried and failed to introduce others. That is to say, they are how the Chinese government talks to itself, and Xi is at the center of the conversation.
I focused on the slogans because I have some clue how this system works, but don’t have a notion about Chinese language in general, or Chinese culture in general, or the technical culture specifically. So all I’ve done here is take the idea “alignment should be more of a priority in China” and the idea “I know one way the Chinese government talks about priorities” and bashed ’em together like a toddler making their dolls kiss.
The challenge is the part that is exciting to me, frankly. Communicating an important problem across cultural lines is hard, and impressive when done well, and provides me a certain aesthetic pleasure. It is definitely not the case that I have analyzed the problem at length, or done similar things before and concluded on priors that this will be an effective method.
Edit: putting the slogans into a more LessWrong context, tifa are directly a solution to the problem described in You Get About Five Words.