The work of Ashby I’m familiar with is “An Introduction to Cybernetics”, and I’m referring to the discussion in Chapter 11 there. The references you’re giving seem to invoke the “Law” of requisite variety to argue that an AGI has to be relatively complex in order to maintain homeostasis in a complex environment, but that isn’t the application of the law I have in mind.
From the book:
The law of Requisite Variety says that R’s capacity as a regulator cannot exceed R’s capacity as a channel of communication.
In the form just given, the law of Requisite Variety can be shown in exact relation to Shannon’s Theorem 10, which says that if noise appears in a message, the amount of noise that can be removed by a correction channel is limited to the amount of information that can be carried by that channel.
Thus, his “noise” corresponds to our “disturbance”, his “correction channel” to our “regulator R”, and his “message of entropy H” becomes, in our case, a message of entropy zero, for it is constancy that is to be “transmitted”. Thus the use of a regulator to achieve homeostasis and the use of a correction channel to suppress noise are homologous.
and
A species continues to exist primarily because its members can block the flow of variety (thought of as disturbance) to the gene-pattern, and this blockage is the species’ most fundamental need. Natural selection has shown the advantage to be gained by taking a large amount of variety (as information) partly into the system (so that it does not reach the gene-pattern) and then using this information so that the flow via R blocks the flow through the environment T.
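For reference, the quantitative form of the law that sits behind both quotes (my paraphrase of the Chapter 11 result, in the simplest case where R’s response is a determinate function of the disturbance D) bounds the entropy that reaches the essential variables E:

$$H(E) \;\ge\; H(D) - H(R)$$

Whatever entropy H(D) the disturbances carry, the residual entropy H(E) can only be driven down by as much capacity H(R) as the regulator / correction channel itself has.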
That last quote makes clear, I think, what I have in mind: the environment is full of advanced AIs, they provide disturbances D, and in order to regulate the effects of those disturbances on our “cognitive genetic material” there is some requirement on the “correction channel” (see the sketch below). Maybe this seems a bit alien to the concept of control. There’s a broader set of ideas I’m toying with, which could be summarised as something like “reciprocal control”: channels of communication / regulation going in both directions, from human to machine and vice versa.
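To make “some requirement on the correction channel” concrete, here’s a minimal sketch of Ashby’s regulation game in Python (my construction, not code from the book; the outcome table T(d, r) = (d + r) mod n is an illustrative choice with full variety in every row). Brute-forcing the best regulator policy recovers the multiplicative form of the law, V(E) ≥ V(D) / V(R):

```python
import itertools

def best_outcome_variety(n_disturbances, n_moves, n_outcomes):
    """Smallest achievable variety of outcomes, found by brute force
    over all regulator policies (one move chosen per disturbance)."""
    # Environment table T: disturbance d plus regulator move r gives
    # an outcome. (d + r) mod n is an illustrative choice, not the
    # book's, with full variety in every row and column.
    def T(d, r):
        return (d + r) % n_outcomes

    best = n_outcomes
    for policy in itertools.product(range(n_moves), repeat=n_disturbances):
        outcomes = {T(d, policy[d]) for d in range(n_disturbances)}
        best = min(best, len(outcomes))
    return best

# Law of Requisite Variety, multiplicative form: V(E) >= V(D) / V(R);
# only variety in the regulator can destroy variety in the disturbance.
for v_r in (1, 2, 3, 6):
    v_e = best_outcome_variety(n_disturbances=6, n_moves=v_r, n_outcomes=6)
    print(f"V(D)=6, V(R)={v_r}: best achievable V(E)={v_e} (bound: {6 // v_r})")
```

A regulator with too little variety of its own cannot stop disturbance variety from leaking through to the thing being protected, no matter how cleverly its policy is chosen.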
The Queen’s Dilemma was a little piece of that reciprocal-control picture: it attempts to illustrate the bi-directional control flow by having the human control the machine (by setting its policy, say) and the machine control the human (in an emergent fashion, that being the dilemma).