I would be really hesitant to use these findings to frame Russian Communism, especially if we conclude that ‘peasant envy’ was a contributing factor in how horrific the Stalinist regime was. The Russian Revolution was won by workers in the major industrial cities; peasant uprisings, while present, had nowhere near the same effect as working-class militancy on the Soviet government and the state of Russian society.
The wretched state of the peasantry and their attitudes toward one another, if anything, are evidence that the early Russian Marxists were right to dismiss the peasantry as an ineffectual instrument of revolutionary change (the opposite of the leftist mainstream at the time). Since the support and participation of the peasantry in the revolution mattered far less than that of the fledgling working class, and since the collectivisation measures were imposed on the peasantry by actors outside it (rather than being a product of their own self-emancipation), I don’t think the peasantry had that big an influence on how horrific Communist rule was.
That being said, I agree that comparing the state of the Russian peasantry and revolution to China’s peasant society and its revolution would be a really fruitful task. To my understanding, the Chinese Revolution was based almost entirely in peasant struggle, so sociological factors framed around that class would be especially pertinent.
Hey, I wanted to clarify my thoughts on the concrete AI problem being solved here. No comment on the fantastic grant-making/give-away scheme.
I don’t have much expertise in the mechanics of GPT-3-style systems, but I wonder if there is a more efficient way of providing human-comprehensible intermediaries that expose the workings of the algorithm.
My worry is that many of the annotated thoughts supplied by authors are irrelevant to the actual process the AI goes through to create its output. Asking the machine to produce a line of ‘thoughts’ alongside its final statement is fair play, but this doesn’t seem to solve the problem of creating human-comprehensible intermediaries; instead it gives the AI a pattern-matching/prediction task similar to the one it performs to create the original output. Wouldn’t the ‘thoughts’ the machine creates have no more effect on the process of calculation than the original output does?
This process would still serve the rudimentary function of indirectly shedding more light on the process of calculation, much as a bigger prompt would. Yet, puzzlingly, we in fact want to “get different sensible outputs by intervening on the thoughts”, which indicates we expect thoughts to have an effect on the calculation of the final output. I suppose we could feed the generated thoughts back into the creation of the output, but my intuition suggests this would limit the complexity of the output by shackling its creation to an unnecessary component, the thought.
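For what it’s worth, my picture of “feeding thoughts back in” is just the autoregressive one: every token already generated becomes part of the context that later tokens are conditioned on, so thought tokens emitted before the answer would mechanically influence it. A toy sketch of that idea (this is nothing like GPT-3’s actual machinery; every name and rule below is made up for illustration):

```python
# Toy illustration of autoregressive conditioning: "thought" tokens are
# appended to the context before the answer is produced, so intervening
# on the thoughts changes the answer. The model here is a fake
# one-rule stand-in, not a real language model.

def next_token(context):
    # Stand-in for a language model's next-token choice: a trivial
    # rule that only looks at the most recent token in the context.
    rules = {
        "thought:cautious": "maybe",
        "thought:confident": "definitely",
    }
    return rules.get(context[-1], "answer")

def generate(prompt_tokens, thought_tokens):
    # The thoughts join the context ahead of the final answer, so the
    # answer is conditioned on them.
    context = list(prompt_tokens) + list(thought_tokens)
    return next_token(context)

a = generate(["question"], ["thought:cautious"])
b = generate(["question"], ["thought:confident"])
# Same prompt, different thoughts -> different outputs.
```

On this picture, thoughts are not an inert side-channel: editing them really does steer the output, which seems to be what the project is counting on.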
I say intuition because, again, I have little knowledge of the operation of this algorithm. Most of my musings here are just guesses!
That being said, it seems to me that another way of tackling this problem is to identify the processes the algorithm already uses to create the output, and then find data that expresses those processes with human-compatible annotations. Instead of imposing another method of calculation in the form of thoughts, why not just make the existing method more comprehensible?
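The shape I have in mind is something like interpretability probing: rather than asking the model to emit thoughts, train a simple classifier that maps the model’s internal activations to human-readable labels learned from annotated data. A minimal sketch, with all activations, labels, and probe weights fabricated purely for illustration:

```python
# Hypothetical sketch of annotating the *existing* process: a linear
# probe scores human-readable labels against a hidden activation
# vector. Every number and label here is invented for illustration.

def linear_probe(activation, weights):
    # Score each candidate label by a dot product with the activation,
    # and report the best-scoring label.
    scores = {
        label: sum(a * w for a, w in zip(activation, ws))
        for label, ws in weights.items()
    }
    return max(scores, key=scores.get)

# Pretend hidden state taken from some layer of the model:
activation = [0.9, 0.1, -0.3]

# Pretend probe weights learned from human-annotated examples:
weights = {
    "recalling a fact": [1.0, 0.0, 0.0],
    "hedging": [0.0, 1.0, 0.5],
}

label = linear_probe(activation, weights)
```

The appeal, if this works at all, is that the label describes what the computation was already doing, rather than adding a new computation whose faithfulness we then have to worry about.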
If I’m missing something frightfully obvious here, or just barking up the wrong tree, please let me know where I’m going wrong!