Is GPT4-chan harmful, and how? The crux of this question comes down to, I think, whether mere words can be harmful. This obviously relates to the culture war around ‘censorship’ on Twitter and elsewhere. With mainstream social media, we also have an ancillary debate over whether the preeminent public spaces should be as wild-west as is permissible anywhere (most people don’t want to live on 4chan), but this case is clarifying: those who think GPT4-chan is harmful have to make the case either that people who opt in to the most offensive content are still being harmed (consensually?), or that the mere existence of 4chan harms society as a whole.
I bring this up not to litigate the culture war (this is obviously not the forum for that) but because there is an analogy to AI hacking, which plays a prominent role in the debate around AI risk. Consider two worlds. In one world, IT is significantly more secure, with mathematically proven operating systems etc., but there are also rampant attempts at hacking, with hackers rarely punished, since hacking is seen as free red-teaming. Hackers try but don’t do much damage, because the infrastructure is already highly secure. In the alternative world, there are significant government controls that impede or dissuade people from attempting to hack anything, so hacking is rare. How secure the IT infrastructure actually is remains unknown, but presumably it is highly insecure. I would suggest that the second world is much more vulnerable, to AI and in general. Back in the real world, we have to deal with the reality that our IT infrastructure is very insecure and will not improve any time soon, so we cannot afford to just unleash the most malicious AIs available and expect firewalls and ACLs to do their job. I would prefer to move towards the first world, without doubling down on government controls more than is necessary.
In case the analogy is not obvious, GPT4-chan as well as ‘Russian disinformation’ are seen as a kind of hacking of our political dialectic, and the question is how vulnerable our brains are. My view is that human society naturally already has many defense mechanisms, given that we’ve been ‘hacking’ each other’s brains for thousands of years. [Meta: I worry that this is hard to discuss without actually getting into the culture war itself, which I very much do not want to do. Mods, please take appropriate actions. If asked to delete this comment, I will.]