A few years ago I concluded that exposure to the wider culture is harmful and started partially isolating myself. X and Reddit and 4chan and TikTok seem to me to mess people up, via a combination of rage-bait, fake news, and plain old optimization for engagement. It had not yet occurred to me that the problem is very likely to keep intensifying, although it probably should have; it seems obvious in hindsight.
Isolation looked good for me, and looked like it fit into my picture of The Good. Still, I do think it would be a shame if, in general, nobody could ever again safely associate with anyone outside their bubble.
Thanks for pointing out this issue. As ever, I hope someone develops defensive tech that decisively settles the influence/cognitive-security arms race in favor of defense.
...which probably would end up with people preaching that the Earth is 6000 years old in the year 3000. Which seems bad. I don’t know what outcome is even desirable here.
So your position is that mankind should create a mechanism which grades content based on whether it pushes users towards actual degradation or obvious falsehoods, not on whether it helps them discover their actual preferences in some way, and that it should protect users from the former but not the latter.
What I don’t understand is the following.
Is it possible that the list of obvious falsehoods ends up including something actually true? How could one prevent it?
We are somehow to tell apart discovered preferences and artificially induced ones. Questions like "what exactly are the latter preferences" and "what should be done with them" are likely part of the ongoing culture war.
Yeah, I guess that’s what I was alluding to when I wrote "I don’t know what outcome is even desirable here"; my intuitions seem to produce nigh-impossible requirements, which suggests a confused ontology embedded in said intuitions.
Feels like there’s a problem out there (increasingly powerful influence tech), but I haven’t a clue what to do about it.