This is an interesting question, whatever your political bent—there is a noticeable uptick in representation etc. in new media. Whatever the reason for the changes, they happened quite fast, so it’s worth understanding the underlying mechanisms at work, whether you’re for such changes or against them.
Is it on the AI safety forum, though? Turns out it is, though downvoted...
Oh! That was totally unintentional. I didn’t know that could even happen! I don’t even have an AI safety forum account as far as I know. I honestly thought I was just asking Less Wrong.
It’s an old holdover where any LW URL can be turned into an AF URL (because AF is just a subset of LW content). None of it appears on the Alignment Forum frontpage, and it won’t show up in search on the AF site, but still, it’s been on my mind as worth getting rid of for a while...
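To make the mapping concrete, here is a minimal sketch of the URL rewrite being described—assuming, as the comment says, that AF content is a subset of LW content sharing the same post paths, so only the hostname changes (the function name and hostname choice here are illustrative, not part of either site’s API):

```python
from urllib.parse import urlparse, urlunparse

def lw_to_af(url: str) -> str:
    """Rewrite a lesswrong.com URL to its alignmentforum.org twin.

    Assumes the two sites share post paths and differ only in hostname,
    per the "old holdover" described above.
    """
    parts = urlparse(url)
    # Swap only the network location; path, query, etc. are kept as-is.
    return urlunparse(parts._replace(netloc="www.alignmentforum.org"))

print(lw_to_af(
    "https://www.lesswrong.com/posts/AajbPPe4EomHcszkb/"
    "where-s-the-economic-incentive-for-wokism-coming-from"
))
```

Whether the resulting AF page actually exists depends on the post being in the AF subset, which is exactly the quirk being discussed.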
I noticed I was confused. The world didn’t make sense to me at this spot. I could guess at some pieces, like “Okay, maybe wokism is actually just really super popular”, but that didn’t account for all the pieces I was observing.
I imagined that Less Wrong would be a good place to ask people about this in a way relatively unlikely to swing into culture war baloney. I just want to understand how the world is shaped.
Why is it worth a frontpage on the AI safety forum?
I… have no idea. I didn’t do that. Or if I did it was purely by accident. I wouldn’t have guessed this belonged at all in anything having to do with AI risk, other than it being about modeling the world, which is generically connected to AI risk in an overall kind of way.
Oh! Ha! Okay. Well, I view Less Wrong as the rationality forum of the world, which happens to include a lot of examination of AI safety/risk. If there were a division within LW between “AI” and “not AI”, I totally would have put this in the “not AI” category.
The question “why do companies do something seemingly unprofitable” is, in my opinion, worth asking.
The answers seem to be one of:
- it actually is profitable, because...
- a principal-agent problem: the people doing the thing are not aligned with the company (and the company will not replace them, because...)
Both seem likely; I wish I could figure out which one is true (possibly both).
Hmm. That link goes to LessWrong for me—this is the AI forum one: https://www.alignmentforum.org/posts/AajbPPe4EomHcszkb/where-s-the-economic-incentive-for-wokism-coming-from
Interesting
Looks like the forum software has edited your alignmentforum link to point to lesswrong.com. The text of your link says “alignmentforum”, and you can copy-paste it into the URL bar to get there, but if you mouse over the link or click it, it takes you to the lesswrong.com post. Fascinating.
Happy to answer.
“Culture war baloney”
The modern history of the term “culture war” is interesting. I suspect it might make enlightening reading.