Edit: this comment is no longer relevant because the text it talks about was removed.
What do you mean by “I am not sure OpenPhil should have funded these guys”? Edit for context: OpenPhil funded Epoch where they previously worked, but hasn’t funded Mechanize where they currently work. Are you joking? Do you think it’s bad to fund organizations that do useful work (or that you think will do useful work) but which employ people who have beliefs that might make them do work that you think is net negative? Do you have some more narrow belief about pressure OpenPhil should be applying to organizations that are trying to be neutral/trusted?
I think it’s probably bad to say stuff (at least on LessWrong) like “I am not sure OpenPhil should have funded these guys” (the image is fine satire I guess) because this seems like the sort of thing which yields tribal dynamics and negative polarization. When criticizing people, I think it’s good to be clear and specific. I think “humor which criticizes people” is maybe fine, but I feel like this can easily erode epistemics because it is hard to respond to. I think “ambiguity about whether this criticism is humor / meant literally etc” is much worse (and common on e.g. X/twitter).
Tbh, I just needed some text before the image. But I have negative sentiment for both Epoch and OpenPhil. From my perspective, creating benchmarks to measure things adjacent to RSI is likely net negative, and teetering on the edge of gain of function. Such measures so often become a target. And it should not come as a surprise, I think, that key Epoch people just went off and actively started working on this stuff.
As in, you think FrontierMath is (strongly) net negative for this reason? Or you think Epoch was going to do work along these lines that you think is net negative?
I mean, I'm not super impressed with their relationship with OpenAI re: FrontierMath. The org has a bad smell to me, but I won't claim to know the whole of what they've done.
I don’t necessarily disagree with what you literally wrote. But also, at a more pre-theoretic level, IMO the sequence of events here should be really disturbing (if you haven’t already been disturbed by other similar sequences of events). And I don’t know what to do with that disturbedness, but “just feel disturbed and don’t do anything” also doesn’t seem right. (Not that you said that.)
I think literally just prepending “Humor:” or “Mostly joking:” would make me think this was basically fine. Or like a footnote saying “mostly a joke / not justifying this here”. Like it’s just good to be clear about what is non-argued-for humor vs what is a serious criticism (and if it is a serious criticism of this sort, then I think we should have reasonably high standards for it, e.g. the questions in my comment are relevant).
Idk what the overall policy for LessWrong should be, but this sort of thing does feel scary to me and worth being on guard about.
edit: I just removed it instead.
But it couldn’t be a serious criticism; the Necronomicon hasn’t actually been discovered.