I agree that Remmelt seems to have gone somewhat off the deep end
Could you be specific here?
You are sharing a negative impression (“gone off the deep end”) without saying what it is based on. That puts me and others in the position of not knowing whether you are, e.g., reacting with a quick, broad-strokes impression, pointing to specific instances of dialogue that I handled poorly and could improve on, or revealing a fundamental disagreement between us.
For example, is it because on Twitter I spoke up against generative AI models that harm communities, and this seems somehow strategically bad? Do you not like the intensity of my messaging? Or do you intuitively disagree with my arguments about AGI being insufficiently controllable?
As is, this is dissatisfying. On this forum, I’d hope[1] there is a willingness to discuss differences in views first, before moving to broadcasting subjective judgements[2] about someone.
[1] Even though that would be my hope, it is no longer my expectation. There is an unhealthy dynamic on this forum: on at least three occasions I have noticed people move to sideline someone with unpopular ideas, without much care.
To give a clear example, someone else listed vaguely dismissive claims about research I support. Their comment lacked factual grounding but still got upvotes. When I replied to point out things they were missing, my reply got downvoted into the negative.
I guess this is a normal social response on most forums. It was naive of me to hope that LessWrong would be different.
[2] This particularly needs to be done with care if the judgement is given by someone seen as having authority (because others will take it at face value), and if the judgement is guarding default notions held in the community (because that supports an ideological filter bubble).
I think many people have given you feedback. It is definitely not because of “strategic messaging”. It’s because you keep making incomprehensible arguments that don’t make any sense and then get triggered when anyone tries to explain why they don’t make sense, while making statements that are wrong with great confidence.
As is, this is dissatisfying. On this forum, I’d hope[1] there is a willingness to discuss differences in views first, before moving to broadcasting subjective judgements[2] about someone.
People have already spent many hours giving you object-level feedback on your views. If that still doesn’t meet the threshold for moving on to discussing judgements, then basically no one can ever be judged (and our community would succumb to Eternal September and die).
It’s because you keep making incomprehensible arguments that don’t make any sense
Good to know that this is why you think AI Safety Camp is not worth funding.
Once a core part of the AGI non-safety argument is put into maths so that it is comprehensible to people in your circle, it will be interesting to see how you respond.