I wonder if MIRI’s General Staff or Advisors deal with issues like this.
Your last point was interesting. I tried making a few narrow comparisons with other fields that matter to people emotionally and physically, e.g. cancer research and poverty charities. Even on a cursory glance, quacks, deceit, and falsification seem present in those areas. So I suppose that sort of thing is possible in AI safety too.
Though I guess the people involved in AI safety would try much harder to lock out people like that, or to publicly challenge people who have no clue what they're saying. However, it's possible that some group might emerge that promotes shaky ideas which gain traction.
Though I think the scrutiny and judgement of those in the field would cut down on things like that.
By the way, if OpenAI had been suggested before Musk got involved, it would likely have been regarded as exactly such a shaky idea.
Many people do regard OpenAI as a shaky idea.
Do you mean the whole field of AI would regard OpenAI as a shaky idea before Musk, or just safety-conscious AI researchers?
I was speaking about safety researchers.
In that case, yeah, it's still shaky, albeit less so than if Musk weren't involved.