I think there is a fair amount of overlap between the epistemic advantages of being a moderate (seeking incremental change from AI companies) and the epistemic disadvantages.
Many of the epistemic advantages come from being more grounded, with tighter feedback loops: if you're doing the moderate-reformer thing, you need to justify yourself to well-informed people who work at AI companies, you get pushback from them, and you have to actually get through to them.
But those feedback loops run through reality as interpreted by people at AI companies. So, to some degree, your thinking will get shaped to resemble theirs. The feedback loops will nudge you toward relying on assumptions they treat as not requiring justification, using framings that resonate with them, accepting constraints they see as binding, and so on. That, in turn, tends to mean seeing the problem and the landscape from something closer to their perspective, and sharing their biases and blind spots.