“Not optimized to be convincing to AI researchers” ≠ “looks like fraud”. “Optimized to be convincing to policymakers” might involve research that clearly demonstrates some property of AIs/ML models which is basic knowledge for capability researchers (who have already come up with rationalizations for why it’s totally fine) but isn’t well-known outside specialist circles.

A basic example: ML models are black boxes produced by an automated training process we don’t understand, rather than manually coded symbolic programs. This isn’t as well-known outside ML communities as one might think, and non-specialists are frequently shocked when they properly grasp it.
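A toy sketch of that contrast (not from the original discussion; the rule, data, and model below are made up purely for illustration): a hand-written symbolic program is readable line by line, while a trained model’s “logic” is just arrays of learned numbers.

```python
# Illustrative only: hand-coded rule vs. an ML model trained on made-up data.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Manually coded symbolic program: every decision step is written down and readable.
def approve_loan(income: float, debt: float) -> bool:
    return income > 50_000 and debt / income < 0.4

# ML model: behavior emerges from an automated training process on (here, synthetic) data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                    # made-up features
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)    # made-up labels
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

# The resulting "program" is just matrices of weights; nothing in them reads
# like the if-statement above.
print([w.shape for w in model.coefs_])   # e.g. [(2, 16), (16, 1)]
print(model.predict([[1.0, -1.0]]))      # a decision, with no human-legible rule behind it
```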
What kind of “research” would demonstrate that ML models are not the same as manually coded programs? Why not just link to the Wikipedia article for “machine learning”?