“If illegible safety problems remain when we invent transformative AI, legible problems mostly just give an excuse to deploy it”
“Legible safety problems mostly just burn timeline in the presence of illegible problems”
Something like that