I mostly agree that this model is right, with the caveat that something like “power” seems to me like a poset rather than a scalar value, and when there are “incomparable”[1] ways to increase power, the identity/self-model of the Pythian entity can break the tie. Metaphorically, the null space of power maximization gives elbow room to the non-Pythian factor. In other words, “power maximization” is underdetermined, so there’s room for other factors to influence the development of a power-maximizing thing non-chaotically.
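A toy sketch of the point (my own framing, not from the comment above): if “power” is a vector of capabilities ordered componentwise, some pairs of options are incomparable under that partial order, and only a separate factor (here, a hypothetical identity weighting) can pick between them.

```python
# Toy illustration: "power" as a vector of capabilities under the
# componentwise partial order, so some option pairs are incomparable.

def dominates(a, b):
    """a >= b in every capability (componentwise partial order)."""
    return all(x >= y for x, y in zip(a, b))

def comparable(a, b):
    return dominates(a, b) or dominates(b, a)

# Two ways to gain power that trade off different capabilities:
option_1 = (3, 1)  # e.g. (compute, influence) -- labels are illustrative
option_2 = (1, 3)

# Pure power-maximization cannot rank these two options:
assert not comparable(option_1, option_2)

# A tie-breaker from the agent's identity/self-model chooses among
# incomparable options, e.g. a (hypothetical) preference weighting:
identity_weights = (0.2, 0.8)
pick = max([option_1, option_2],
           key=lambda v: sum(w * x for w, x in zip(identity_weights, v)))
print(pick)  # the identity factor, not power itself, decided
```

The tie-break rule here is arbitrary; the point is only that *some* non-power factor has to supply it, which is the “elbow room” in the null space of power maximization.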
So far as I can currently recall, every single time an AI company promises that it will do an expensive safety thing later, it reneges as soon as the bill comes due.
The one exception: Demis Hassabis turning down higher offers for DeepMind in order to go with Google and an ethics board. In that case, of course, Google just fucked him on the ethics-board promises; but Demis himself kept his word.
https://x.com/allTheYud/status/2026593546241978709
[1] Or just ~equal, because it’s very unclear which one will grant more power.