What level of “potential” is required here? A human baby has a certain level of potential to reach whatever threshold you’re comparing it against, provided it’s fed, kept warm, not killed, and so on. A pig also has a certain level of potential, provided we tweak its genetics.
If we develop AI, then any given pile of sand (the raw material for silicon chips) has just as much potential to reach “human level” as an infant. I would be amused if improved engineering knowledge gave beaches moral weight (though I’m not completely opposed to the idea).
Your proposed category, “can develop to contain morally relevant quality X,” tends to fail on the same edge cases as whatever morally relevant quality it’s replacing.
Timeless decision theory, from what I understand of it, bears a remarkable resemblance to Kant’s Categorical Imperative. I’m re-reading Kant right now (it’s been half a decade), but my primary recollection is that the categorical imperative boils down to “make decisions not on your own behalf, but as though you decided for all rational agents in your situation.”
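To make the resemblance concrete, here is a toy sketch of Newcomb’s problem (my own illustration, not TDT’s actual formalism; the function names and payoffs are invented for the example). The TDT-style rule evaluates each action as if choosing it fixed the output of every agent running the same decision procedure, much as the categorical imperative asks you to decide for all rational agents at once:

```python
def payoff(action, prediction):
    """Standard Newcomb payoffs: the predictor fills the opaque box
    ($1M) only if it predicted one-boxing; the transparent box
    always holds $1k."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    return opaque + (transparent if action == "two-box" else 0)

def tdt_style_choice():
    # Score each action under the assumption that choosing it means
    # every instance of this algorithm, including the predictor's
    # model of it, outputs the same action.
    return max(["one-box", "two-box"],
               key=lambda a: payoff(a, prediction=a))

def cdt_style_choice(fixed_prediction):
    # Causal reasoning treats the prediction as already settled, so
    # two-boxing dominates no matter what the predictor did.
    return max(["one-box", "two-box"],
               key=lambda a: payoff(a, fixed_prediction))

print(tdt_style_choice())           # one-box
print(cdt_style_choice("one-box"))  # two-box
print(cdt_style_choice("two-box"))  # two-box
```

The Kant-flavored move lives entirely in tdt_style_choice: it refuses to evaluate an action while holding everyone else’s behavior (including the predictor’s) fixed.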
Some related criticisms of EDT are weirdly reminiscent of Kant’s critiques of moral systems that rest on predicting the outcomes of your actions. I say “weirdly reminiscent of” rather than “reinventing” deliberately; even so, I try not to be too quick to dismiss older thinkers.
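For concreteness, here is my gloss on the structure being criticized (a standard textbook formulation, not anything specific from Kant): EDT scores an action by the outcomes that choosing it would be evidence for,

$$V_{\mathrm{EDT}}(a) = \sum_{o} P(o \mid a)\,U(o),$$

i.e., it ranks actions by conditioning on them rather than by intervening on them causally, which is the “predict the outcome of your action, then rank” pattern at issue.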