A glider flies, but without self-propulsion it doesn’t go very far. Would seeing a glider land before traveling a long distance update you against the possibility of fixed-wing flight working? It might, but it needn’t. Someone comes along and adds an engine and propeller, and all of a sudden the thing can really fly. With the addition of one extra component you update all the way to “fixed-wing flight works.”
It’s the same thing here. Maybe these current systems are relatively good analogues of what will later be RSI-ing AGI and all they’re missing right now is an engine and propeller. If someone comes along and adds a propeller and engine and gets them really flying in some basic way, then it’s perfectly reasonable to update toward that possibility.
(Someone please correct me if my logic is wrong here.)
If I had never seen a glider before, I would think there was a nonzero chance that it could travel a long distance without self-propulsion. So if someone runs the experiment of “see if you can travel a long distance with a fixed-wing glider and no other innovations”, I could either observe that it works, or observe that it doesn’t.
If you can travel a long distance without propulsion, that obviously updates me very far in the direction of “fixed-wing flight works”.
So by conservation of expected evidence, observing that a glider with no propulsion doesn’t make it very far has to update me at least slightly in the direction of “fixed-wing flight does not work”. Because otherwise I would expect to update in the direction of “fixed-wing flight works” no matter what observation I made.
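The conservation-of-expected-evidence argument above can be sketched with a toy Bayes calculation (the prior and likelihoods below are made-up illustrative numbers, not anything from the thread):

```python
# H = "fixed-wing flight works". All numbers are hypothetical.
p_h = 0.5                    # prior P(H)
p_far_given_h = 0.6          # P(glider travels far | H)
p_far_given_not_h = 0.1      # P(glider travels far | not H)

# Marginal probability of observing the glider travel far
p_far = p_h * p_far_given_h + (1 - p_h) * p_far_given_not_h

# Posteriors after each possible observation (Bayes' rule)
post_if_far = p_h * p_far_given_h / p_far
post_if_short = p_h * (1 - p_far_given_h) / (1 - p_far)

# Conservation of expected evidence: the probability-weighted
# average of the posteriors equals the prior.
expected_posterior = p_far * post_if_far + (1 - p_far) * post_if_short
assert abs(expected_posterior - p_h) < 1e-12

# So if "travels far" would raise P(H) above the prior,
# "lands early" must lower P(H) below it.
assert post_if_far > p_h > post_if_short
```

The assertions make the point concrete: you cannot expect to update toward “fixed-wing flight works” under every possible observation, so the short flight must count, at least slightly, against it.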
Note that OP said “does not update me at all”, not “does not update me very much”, and the phrase “update me” implies the strong, Bayesian-evidence sense of the words. This is not a nit I would have picked if OP had said “I don’t find the failures of AutoGPT and friends to self-improve to be at all convincing that RSI is impossible.”