Ah yes, I see what you mean. This seems like trivial semantic nitpicking to me, but I will go ahead and update the wording of the sentence to allow for the fact that I had some tiny amount of belief that a very crude AutoGPT approach would work and thus seeing it not immediately work means that my overall beliefs were infinitesimally altered by this.
Yeah. I had thought that you used the wording “don’t update me at all” instead of “aren’t at all convincing to me” because you meant something precise that was not captured by the fuzzier language. But on reflection it’s probably just that language like “updating” is part of the vernacular here now.
Sorry, I had meant that to be a one-off side note, not a whole thing.
The bit I was actually surprised by was that you seem to think there was very little chance the crude approach could have worked. In my model of the world, “the simplest thing that could possibly work” ends up working a substantial amount of the time. If your model of the world says the approach of “just piling more hacks and heuristics on top of AutoGPT-on-top-of-GPT-4 will get it to the point where it can come up with additional helpful hacks and heuristics that further improve its capabilities” almost certainly won’t work, that’s a bold and interesting advance prediction in my book.
My guess at whether GPT-4 can self-improve at all with a lot of carefully engineered external systems and access to its own source code and weights is a great deal higher than my guess that AutoGPT would self-improve. The failure of AutoGPT says nothing[1] to me about that.
[1] In the usual sense of not being anywhere near worth the effort to include it in any future credences.