If you have an alternate theory of the likely form of first takeover-capable AGI, I’d love to hear it!
I’m not claiming anything about the first takeover-capable AGI, and I’m not claiming it won’t be LLM-based. I’m just saying that there’s a specific reasoning step that you’re using a lot (current tech has property X, therefore AGI has property almost-X) which I think is invalid (when X is entangled with properties of AGI that LLMs don’t currently have).
Maybe a slightly insulting analogy (sorry): That type of reasoning looks a lot like bad scifi ideas about AI, where people reason like “AI is a program on a computer, programs on computers can’t do {intuition, fuzzy reasoning, logical paradoxes, emotion}, therefore AI will be {logical, calculator-like, vulnerable to paradoxes, not understand emotion, etc.}”. The reasoning step doesn’t work, because it’s focusing on the “logical program” part over the “AGI” part. I think you’re focusing too much on the “LLM-based” part of “LLM-based AGI”, even in cases where the “AGI” part tells you much more.
(We’re having two similar discussions in parallel, so I’m responding to this in a way that might be useful to other people, but I don’t expect it to be useful to you, since I’ve already said this in the other discussion).
There are a lot of merits to avoiding unnecessary premises when they might be wrong.
There are also a lot of merits to reasoning from premises when they allow more progress and are likely to be correct. That is, of course, what I’m trying to do here.
Which of these factors is larger has to be evaluated in each specific instance. There’s a lot more to be said about that in this case, but I don’t have time to dig into it now; it’s worth a full post and discussion.