(Summoned by @Alexander Gietelink Oldenziel)

I don’t understand this comment. I usually don’t think of “building a safer LLM agent” as a viable route to aligned AI. My current best guess about how to create aligned AI is Physicalist Superimitation. We can imagine other approaches, e.g. Quantilized Debate, but I am less optimistic there. More importantly, I believe that we need to complete the theory of agents first, before we can have strong confidence about which approaches are more promising.
As to heuristic implementations of infra-Bayesianism, this is something I don’t want to speculate about in public; it seems exfohazardous.
> I usually don’t think of “building a safer LLM agent” as a viable route to aligned AI
I agree that building a safer LLM agent is an incredibly fraught path that probably doesn’t work. My comment is in the context of Abram’s first approach, developing safer AI tech that companies might (apparently voluntarily) switch to, and specifically the route of scaling up IB to compete with LLM agents. Note that Abram also seems to be discussing the AI 2027 report, which, if taken seriously, requires all of this to be done in about 2 years. Conditioning on this route, I suggest that most realistic paths look like what I described, but I am pretty pessimistic that it will actually work. The reason is that I don’t see explicitly Bayesian glass-box methods competing with massive black-box models at tasks like natural language prediction any time soon. But who knows, perhaps with the “true” (IB?) theory of agency in hand much more is possible.
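To make the glass-box/black-box contrast above concrete, here is a minimal toy sketch (my own illustration, not anything proposed in this thread; the class names `NGramModel` and `BayesMixture` are hypothetical) of what an explicitly Bayesian predictor looks like: a posterior-weighted mixture over a small, fully inspectable hypothesis class. Every prior, likelihood, and posterior weight can be read off the object, which is the “glass-box” property; the difficulty is that matching a large black-box model on natural language would require a vastly richer hypothesis class than anything like this.

```python
# Toy illustration (not from the thread): a transparent, explicitly Bayesian
# next-token predictor. Hypotheses are Laplace-smoothed n-gram models for
# n = 1..3, and prediction is Bayesian model averaging over them.
from collections import defaultdict
import math

class NGramModel:
    """Laplace-smoothed n-gram model over a fixed vocabulary."""
    def __init__(self, n, vocab):
        self.n = n
        self.vocab = vocab
        self.counts = defaultdict(lambda: defaultdict(int))

    def _ctx(self, context):
        return tuple(context[-(self.n - 1):]) if self.n > 1 else ()

    def prob(self, context, token):
        c = self.counts[self._ctx(context)]
        return (c[token] + 1) / (sum(c.values()) + len(self.vocab))

    def update(self, context, token):
        self.counts[self._ctx(context)][token] += 1

class BayesMixture:
    """Glass-box predictor: posterior-weighted mixture over explicit hypotheses."""
    def __init__(self, vocab):
        self.models = [NGramModel(n, vocab) for n in (1, 2, 3)]
        self.log_w = [math.log(1.0 / len(self.models))] * len(self.models)  # uniform prior

    def _weights(self):
        z = max(self.log_w)
        exps = [math.exp(lw - z) for lw in self.log_w]
        s = sum(exps)
        return [e / s for e in exps]

    def predict(self, context, token):
        # Bayesian model averaging: sum_i w_i * P_i(token | context).
        return sum(w * m.prob(context, token)
                   for w, m in zip(self._weights(), self.models))

    def observe(self, context, token):
        # Posterior update w_i ∝ w_i * P_i(token | context), then train each model.
        for i, m in enumerate(self.models):
            self.log_w[i] += math.log(m.prob(context, token))
            m.update(context, token)

# Usage: feed a token stream and watch the posterior shift toward the n-gram
# order that best explains the data.
vocab = ["a", "b"]
mixture = BayesMixture(vocab)
stream = ["a", "b", "a", "b", "a", "b", "a", "b"]
for i, tok in enumerate(stream):
    mixture.observe(stream[:i], tok)
print(mixture._weights())  # posterior over the three hypotheses
```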
> More importantly, I believe that we need to complete the theory of agents first, before we can have strong confidence about which approaches are more promising.
I’m not sure it’s possible to “complete” the theory of agents, and I am particularly skeptical that we can do it any time soon. However, I think we agree locally / directionally, because it also seems to me that a more rigorous theory of agency is necessary for alignment.
> As to heuristic implementations of infra-Bayesianism, this is something I don’t want to speculate about in public; it seems exfohazardous.
Fair enough, but in that case, it seems impossible for this conversation to meaningfully progress here.
I think that in 2 years we’re unlikely to accomplish anything that leaves a dent in P(DOOM), with any method, but I also think it’s more likely than not that we actually have >15 years.
As to “completing” the theory of agents, I used the phrase (perhaps perversely) in the same sense that e.g. we “completed” the theory of information: the latter exists and can actually be used for its intended applications (communication systems). Or at least in the sense we “completed” the theory of computational complexity: even though a lot of key conjectures are still unproven, we do have a rigorous understanding of what computational complexity is and know how to determine it for many (even if far from all) problems of interest.
I probably should have said “create” rather than “complete”.
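As an illustrative aside (my addition, not part of the exchange) on what “can actually be used for its intended applications” means for information theory: Shannon’s noisy-channel coding theorem answers a concrete engineering question in closed form, e.g. the capacity of a binary symmetric channel with crossover probability p:

```latex
C = 1 - H(p), \qquad H(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)
```

For p = 0.11 this gives C ≈ 0.5 bits per channel use: no code can communicate reliably above that rate, and codes approaching it exist. That is the sense in which the theory is “complete” enough to use.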
I agree with all of this.