I’m pretty sympathetic to the problem described in There’s No Fire Alarm for Artificial General Intelligence, but I think the claim that we’ve passed some sort of event horizon for self-improving systems is too strong. GPT-4 + Reflexion does not even come close to clearing the bar of “improves upon GPT-4’s architecture better than the human developers already working on it”.