I have my own specific disagreements with EY’s foom argument, but it was difficult to find much of anything in Yann’s arguments, tone, or strategy here to agree with; EY presents the more cogent case.
Still, I’ll critique one of EY’s points here:
Sure, all of the Meta employees with spears could militarily overrun a lone defender with one spear. But when it comes to scaling on more cognitive tasks, Kasparov won the game of Kasparov vs. The World, where a massively parallel group of humans led by four grandmasters tried and failed to play chess well enough to beat Kasparov. Humans really scale very poorly, IMO.
Most real-world economic tasks seem far closer to human-army scaling than to single-chess-game scaling. In the chess example there is a very limited action space and only one move per time step. The real world isn’t like that: more humans can simply take more actions per time step, so for many economic tasks, like building electric vehicles or software empires, the scaling is much more linear, like armies. Sure, there are some hard math/engineering challenges with bottlenecks, but they aren’t really comparable to the chess example.
Sure, but I would say there are depth and breadth dimensions here. A corporation can do some things N times faster than a single human, but “coming up with loopholes in corporate law to exploit” isn’t one of them, and that’s the closest analogy to an AGI being deceptive. The issue with scaling is that in the best case you take the sum of individual efforts, or even enhance it, but in the worst case you just take the maximum, which gives you “best human effort” performance and nothing beyond.