For example, there’s a general historical metapattern that it takes more and more resources to learn more about the universe.
This is one of the strongest arguments I’ve ever heard against FOOM. But if we can get an AI up to the level of one moderately smart scientist, horizontal scaling turns it into a million scientists working at 1000x the human rate, without any of the usual problems of coordination and akrasia — which sounds extremely scary.