Mandating open source invites the dangers described by @Alvin Ånestrand in the Rogue Replication Scenario: if an open-sourced model can be fine-tuned by terrorists, then mandating its release is a dangerous mistake for mankind.
Point 2) confuses me because I don’t understand who is supposed to write the code, or how researchers are to be prevented from, say, vibe-coding simple experiments like benchmarking capable architectures on simple tasks. And what happens if someone uses Agent-4 and OpenBrain watches it gain root access from the outside?
Point 4) is indeed a good way to inform humanity about the rate of progress, and point 1) does slow things down, but it requires international coordination.
Alas, measures that could actually slow AI research down are especially hard to lobby for if the US economy is in serious trouble, since some people in the USG might decide to race their way out of those troubles. I covered this point in the many footnotes to my take on modifying the AI-2027 scenario.