One of my key concerns is the question of:
1. Do the currently missing LLM abilities scale like pre-training, where each improvement requires spending 10x as much money?
2. Or do the currently missing abilities scale more like “reasoning”, where individual university groups could fine-tune an existing model for under $5,000 in GPU costs, and give it significant new abilities?
3. Or is the real situation somewhere in between?
Category (2) is what Bostrom described as a “vulnerable world”, or a “recipe for ruin.” Also, not everyone believes that “alignment” will actually work for ASI. Under these assumptions, widely publishing detailed proposals in category (2) seems unwise.
Also, even if I believed that someone would eventually figure out the necessary insights to build AGI, it still matters how quickly they do it. Given a choice between dying of cancer in 6 months or in 12 (all other things being equal), I would pick 12.
(I really ought to write an actual discussion post on the right way to handle even “recipes for small-scale ruin.” After September 11th, this was a regular topic of discussion among engineers and STEM types. It turns out that there are some truly nasty vulnerabilities that are known to experts but not widely known to the public. If these vulnerabilities can be fixed, it’s usually better to publicize them. But what should you do if a vulnerability is fundamentally unfixable?)