No, we need to maintain diversity and enough decentralization (e.g. because of the need for resilience against a single-missile nuclear attack; a single super-valuable site is very vulnerable).
Moreover, since “AI existential safety” is a preparadigmatic field, it is particularly important to be able to explore various non-standard ideas (government control, and especially military control, is not conducive to this).
There is a much cheaper idea (which still needs some money and, more crucially, some extremely strong organizational skills): promote the creation of small, few-person, short-term collaborations exploring various non-standard ideas in “AI existential safety”. Basically: let’s have a lot of brainstorms, let’s have infrastructure to support those brainstorms and to share their fruits, and let’s figure out how to make really rapid progress here as a network rather than a hierarchy (the problem is probably too difficult to be rapidly solved by a hierarchy).
That would generate novel ideas. (We just need to keep in mind that anything which actually works in “AI alignment” (or, more generally, in any approach to “AI existential safety”, whether alignment-based or not) is highly likely to be dual-use and a major capability booster. No one knows how to handle this correctly.)
“we don’t seem to have Von Neumanns anymore”
Oh, but we do. For example (and despite my misgivings about OpenAI’s current approach to alignment), Ilya Sutskever is of that caliber (AlexNet, GPT-3, GPT-4, and a lot of other remarkable results speak for themselves). And now he will focus on alignment.
That being said, genius scientists are great for doing genius things, but are not necessarily the best policy decision-makers (e.g. von Neumann strongly advocated a preventive nuclear attack against the Soviet Union, and Oppenheimer, after opposing thermonuclear weapons, actually suggested using an early prototype thermonuclear device in the Korean War).
So technical research and technical solutions are one thing, but decision-making and security-keeping are quite another (they seem to require very different people with very different skills).
I agree that more diverse orgs would be good; heck, I’m trying to do that on at least 1-2 fronts rn.
I’m not as up-to-date on key AI-researcher figures as I prolly should be, but the big-if-true here is that Ilya really is JVN-level, is doing alignment, and works at OpenAI; that’s a damn good combo for at least somebody to have.
Yes, assuming the first sentence of their overall approach is not excessively straightforward:
“We need scientific and technical breakthroughs to steer and control AI systems much smarter than us.”
It might be that a more subtle approach than “steer and control AI systems much smarter than us” is needed. (But also, they might be open to all kinds of pivoting on this.)
:-) Well, Ilya is not from Hungary; he was born in Gorky, but otherwise he is a total and obvious first-rate Martian :-)