There is also insufficient evidence to conclude that Yudkowsky, or anyone else within the SIAI, is smart enough to tackle the problem of friendliness mathematically.
The short-term goal seems more modest: proving that self-improving agents can have stable goal structures.
If that could be proved, it would be fascinating, and important. I don't know what the chances of success are, but Yudkowsky's pitch is along the lines of: look, this stuff is pretty important, and we are spending less on it than we do on testing lipstick.
That's a pitch that is hard to argue with, IMO. Machine intelligence research does seem important and currently underfunded. Yudkowsky is, IMHO, a pretty smart fellow. If he will work on the problem for $80K a year (or whatever), there seems to be a reasonable case for letting him get on with it.