I am writing a few papers and a book on machine ethics and superintelligence.
My goals with this work are to:
Summarize the existing machine-ethics literature on how to design the motivational system of artificial moral agents (a surprisingly little-discussed problem so far; probably fewer than 5,000 pages in the academic press!) and apply it to the specific problem of superintelligence.
Update and strengthen the Good / Chalmers argument for why a superintelligence is likely to arise within a few centuries, barring global catastrophe or active prevention.
Explain in detail why a few dozen commonly proposed “solutions” to the problem of Friendly AI will not work. (Basically, catch everybody up to where Eliezer Yudkowsky was as of about 2004.)
Translate the contributions of the SIAI community to machine ethics into the language of mainstream philosophy and science, to give SIAI more credibility and attract more elites to the cause of solving the Friendly AI problem.