Yeah, no one is being hired to code AGI at SIAI right now. As I understand it, the software-developer openings are for the "Center for Modern Rationality"/LessWrong side, e.g. creating little programs to illustrate Bayes' rule and the like.
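(Purely as a hypothetical sketch of the kind of toy illustration program meant here, not anything drawn from actual SIAI/CMR code, the function and the example numbers below are mine:)

```python
# Hypothetical Bayes'-rule illustrator: a minimal sketch, not any real
# SIAI/CMR program. Shows how a posterior is computed from a prior and
# two likelihoods.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    # Total probability of the evidence under both hypotheses.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Classic textbook example: 1% base rate, a test with 80% sensitivity
# and a 9.6% false-positive rate.
print(posterior(0.01, 0.80, 0.096))  # ~0.078: a positive test leaves only ~7.8%
```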
Eliezer wants an FAI team to undertake many years of theoretical CS and AI research before trying to code an AGI, and that research group has not even been assembled and is not currently in operation. Also, I would hope that it would have a number of members with comparable or superior intellectual chops who would act as a check on any of Eliezer’s individual biases.
Not if there is self-selection for members whose biases coincide with Eliezer's. It is even worse if the reasoning you outlined is used to lower risk estimates.