(And why I am posting this: looking at the donations received by SIAI, and having seen talk of hiring software developers, I got Pascal-wagered into explaining it.)
Where did you see talk of hiring software developers, and are you sure they’re not web developers or something like that? I’d be concerned too if SIAI were hiring software developers to build FAI, but based on what I know about them, it seems extremely unlikely at this point.
Yeah, no one is being hired to code AGI at SIAI right now. Software developers are for the “Center for Modern Rationality”/LessWrong side, as I understand it, e.g. creating little programs to illustrate Bayes’ rule and the like.
Eliezer wants an FAI team to undertake many years of theoretical CS and AI research before trying to code an AGI, and that research group has not even been assembled and is not currently in operation. Also, I would hope that it would have a number of members with comparable or superior intellectual chops who would act as a check on any of Eliezer’s individual biases.
Not if there is self-selection for biases that coincide with Eliezer’s. Even worse if the reasoning you outlined is used to lower risk estimates.