Maybe treat The Last Evolution more like you treated Runaround. Just condense the relevant points.
Arthur C. Clarke is better known as an author, so I’d prefer to see him listed as “futurist and author.” The last sentence of Clarke’s quote is just going to feed the dreaded fourth definition of the singularity, and should probably be dropped.
The Vinge quote seems unnecessary, since you’ve quoted Lukasiewicz with a much more directly relevant quote about unpredictability.
I then want to see a little more logical structure, more than just saying “FAI is AI that has a positive impact.” Maybe frame FAI in response to Lukasiewicz’s quote, in terms of being rigorously able to predict that some AI will have a positive impact.
Was FAI or machine ethics mentioned in Chalmers’ paper? Will these topics be discussed in the follow-up issue? If so, say so; if not, say less, or explain why this is still important for the friendly AI concept.
The last paragraph then suddenly jumps. Maybe start with a “despite their parallel yada yada.” Does the machine ethics literature cite friendly AI literature?
Because CEV predates the stuff you were talking about just above, I’d rather see a short mention of it at the end of the “Eliezer Yudkowsky paragraph.” Maybe just call it (Yudkowsky 2004) - the important part isn’t the idea of CEV, it’s that one of the prongs of FAI is goal systems that can be predicted to have positive impact.
Thanks! Agree with all this except I’ll keep the Vinge quote.