Regarding v1 of the “Agent Foundations...” paper (then called “Aligning Superintelligence with Human Interests: A Technical Research Agenda”), the original file is here.
To make it easier to find older versions of MIRI papers and check whether there are substantive changes (e.g., for purposes of citing a claim), I’ve made a page at https://intelligence.org/revisions/ listing obsolete versions of a number of papers.
Regarding the term “alignment” as a name for the field/problem: my recollection is that Stuart Russell suggested the term to MIRI in 2014, before anyone started using it publicly. We ran with “(AI) alignment” instead of “value alignment” because we didn’t want people to equate the value learning problem with the whole alignment problem.
(I also think “value alignment” is confusing because it can be read as saying humans and AI systems both have values, and we’re trying to bring the two parties’ values into alignment. This conflicts with the colloquial use of “values,” which treats it as more of a human thing, compared to more neutral terms like “goals” or “preferences.” And Eliezer has historically used “values” to specifically refer to humanity’s true preferences.)
Footnote: Looks like MIRI was using “Friendly AI” in our research agenda drafts as of Oct. 23, and we switched to “aligned AI” by Nov. 20 (though we were using phrasings like “reliably aligned with the intentions of its programmers” earlier than that).