Some dates in your bibliography list, such as 1905-07-10, seem to be errors.
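A likely cause, assuming the list was exported from a spreadsheet such as Google Sheets: a bare publication year (e.g., 2018) gets interpreted as a day-serial number counted from the spreadsheet epoch of 1899-12-30, which lands exactly on 1905-07-10. A minimal sketch of that conversion, under those assumptions:

```python
from datetime import date, timedelta

# Google Sheets (and post-1900 Excel serials) count days from 1899-12-30.
SHEETS_EPOCH = date(1899, 12, 30)

def serial_to_date(serial: int) -> date:
    """Convert a spreadsheet day-serial number to a calendar date."""
    return SHEETS_EPOCH + timedelta(days=serial)

# A publication year mistakenly parsed as a serial number:
print(serial_to_date(2018))  # 1905-07-10
print(serial_to_date(2017))  # 1905-07-09
```

So a date of 1905-07-10 in the list probably started life as the year 2018.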
I compiled my full bibliography and, just for the sake of completeness, am posting it here.
List of my AI Safety-related articles (many are coauthored): published, drafted, and planned:
Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence (published in “Informatica”)
Military AI as a Convergent Goal of Self-Improving AI (published in “AI Safety and Security”)
Classification of Global Catastrophic Risks Connected with Artificial Intelligence (published in “AI & Society”)
Predictions of the Near-Term Global Catastrophic Risks of Artificial Intelligence (published in “Futures” under the title “Assessing the future plausibility of catastrophically dangerous AI”)
The Global Catastrophic Risks Connected with the Possibility of Finding Alien AI During SETI (published in the “Journal of the British Interplanetary Society”)
Classification of the Global Solutions of the AI Safety Problem (won a GoodAI prize; submitted)
Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons (draft)
Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” (not intended for publication in its current form, but probably the most important of all these works, as it is actionable as it stands; scheduled for revision in 2019)
“Decisive strategic advantage via Narrow AI” (LW post; submitted)
Levels of self-improvement (LW post; draft)
The map of “Levels of defence” in AI safety (LW post)
“Possible Dangers of the Unrestricted Value Learners” (LW post)
“AI nanny via human upload” (early draft)
“Catching treacherous turn: different ideas about AI boxing” (early draft)
Hidden assumptions in the idea that humans have values (AI Safety Camp project, to be finished in early 2019)