[LINK] “Moral Machines” article in the New Yorker links to SI paper

Link

Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work.

That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems.

The discussion itself is mainly concerned with the behavior of self-driving cars and robot soldiers rather than FAI, but Marcus does obliquely reference the prickliness of the problem. After briefly introducing wireheading (presumably as an example of what can go wrong), he links to http://singularity.org/files/SaME.pdf, saying:

Almost any easy solution that one might imagine leads to some variation or another on the Sorcerer’s Apprentice, a genie that’s given us what we’ve asked for, rather than what we truly desire.

He also mentions FHI and the Yale Bioethics Center along with SingInst:

A tiny cadre of brave-hearted souls at Oxford, Yale, and the Berkeley California Singularity Institute are working on these problems, but the annual amount of money being spent on developing machine morality is tiny.

It’s a mainstream introduction, and perhaps not the best or most convincing one, but I think it’s a positive development that machine ethics is getting a serious treatment in the mainstream media.