Do you agree that risks from artificial intelligence have to be taken very seriously?
How do risks from AI compare to other existential risks, e.g. advanced nanotechnology?
The second of these implies that you're interested specifically in existential risks, but the first should state that explicitly. Otherwise most people will interpret "risks from artificial intelligence" to include things like people losing their jobs.
Previous posts worth mentioning:
http://lesswrong.com/r/discussion/lw/4rx/singularity_and_friendly_ai_in_the_dominant_ai/
http://lesswrong.com/lw/2zv/nils_nilssons_ai_history_the_quest_for_artificial/2wc8