Third, I do tentatively hold that p(eAI | attempt towards FAI) > p(eAI | attempt towards AGI).
Clearly, this is possible. If an FAI team comes to think this is true during development, I hope they’ll reconsider their plans. But can you provide, or link me to, some reasons for suspecting that p(eAI | attempt towards FAI) > p(eAI | attempt towards AGI)?
Some relevant posts/comments:
http://lesswrong.com/lw/ajm/ai_risk_and_opportunity_a_strategic_analysis/5ylx
http://lesswrong.com/lw/axj/the_ai_design_space_near_the_fai_draft/
http://lesswrong.com/lw/axj/the_ai_design_space_near_the_fai_draft/623p