In reading the Singularity Institute's research goals, and the ruminations of Yudkowsky, Wei Dai, Nesov et al. in postings here, the approach to developing friendly AI that stands out the most, and that from my perspective seems always to have been the assumption, is an exclusively logic-based one in the vein of John McCarthy.
I am wondering how the decision was made to focus SIAI's research on the pure logic side, rather than, for example, building a synthetic consciousness that uses the brain as a model.
To be sure, nearly all AI approaches overlap at some point. However, it seems a bit like reinventing the wheel from an almost purely theoretical perspective when you have a very good model to work from.
I would disagree. The overwhelming majority of the average human's life is spent peacefully. It is actually fairly remarkable how rarely we have significant conflict, especially considering the relatively overcrowded places in which humans live. Not to mention that only a small proportion of the human population engages other humans destructively (directly, rather than by proxy).