I guess I’m one of those #2’s from the fringe, and I contributed my 2 cents on Metaculus (the issue of looking for the right kind of milestones is of course related to my post about the current challenge). However, I completely reject ML/DL as a path toward AGI, and I don’t regard anything that has happened in the past few years as AI research (I’ve said that AI officially died in 2012). People in the field are not trying to solve cognitive problems, and they have rejected the idea of formal definitions of intelligence (or have declared that consciousness and sentience are non-issues). I have to use AGI as a label for “stealth mode” reasons, but I would prefer to separate the field of study from the implementation side. And while I’m not trying to build better models of the human mind, I have come to understand consciousness in a fundamental way (and working from an embodied framework, it’s easier to see just how incapable current efforts are).