Any opinions on where Goertzel’s stuff stands in relation to whatever there is that passes for state of the art in AGI research?
Depends on how you dereference “AGI research”. The term was invented by Goertzel et al. to describe what OpenCog is, so at least from that standpoint it is very relevant. Stepping back, among people who actually bother to make the AI/AGI distinction, OpenCog is definitely one giant, influential project in this relatively small field. It’s not a monoculture community, though, and there are other influential AGI projects with very different designs. But OpenCog is certainly a heavyweight contender.
Of course, there are also the groups which don’t make the AI/AGI distinction, such as most of the machine learning & perception crowds, and Kurzweil et al. These people think they can achieve general intelligence by layering narrow AI techniques or through direct emulation, and probably think very little of the integrative methods pursued by Goertzel.
And is it even worth trying to have this conversation on LW?
Can you elaborate? I’m not sure I understand the question. Why wouldn’t this be a great place to discuss AGI?
Why wouldn’t this be a great place to discuss AGI?
Because LW has been around for 5 or so years, and I remember seeing very little nuts-and-bolts AI discussion at the level of, say, Starglider’s AI Mini-FAQ happen here: very few discussions about the deep technical details of something like IBM’s recent AI work, whatever goes on at DeepMind, and things like that. Of course there are going to be trade secrets involved, but beyond pretty much just AIXI, I don’t even see much ambient awareness of whatever publicly known technical methods there are that the companies are probably basing their stuff on. It’s as if the industry were busy fielding automobiles, biplanes and tanks while the majority at LW still had trouble figuring out the basic concepts of steam power.
LW can discuss the philosophy part, but I don’t see much capability around here that could actually look through Goertzel’s design and go, for instance, “this thing looks like a non-starter because of recognized technical problem X”, “this thing resembles successful design Y, it’s probably worth studying more closely” or “this thing has a really novel and interesting attack on known technical problem Z; even if the rest is junk, that part definitely needs close study”. And I don’t think the philosophy is going to stay afloat for very long if its practitioners aren’t able to follow the technical details of what people are actually doing in the domain they’d like to philosophize about.
I was going to respond with a biting “well then what the heck is the point of LW?” post, but I think you got the point:
I don’t think the philosophy is going to stay afloat for very long if its practitioners aren’t able to follow the technical details of what people are actually doing in the domain they’d like to philosophize about.
Frankly, without a willingness to educate oneself about implementation details, the philosophizing is pointless. Maybe this is a wake-up call for me to go find a better community :\
EDIT: Who created the Starglider AI Mini-FAQ? Do we know their real-world identity?
Frankly, without a willingness to educate oneself about implementation details, the philosophizing is pointless. Maybe this is a wake-up call for me to go find a better community :\
I was hoping more for “study technical AI details and post about them here”, but whatever works. If you do find a better community, post a note here somewhere.
My goal is to enact a positive singularity. To that end, I’m not convinced that educating people on the interwebs is instrumental, given the other things I could be doing.
I had thought that a community with a tight focus on ‘friendly AGI’ would be interested in learning and discussing how such an AGI might actually be constructed, or otherwise getting involved in some way. If not, I don’t think it’s worth my time to correct this mistake.
EDIT: Who created the Starglider AI Mini-FAQ? Do we know their real-world identity?
Michael Wilson, looks like.
Oh really? :-D