The reason I didn’t link to that discussion is that it was kind of tangential to what will be my main points. My goal is to understand the natural setting of the learning problem, not the specifics of how humans solve it.
But you’ve made assumptions that will keep you from finding that setting. Your approach already commits itself to treating humans as a blank slate. But humans aren’t “blank slate with great algorithm”; they’re “heavily formatted slate with respectable context-specific algorithm”.
Let’s postpone this debate until the main points become a bit more clear. I don’t think of myself as “treating humans” at all, much less as a blank slate!
Could you at least give some signal of your idea’s quality that distinguishes it from the millions of people with hopeless ideas who scream, “You guys are doing it all wrong; I’ve got something that’s totally different from everything else and will get it right this time”?
Because a lot of what you’ve said so far isn’t promising.
Yikes, take it easy. When I said “let’s argue”, I meant let’s argue after I’ve made some of my main points.
Yes, I read that part of your comment. But having now read roughly 1,500 words on your idea (this article plus our past exchange), I still can’t find a sign of anything promising, and you’ve had more than enough space by now to distinguish yourself from the dime-a-dozen folks claiming to have all the answers on AI.
I strongly recommend that you look at whatever you have prepared for your next article, and cut it down to about 500 words in which you get straight to the point.
LW is a great site because of its frequent comments and articles from people who have assimilated Eliezer Yudkowsky’s lessons on rationality; I’d hate to see it turn into a platform for just any AI idea that someone thinks is the greatest ever.
Which will be soon, right?