I’d say I’m closer to Camp B. From Eliezer’s earlier writings I can see, at least conceptually, how we might arrive at ASI, but I don’t really know how it would actually be developed in practice. In particular, the idea that scaling could somehow produce emergence or self-reference doesn’t yet seem to me to have any solid technical or scientific basis; it asks for a qualitative leap I can’t account for.
As Douglas Hofstadter suggested in Gödel, Escher, Bach, the essence of human cognition lies in its self-referential nature, the ability to move between levels of thought, and the capacity to choose what counts as figure and what counts as ground in one’s own perception. I’m not even sure AGI—let alone ASI—could ever truly have that.
To put it in Merleau-Ponty’s terms, intelligence requires embodiment; the world is always a world of something; and consciousness is inherently perspectival. I know it sounds a bit old-fashioned to bring phenomenology into such a cutting-edge discussion, but I still think there are hints worth exploring in both Gödelian semiotics and Merleau-Ponty’s phenomenology.
Ultimately, my point is this: a body isn’t designed top-down but grown bottom-up, and you might expect intelligence to arise the same way; yet artificial intelligence, at least as we’re building it, looks more like something engineered than cultivated. That’s why I feel current discussions around ASI still lack concreteness. Maybe that’s something LessWrong could focus on more: going beyond math and economics and bringing in perspectives from structuralism and biology as well.
And of course, if you’d like to talk more about this, I’d really welcome that.