But don’t be too quick to write off Factored Cognition entirely based on that. The fact that it’s a problem doesn’t mean it’s unsolvable.
I agree. I’m always inclined to say something like “I’m a bit skeptical about factored cognition, but I guess maybe it could work, who knows, couldn’t hurt to try”, but then I remember that I don’t need to say that, because practically everyone else thinks that too, even its most enthusiastic advocates, as far as I can tell from my very light and casual familiarity with it.
I haven’t read any of the posts you’ve linked.
Hmm, maybe if you were going to read just one of mine on this particular topic, it should be Can You Get AGI From A Transformer rather than the one I linked above. Meh, either way.
I’ve read them both, plus a bunch of your other posts. I think understanding the brain is pretty important for analyzing Factored Cognition—my problem (and this is one I have in general) is that I find it almost impossibly difficult to just go and learn about a field I don’t yet know anything about without guidance. That’s why I had resigned myself to writing the sequence without engaging with the neuroscience literature. Your posts have helped with that, though, so thanks.
Fortunately, insofar as I’ve understood things correctly, your framework (which I know is a selection of theories from the literature and not uncontroversial) appears to agree with everything I’ve written in the sequence. More generally, I find the generative model picture strongly aligns with introspection, which has been my guide so far. When I pay attention to how I think about a difficult problem, and I’ve done that a lot while writing the sequence, it feels very much like waiting for the right hypothesis/explanation to appear, and not like reasoning backward. The mechanism that gives an illusion of control is precisely the fact that we decompose and can think about subquestions, so that part is a sort of reasoning backward on a high level—but at bottom, I’m purely relying on my brain to just spit out explanations.
Anyway, now I can add some (albeit indirect) reference to the neuroscience literature into that part of the sequence, which is nice :-)
Thanks! Haha, nothing wrong with introspection! It’s valid data, albeit sometimes misinterpreted or overgeneralized. Anyway, looking forward to your future posts!