So you can only imagine the excitement when, a few years after LessWrong was established, we found that *literally all your brain does, in every part, is make Bayesian updates to minimize prediction errors.* Scott Alexander wrote several excited articles about it. I wrote several excited articles about it. And most everyone else said ok cool story, we’re gonna ignore it and just do AI posting now.
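(For readers who want the claim made concrete: a minimal toy sketch of what “Bayesian updating as prediction-error minimization” means in the simplest possible case. This is my own illustrative example with made-up numbers, not anything from the predictive-processing literature: a single Gaussian belief, where the exact Bayesian update happens to be “nudge your estimate by a precision-weighted prediction error.”)

```python
# Toy sketch: conjugate Gaussian belief update, written in prediction-error form.
# Illustrative only; the variable names and numbers are hypothetical.

def bayes_update(prior_mean, prior_precision, observation, obs_precision):
    """One Bayesian update of a Gaussian belief, driven by the prediction error."""
    prediction_error = observation - prior_mean
    # The belief moves toward the observation by a fraction equal to the
    # relative precision (confidence) of the new evidence.
    gain = obs_precision / (prior_precision + obs_precision)
    posterior_mean = prior_mean + gain * prediction_error
    posterior_precision = prior_precision + obs_precision
    return posterior_mean, posterior_precision

# Example: a vague prior gets pulled toward noisy observations around 2.0.
belief, precision = 0.0, 1.0
for obs in [2.1, 1.9, 2.0, 2.2]:
    belief, precision = bayes_update(belief, precision, obs, obs_precision=4.0)
print(round(belief, 3))  # converges near 2.0
```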
Did we “find” this? That’s not what seems to me to have happened.
As far as I can tell, this is a speculative theory that some people found to be very interesting. Others said “well, perhaps, this is certainly an interesting way of thinking about some things, although there seem to be some hiccups, and maybe it has some correspondence to actual reality, maybe not—in any case do you have a more concrete model and stronger evidence?” and got the answer “more research is needed”. Alright, cool, by all means, proceed, research more.
And then nothing much came of it, and so of course it was thereafter mostly ignored. What is there to say about it or do with it?
It’s still the best guess of everyone involved, as far as I can tell; it’s just not a field that moves that fast or is that actionable (especially compared to AI).