The Bitter Lesson applies to almost all attempts to build additional structure into neural networks, it turns out.
Out of curiosity, what are the other exceptions to this besides the obvious one of attention?
Upvoted because this mentions Nonlinear Network.
Some of your YouTube links are broken because the equals sign got escaped as “%3D”. If I were you, I’d spend a minute fixing that.
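In case it helps, here’s roughly what the breakage looks like and a quick way to undo it in Python (the URL below uses a placeholder VIDEO_ID, just for illustration):

```python
from urllib.parse import unquote

# A YouTube link whose "=" got percent-encoded somewhere along the way
broken = "https://www.youtube.com/watch?v%3DVIDEO_ID"

# unquote() turns "%3D" back into "=", restoring the query string
fixed = unquote(broken)
print(fixed)  # https://www.youtube.com/watch?v=VIDEO_ID
```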
Have you read https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message yet?
I had some similar thoughts to yours before reading that, but it helped me make a large update in favor of a superintelligence being able to perform magical-seeming feats of deduction. If a large number of smart humans working together for a long time can figure something out (without performing experiments or getting frequent updates of relevant sensory information), then a true superintelligence will also be able to.
Hilarious… I fixed my error
Reminds me of this from Scott Alexander’s Meditations on Moloch:
Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced.
What I gather from https://www.lesswrong.com/s/HzcM2dkCq7fwXBej8 is that it’s sort of like what you’re saying, but it’s much more about predictions than actual experiences. If the Learning Subsystem is imagining a plan predicted to have a high likelihood of smelling sex pheromones, seeing sexy body shapes, experiencing orgasm, etc., then the Steering Subsystem will reward the generation of that plan, basically saying “Yeah, think more thoughts like that!”
The Learning Subsystem has a bunch of abstract concepts and labels for things the Steering Subsystem doesn’t care about (and can’t even access), but there are certain hardcoded reward channels it can understand. The important thing is that those reward signals can be evaluated for imagined worlds as well as for the immediate real world.
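To make the division of labor concrete, here’s a toy sketch (the channel names, weights, and plans are all my own inventions, not anything from the sequence): the Steering Subsystem only sees a few hardcoded reward channels, and it can score an imagined plan the same way it would score the real world.

```python
# Toy illustration of the two-subsystem picture. All names and numbers are invented.

# The Steering Subsystem only understands a few hardcoded channels...
REWARD_WEIGHTS = {"pheromone_smell": 1.0, "sexy_shape_seen": 0.5, "orgasm": 5.0}

def steering_reward(predicted_channels: dict) -> float:
    """Score a (real or imagined) world purely from the hardcoded channels."""
    return sum(REWARD_WEIGHTS[k] * predicted_channels.get(k, 0.0)
               for k in REWARD_WEIGHTS)

# The Learning Subsystem's plans carry abstract content the Steering Subsystem
# can't read; only the predicted channel values matter to it.
plan_a = {"abstract_label": "ask them to dinner",
          "predicted_channels": {"sexy_shape_seen": 0.8, "pheromone_smell": 0.3}}
plan_b = {"abstract_label": "reorganize sock drawer",
          "predicted_channels": {}}

for plan in (plan_a, plan_b):
    r = steering_reward(plan["predicted_channels"])
    # A high reward here amounts to "think more thoughts like that!"
    print(plan["abstract_label"], "->", r)
```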
I’m trying to read this post now but it looks like a bunch of images (of math) are missing. Does that match what others see?