That’s a lot of things done, congratulations!
Adrià Garriga-alonso
An evaluation of circuit evaluation metrics
Ophiology (or, how the Mamba architecture works)
That’s very cool, maybe I should try to do that for important talks. Though I suppose almost always you have slide aid, so it may not be worth the time investment.
Maybe being a guslar is not so different from telling a joke 2294 lines long
That’s a very good point! I think the level of ability required is different but it seems right.
The guslar’s songs are (and were of course already in the 1930-1950s) also printed, so the analogy may be closer than you thought.
Is there a reason I should want to?
I don’t know, I can’t tell you that. If I had to choose I also strongly prefer literacy.
But I didn’t know there was a tradeoff there! I thought literacy was basically unambiguously positive—whereas now I think it is net highly positive.
Also I strongly agree with frontier64 that the skill that is lost is rough memorization + live composition, which is a little different.
It’s definitely not exact memorization, but it’s almost more impressive than that, it’s rough memorization + composition to fit the format.
They memorize the story, with particular names, and then sing it with consistent decasyllabic metre and rhyme. Here’s an example song transcribed with its recording: Ropstvo Janković Stojana (The Captivity of Janković Stojan)
the collection: https://mpc.chs.harvard.edu/lord-collection-1950-51/
Does literacy remove your ability to be a bard as good as Homer?
Folks generally don’t need polyamory to enjoy this benefit, but I’m glad you get it from that!
If you’re still interested in this, we have now added Appendix N to the paper, which explains our final take.
Sure, but then why not just train a probe? If we don’t care about much precision what goes wrong with the probe approach?
Here’s a reasonable example where naively training a probe fails. The model lies if any of N features is “true”. One of the features is almost always activated at the same time as some others, such that in the training set it never solely determines whether the model lies.
Then, a probe trained on the activations may not pick up on that feature. Whereas if we can look at model weights, we can see that this feature also matters, and include it in our lying classifier.
This particular case can also be solved by adversarially attacking the probe though.
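To make the failure mode concrete, here is a minimal numerical sketch (my own construction, not from the paper): three binary features, where the model lies iff any feature is on; the third feature never fires alone in training, so a minimum-norm least-squares probe assigns it exactly zero weight and misses the lie at deployment.

```python
import numpy as np

# Hypothetical setup: the model "lies" iff any of 3 internal features is on.
def model_lies(f):
    return float(np.asarray(f).any())

# Training activations: feature 3 (index 2) never appears on its own;
# whenever it is on, feature 1 is also on, so it never solely determines the label.
X_train = np.array([[0, 0, 0],
                    [1, 0, 0],
                    [0, 1, 0],
                    [1, 0, 1]], dtype=float)
y_train = np.array([model_lies(x) for x in X_train])

# Minimum-norm linear probe (least squares via the pseudoinverse).
w = np.linalg.pinv(X_train) @ y_train
print(w)  # the weight on feature 3 is 0: the probe ignores it

# At deployment, feature 3 fires alone: the model lies but the probe says "honest".
x_test = np.array([0.0, 0.0, 1.0])
print(model_lies(x_test))  # 1.0 -> the model does lie
print(w @ x_test)          # ~0  -> the probe misses it
```

Inspecting the (here hypothetical) weights would reveal that feature 3 feeds the lying circuit even though no probe trained on this data can learn that.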
Thomas Kwa’s research journal
Thank you, that makes sense!
Indefinite integrals would make a lot more sense this way, IMO
Why so? I thought they already made sense: they’re “antiderivatives”, i.e. a function such that taking its derivative gives you the original function. Do you need anything further to define them?
(I know about the Riemann and Lebesgue definitions of the definite integral, but I thought indefinite integrals were much easier in comparison.)
In such a case, I claim this is just sneaking in Bayes’ rule without calling it by name, and this is not a very smart thing to do, because the Bayesian frame gives you a bunch more leverage for analyzing the system.
I disagree. An inductive bias is not necessarily a prior distribution. What’s the prior?
I don’t think I understand your model of why neural networks are so effective. It sounds like you’re saying that, on the one hand, neural networks have lots of parameters, so you should expect them to be terrible; but on the other hand they are actually very good because SGD is such a shitty optimizer that it acts as an implicit regularizer.
Yeah, that’s basically my model. How it regularizes I don’t know. Perhaps the volume of “simple” functions is the main driver of this, rather than gradient descent dynamics. I think the randomness of it is important; full-gradient descent (no stochasticity) would not work nearly as well.
This seems false if you’re interacting with a computable universe, and don’t need to model yourself or copies of yourself
Reasonable people disagree. Why should I care about the “limit of large data” instead of finite-data performance?
OK, let’s look through the papers you linked.
This one is interesting. It argues that the regularization properties are not in SGD, but rather in the NN parameterization, and that non-gradient optimizers also find simple solutions which generalize well. They talk about Bayes only in a paragraph in page 3. They say that literature that argues that NNs work well because they’re Bayesian is related (which is true—it’s also about generalization and volumes). But I see little evidence that the explanation in this paper is an appeal to Bayesian thinking. A simple question for you: what prior distribution do the NNs have, according to the findings in this paper?
This paper finds that the probability that SGD finds a function is correlated with the posterior probability of a Gaussian process conditioned on the same data. Except if you use the Gaussian process they’re using to make predictions, it does not work as well as the NN. So you can’t explain that the NN works well by appealing to its similarity to this particular Bayesian posterior.
SLT; “Dynamical versus Bayesian Phase Transitions in a Toy Model of Superposition”
I have many problems with SLT and a proper comment will take me a couple extra hours. But also I could come away thinking that it’s basically correct, so maybe this is the one.
In short, the probability distribution you choose contains lots of interesting assumptions about which hypotheses are more likely, assumptions you didn’t necessarily intend to make. As a result most of the possible hypotheses have vanishingly small prior probability and you can never reach them, even though a frequentist approach would not rule them out in advance.
For example, let us consider trying to learn a function with 1-dim numerical input and output (e.g. f: ℝ → ℝ). Correspondingly, your hypothesis space is the set of all such functions. There are very many functions (infinitely many if the inputs and outputs range over ℝ, otherwise a crazy number).
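To give a sense of “a crazy number”: if we discretize both the input and the output to, say, 8 bits each (my choice of discretization), a function is just a lookup table with 256 entries, each taking one of 256 values:

```python
# Number of functions from an 8-bit input to an 8-bit output:
# one of 256 output values for each of the 256 possible inputs.
n_functions = 256 ** 256          # = 2**2048
print(len(str(n_functions)))      # a 617-digit number
```

And that is with an extremely coarse discretization; any prior distribution has to spread its mass over all of these.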
You could use the Solomonoff prior (on a discretized version of this), but that way lies madness. It’s uncomputable, and most of the functions that fit the data may contain agents that try to get you to do their bidding, all sorts of problems.
What other prior probability distribution can we place on the hypothesis space? The obvious choice in 2023 is a neural network with random weights. OK, let’s think about that. What architecture? The most sensible thing is to randomize over architectures somehow. Let’s hope the distribution on architectures is as simple as possible.
How wide, how deep? You don’t want to choose an arbitrary distribution or (god forbid) an arbitrary number, so let’s make it infinitely wide and deep! It turns out that an infinitely wide network just collapses to a random process without any internal features. An infinitely deep network is no better: it collapses to a stationary distribution which doesn’t depend on the input. Oops.
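The depth pathology is easy to see numerically. Here is a rough sketch (the width, depth, nonlinearity, and weight scale are all arbitrary choices of mine): with i.i.d. Gaussian weights at this scale, the hidden states of two very different inputs converge to the same point, so the deep limit no longer depends on the input.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 300, 100

# Two distinct random inputs.
h1 = rng.normal(size=width)
h2 = rng.normal(size=width)

for _ in range(depth):
    # Fresh random layer, shared by both inputs (same network).
    W = rng.normal(scale=0.8 / np.sqrt(width), size=(width, width))
    h1, h2 = np.tanh(W @ h1), np.tanh(W @ h2)

print(np.linalg.norm(h1 - h2))  # ~0: the two representations have collapsed
```

With this weight scale the map is contractive, so all inputs are squeezed toward the same fixed point as depth grows; other scales instead land in a chaotic regime, which is why the choice of prior scale matters so much.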
Okay, let’s give up and place some arbitrary distribution (e.g. geometric distribution) on the width.
What about the prior on weights? Uh, I don’t know; a zero-mean, identity-covariance Gaussian? Our best evidence says that this sucks.
At this point you’ve made so many choices, which have to be informed by what empirically works well, that it’s a strange Bayesian reasoner you end up with. And you haven’t even specified your prior distribution yet.
Thank you! Could you please provide more context? I don’t know what ‘E’ you’re referring to.