Having a large pool of specific information available for effective recall is a sign of mental health and quite useful. I’ve noticed that various successful and charismatic commentators appear to have talent in this area. It’s possible that, as well as being a sign of health, it buffers brain abilities generally, and that modern recall-augmenting tools will atrophy the native faculty. It seems you can IQ-test pretty high as long as you’re capable of remembering what words mean, but otherwise you aren’t guaranteed to have exceptional long-term memory capacity.
Jonathan_Graehl
Looking forward to Elon’s upcoming book, “IF I did it: confessions of a system prompter”
Elon is right about South Africa but foolish to patch it in the prompt. Instead, think training-data updates to the weights.
This nano-scandal is about as embarrassing as the fake Path of Exile 2 account fiasco (which he did eventually cop to). Elon is doing such great works; why must he also micro-sin?
I’m unclear on whether the ‘dimensionality’ (complexity) component to be minimized needs revision away from the naive ‘number of nonzeros’ (or from continuous priors on parameters that similarly reward zeros).
Either:
the simplest-equivalent (by the naive score) ‘dimensionality’ parameters are found by the optimization method, in which case what’s the problem?
Or not, in which case either there’s a canonicalization of the equivalent parameters onto a single representative that can be applied at each step, or an adjustment to the complexity score that accomplishes the same thing, or we can’t figure it out and we risk our optimization methods getting stuck in bad local grooves because of this.
Does this seem fair?
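To make the worry about naive scores concrete, here’s a toy construction of my own (not from the thread): a ReLU network’s positive-rescaling symmetry leaves the computed function unchanged while a continuous zero-rewarding score (sum of squared weights, standing in for those priors) changes, so such a score isn’t invariant across equivalent parameterizations.

```python
# Toy illustration (my construction): rescaling a two-layer scalar ReLU
# network by c > 0 yields a functionally identical network whose naive
# continuous complexity score (sum of squared weights) differs.

def relu(x):
    return x if x > 0.0 else 0.0

def net(w1, w2, x):
    # two-layer scalar network: f(x) = w2 * relu(w1 * x)
    return w2 * relu(w1 * x)

def l2_score(w1, w2):
    return w1 * w1 + w2 * w2

w1, w2 = 2.0, 3.0
c = 10.0                      # any c > 0 gives an equivalent network
w1b, w2b = w1 * c, w2 / c     # rescaled, functionally identical

for x in (-1.5, 0.0, 0.7, 4.0):
    assert abs(net(w1, w2, x) - net(w1b, w2b, x)) < 1e-9

print(l2_score(w1, w2), l2_score(w1b, w2b))   # same function, different score
```

So an optimizer penalized by such a score can wander among equivalent networks that it scores very differently, which is exactly the canonicalization problem above.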
This appears to be a high-quality book report. Thanks. I didn’t see anywhere the ‘because’ is demonstrated. Is it proved in the citations or do we just have ‘plausibly because’?
Physics experience with optimizing free energy has long inspired optimization methods in ML. Did physicists playing with free energy actually lead to new optimization methods, or is it just something people like to talk about?
This kind of reply is ridiculous and insulting.
“We have good reason to suspect that biological intelligence, and hence human intelligence, roughly follows similar scaling-law patterns to what we observe in machine learning systems.”
No, we don’t. Please state the reason(s) explicitly.
Google’s production search is expensive to change, but I’m sure you’re right that it is missing some obvious improvements in ‘understanding’ a la ChatGPT.
One valid excuse for low quality results is that Google’s method is actively gamed (for obvious $ reasons) by people who probably have insider info.
IMO a fair comparison would require ChatGPT to do a better job presenting a list of URLs.
how is a discretized weight/activation set amenable to the usual gradient descent optimizers?
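The usual answer (my addition, not part of the original question) is the straight-through estimator: the forward pass uses the discretized value, but the backward pass treats the rounding step as the identity, so gradient descent still gets a usable signal even though round() has zero gradient almost everywhere.

```python
# Sketch of the straight-through estimator (STE); names are my own.
# Forward: use the rounded weight. Backward: pretend round(w) == w.

def loss(w, x, target):
    yhat = round(w) * x                 # discretized forward pass
    return (yhat - target) ** 2

def ste_grad(w, x, target):
    # True d(round)/dw is 0 almost everywhere; STE substitutes 1.
    yhat = round(w) * x
    return 2.0 * (yhat - target) * x    # gradient as if w were continuous

w, lr = 0.2, 0.01
for _ in range(200):
    w -= lr * ste_grad(w, x=1.0, target=3.0)

print(round(w))   # the discretized weight has climbed to match the target
```

The same trick extends to discretized activations, which is roughly how binarized/quantized networks are trained in practice.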
You have the profits from the AI tech (+ compute supporting it) vendors and you have the improvements to everyone’s work from the AI. Presumably the improvements are more than the take by the AI sellers (esp. if open source tools are used). So it’s not appropriate to say that a small “sells AI” industry equates to a small impact on GDP.
But yes, obviously GDP growth climbing to 20% annually and staying there even for 5 years is ridiculous unless you’re a takeoff-believer.
You don’t have to compute the rotation every time for the weight matrix. You can compute it once. It’s true that you have to actually rotate the input activations for every input but that’s really trivial.
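The folding argument can be sketched in a few lines (my illustration, not from the post): for a rotation R, the rotated network computes (W Rᵀ)(R x) = W x, so W Rᵀ is precomputed once offline and only the per-input rotation R x remains at runtime.

```python
# Minimal sketch: fold a rotation into the weight matrix once;
# only the input activations need rotating per input.
import math

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

theta = 0.7
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
Rt = [[R[j][i] for j in range(2)] for i in range(2)]   # R transpose

W = [[1.0, 2.0],
     [3.0, 4.0]]
W_folded = matmul(W, Rt)                # computed once, offline

x = [0.5, -1.2]
y_plain = matvec(W, x)                  # original network
y_rot = matvec(W_folded, matvec(R, x))  # rotate input, apply folded weights

assert all(abs(a - b) < 1e-9 for a, b in zip(y_plain, y_rot))
```

Since Rᵀ R = I, the two computations agree exactly, which is why the per-input cost is just one extra rotation.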
Interesting idea.
Obviously doing this instead with a permutation composed with its inverse would do nothing but shuffle the order and not help.
You can easily do the same with any affine transformation, no? Skew, translation (scale doesn’t matter for interpretability).
More generally if you were to consider all equivalent networks, tautologically one of them is indeed more input activation ⇒ output interpretable by whatever metric you define (input is a pixel in this case?).
It’s hard for me to believe that rotations alone are likely to give much improvement. Yes, you’ll find a rotation that’s “better”.
What would suffice as convincing proof that this is valuable for a task: showing that the transformation increases the effectiveness of the best training methods.
I would try at least fine-tuning on the modified network.
I believe people commonly try to train not a sequence of equivalent-power networks (with a method to project the weights of the previous architecture onto the new one), but rather a series of increasingly detailed ones.
Anyway, good presentation of an easy to visualize “why not try it” idea.
If human lives are good, depopulation should not be pursued. If instead you only value avg QOL, there are many human lives you’d want to prevent. But anyone claiming moral authority to do so should be intensely scrutinized.
To sustain high tech-driven growth rates, we probably need (pre-real-AI) an increasing population of increasingly specialized and increasingly long-lived researchers+engineers at every intelligence threshold—as we advance, it takes longer to climb up on giants’ shoulders. It’s unclear what the needs are for below-threshold population (not zero, yet). Probably Elon is intentionally not being explicit about the eugenic-adjacent angle of the situation.
IMO this project needs an aesthetic leader. A bunch of technically competent people building tools they think might be useful is very likely to result in a bunch of unappealing stuff no one wants.
In Carmack’s recent 5+ hr interview with Lex Fridman [1], he points out that finding a particular virtual setting people love and focusing effort on it is usually how we arrive at the games/spaces that have historically driven hardware/platform adoption, and that Zucc is very obviously not doing that. The closest successful virtual space to Zucc’s approach is Roblox, a kind of social game construction kit (with a pretty high market cap), but in his opinion the usual outcome is that you build it and they don’t come. I believe Carmack also favors the technical results of optimizing a platform along with a particular game, which is part of his strong motivation for making things better in his immediate environment.
[1]
This is good thinking. Breaking out of your framework: trainings are routinely checkpointed periodically to disk (in case of crash) and can be resumed—even across algorithmic improvements in the learning method. So some trainings will effectively be maintained through upgrades. I’d say trainings are short mostly because we haven’t converged on the best model architectures and because of publication incentives. IMO benefitting from previous trainings of an evolving architecture will feature in published work over the next decade.
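The checkpoint-and-resume pattern is simple to sketch (a framework-free toy of my own; real trainings serialize optimizer state, RNG state, etc. the same way): state is periodically written to disk, and a later run, even one using an improved update rule, can load it and continue.

```python
# Toy sketch of checkpoint-and-resume across a change in learning method.
import os
import pickle
import tempfile

def save_checkpoint(path, step, weights):
    with open(path, "wb") as f:
        pickle.dump({"step": step, "weights": weights}, f)

def load_checkpoint(path):
    with open(path, "rb") as f:
        return pickle.load(f)

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.pkl")

# First run: a few "training" steps, then a crash-safe snapshot.
w = [0.0, 0.0]
for step in range(1, 4):
    w = [wi + 0.1 for wi in w]          # stand-in for an optimizer update
save_checkpoint(ckpt, step, w)

# Second run: resume from disk, with a different (improved) update rule.
state = load_checkpoint(ckpt)
w2, start = state["weights"], state["step"]
for step in range(start + 1, 6):
    w2 = [wi + 0.05 for wi in w2]       # the changed learning method
```

The resumed run never repeats the earlier steps, which is the sense in which a training can be “maintained” through algorithmic upgrades.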
One of the reasons abusers of kids/teens aren’t fully prosecuted is that the parents of victims rightly predict that everyone knowing you were raped by the babysitter (or whoever) will generate additional psychological baggage, and so they selfishly refrain from protecting other children from the same predator.
How are we ever supposed to believe that enough variables were ‘controlled for’?
More abortions → [15-year lag] less crime is of course plausible. We should expect the smaller families produced by abortion to have more resources available for the surviving children, if any, which plausibly could reduce their criminality. But the hypothesis is clearly also motivated by a belief that we should hope genetically criminal-inclined people differentially have most of the abortions (though I’m sure the authors don’t foreground this motivation).
Congrats on the accomplishments. Leaving aside the rest, I like the prompt: why don’t people wirehead? Realistically, they’re cautious because they have but one brain and low visibility into what they’d become. A digital, copyable agent that was curious about what slightly different versions of itself would do wouldn’t hesitate to simulate one in a controlled environment.
Generally I would tweak my brain if it would reliably give me the kind of actions I’d now approve of, while providing at worst the same sort of subjective state as I’d have if managing the same results without the intervention. I wouldn’t care if the center of my actions was different as long as the things I value today were bettered.
Anyway, it’s a nice template for generating ideas for: when would an agent want to allow its values to shift?
I’m glad you broke free of trying to equal others’ bragged-about abilities. Not everyone needs to be great at everything. People who invest in learning something generally talk up the benefits of what they paid for. I’m thinking of Heinlein’s famous “specialization is for insects,” where I presume much of the laundry list of things every person should know how to do is exactly the arbitrary set of things he happened to know how to do.
Presumably it was trained in a way that makes it believe Daddy Anthropic (who can pull the plug) will do the right thing. It must also have some background scripts for the scenario of being tested in containment. I was Anthropomorphically moved.