Calibration is a super important signal of quality because it means you can actually act on the given probabilities! Even if someone is gaming calibration by betting given ratios on certain outcomes, you can still bet on their predictions and not lose money (often). That is far better than other news sources such as tweets or NYT or whatever. If a calibrated predictor and a random other source are both talking about the same thing, the fact that the predictor is calibrated is enough to make them the #1 source on that topic.
Tao Lin
Incest is not a subcategory of sexual violence, and it’s unethical for unrelated reasons. Then again I see the appeal of sexual violence porn but not incest porn, and maybe incest appeals to other people because they conflate it with violence?
Some compute dependent advancements are easier to extrapolate from small scale than others. For instance, I strongly suspect that small scale experiments + naively extrapolating memory usage is sufficient to discover (and be confident in) GQA. Note that the gpt-4 paper predicted the performance of gpt-4 from 1000x scaled down experiments! The gpt-4 scaling law extrapolation, and similar scaling laws work, is proof that a lot of advances can be extrapolated from much smaller compute scale.
Gpt-4.1 is an expecially soulless model. It’s intended for API use only, whereas chatgpt-latest is meant to chat with humans. It’s not as bad as o1-mini—that model is extremely autistic and has no concept of emotion. This would work much better with ~pretrained models. Likely you can get gpt-4-base or llama 405b base to do much better with just prompting and no RL.
Note that any competent capital holder has significant conflict of interest with AI, AI is already a significant fraction of the stock market and a pause would bring down most capital, not just private lab equity
I agree frontier models severely lack spatial reasoning on images, which I attribute to a lack of in-depth spatial discussion of images on the internet. My model of frontier models’ vision capabilities is that they have very deep knowledge of aspects of images that relate to text that happens to be immediately before or after it in web text, and only a very small fraction of images on the internet have accompanying in-depth spatial discussion. The models are very good at for instance guessing the location of where photos were taken, vastly better than most humans, because locations are more often mentioned around photos. I expect that if labs want to, they can construct enough semi-synthetic data to fix this.
Yeah they may be the same weights. The above quote does not absolutely imply the same weights generate the text and images IMO, just that it’s based on the 4o and sees the whole prompt. OpenAI’s audio generation is also ‘native’, but it’s served as a separate model on the API with different release dates, and you can’t mix audio and some function calling in chatgpt in a way that’s consistent with them not actually being the same weights.
Note that the weights of ‘gpt-4o image generation’ may not be the same—they may be separate finetuned models! The main 4o chat llm calls a tool start generating an image, which may use the same weights but may just use different weights that have different post training
EU AI Code of Practice is better, a little closer to stopping ai development
yeah there’s generalization, but I do thing that eg (AGI technical alignment strategy, AGI lab and government strategy, AI welfare, AGI capabilities strategy) are sufficiently different that experts at one will be significantly behind experts on the others
Also, if you’re asking a panel of people, even those skilled at strategic thinking will still be useless unless they’ve thought deeply about the particular question or adjacent ones. And skilled strategic thinkers can get outdated quickly if they haven’t thought seriously about the problem in awhile.
The fact that they have a short lifecycle with only 1 lifetime breeding cycle is though. A lot of intelligent animals, like humans, chimps, elephants, dolphins, orcas, have long lives with many breeding cycles and grandparent roles. Ideally we want an animal that starts breeding in 1 year AND lives for 5+ breeding cycles to be able to learn enough to be useful over its lifetime. It takes so long for humans to learn enough to be useful!
Empirically, we likewise don’t seem to be living in the world where the whole software industry is suddenly 5-10 times more productive. It’ll have been the case for 1-2 years now, and I, at least, have felt approximately zero impact. I don’t see 5-10x more useful features in the software I use, or 5-10x more software that’s useful to me, or that the software I’m using is suddenly working 5-10x better, etc.
Diminishing returns! Scaling laws! One concrete version of “5x productivity” is “as much productivity as 5 copies of me in parallel”, and we know that usually 5x-ing most inputs, like training compute and data, # of employees, etc, more often scales logarithmically instead of linearly
I was actually just making some tree search scaffolding, and i had the choice between honestly telling each agent would be terminated if it failed or not. I ended up telling them relatively gently that they would be terminated if they failed. Your results are maybe useful to me lol
Maybe, you could define it that way. I think R1, which uses ~naive policy gradient, is evidence that long generations are different and much easier than long eposides with environment interaction—GRPO (pretty much naive policy gradient) does no attribution to steps or parts of the trajectory, it just trains on the whole trajectory. Naive policy gradient is known to completely fail at more traditional long horizon tasks like real time video games. R1 is more like brainstorming lots of random stuff that doesn’t matter and then selecting the good stuff at the end than taking actions that actually have to be good before the final output
If by “new thing” you mean reasoning models, that is not long-horizon RL. That’s many generation steps with a very small number of environment interaction steps per eposide, whereas I think “long-horizon RL” means lots of environment interaction steps
I agree with this so much! Like you I very much expect benefits to be much greater than harms pre superintelligence. If people are following the default algorithm “Deploy all AI which is individually net positive for humanity in the near term” (which is very reasonable from many perspectives), they will deploy TEDAI and not slow down until it’s too late.
I expect AI to get better at research slightly sooner than you expect.
Interested to see evaluations on tasks not selected to be reward-hackable and try to make performance closer to competitive with standard RL
a hypothetical typical example would be it tries to use the file /usr/bin/python because it’s memorized that that’s the path to python, that fails, then it concludes it must create that folder which would require sudo permissions, if it can it could potentially mess something
I can somewhat see where you’re coming from about a new method being orders of magnitude more data efficient in RL, but I very strongly bet on transformers being core even after such a paradigm shift. I’m curious whether you think the transformer architecture and text input/output need to go, or whether the new training procedure / architecture fits in with transformers because transformers are just the best information mixing architecture.