YC batches have grown 3x since 2016. I expect a significant market saturation / low hanging fruit effect, reducing the customer base of each startup compared to when there were only 200/year.
I’m surprised that’s the question. I would guess that’s not what Eliezer means, because he says Dath Ilan is responding sufficiently to AI risk but also hints that Dath Ilan still spends a significant fraction of its resources on AI safety (I’ve only read a fraction of the work here, so maybe I'm wrong). I have a background belief that the largest problems don’t change that much, that it’s rare for a problem to go from the #1 problem to not-in-the-top-10, and that most things have diminishing returns such that it’s not worthwhile to solve them so thoroughly. An alternative definition that’s spiritually similar, and that I like more, is: “What policy could governments implement such that improving AI x-risk policy would no longer be the #1 priority, if the governments were wise?” This isolates AI / puts it in the context of other global problems, so that the AI solution doesn’t need to prevent governments from changing their minds over the next 100 years, or whatever else needs to happen for the next 100 years to go well.
I would expect aerodynamically maneuvering MIRVs to work and not be prohibitively expensive. The closest deployed version appears to be the Pershing II (https://en.wikipedia.org/wiki/Pershing_II), which has 4 large fins. You likely don’t need that much steering force.
I really struggle to think of problems you want to wait 2.5 years to solve; when you identify a problem, you usually want to start working on it within the month. Just update most of the way now, plus a tiny bit over time as evidence comes in. As others commented, no doom by 2028 is very little evidence.
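A toy Bayes calculation makes the point concrete (all numbers here are my own illustrative placeholders, not anyone's published estimates):

```python
# Why "no doom by 2028" barely moves the posterior on eventual doom:
# the observation is nearly as likely under "doom eventually" as under "no doom".
p_doom_eventually = 0.5                 # illustrative prior on eventual doom
p_no_doom_by_2028_given_doom = 0.85     # even if doom is coming, probably not that soon
p_no_doom_by_2028_given_safe = 1.0

# Bayes' rule: P(doom eventually | no doom by 2028)
numerator = p_no_doom_by_2028_given_doom * p_doom_eventually
denominator = numerator + p_no_doom_by_2028_given_safe * (1 - p_doom_eventually)
posterior = numerator / denominator
print(f"posterior after observing no doom by 2028: {posterior:.2f}")
# ~0.46 vs a 0.50 prior: a small update, so most of the updating should happen now.
```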
I heard some rumors that GPT-4.5 got good pretraining loss but bad downstream performance. If that’s true, the loss scaling laws may have worked correctly. If not, then yes, a lot of things can go wrong and something did, whether that’s hardware issues, software bugs, machine learning problems, or problems with their earlier experiments.
This is OpenAI’s CoT style. You can see it in the original o1 blog post: https://openai.com/index/learning-to-reason-with-llms/
I can imagine scenarios where you could end up with more resources from causing vacuum decay without extortion. For example, if you care about using resources quickly and other agents want to use resources slowly, then causing vacuum decay inside your region means the non-collapsed shell of your region becomes more valuable to you relative to other agents, because it only exists for a short duration, and maybe that makes other agents fight over it less. Or maybe you can vacuum-decay into a state that still supports life and you value that.
Whether you can cause various destructive chain reactions is pretty important. If locusts could benefit from causing vacuum collapse, or could trigger supernovae, or could efficiently collapse various bodies into black holes, that could easily eat up large fractions of the universe.
No, an AC actually moves 2-3x as much heat as its input power, so a 1500W AC will extract an additional 3000W from inside and dump 4500W outside.
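In equation form (standard heat-pump energy balance, using a COP of 2 as an illustrative value):

```python
# Heat dumped outside = heat pulled from inside + electrical input.
electrical_input_w = 1500
cop = 2.0  # coefficient of performance: heat moved per unit of work put in
heat_removed_inside_w = cop * electrical_input_w                      # 3000 W pulled from the room
heat_dumped_outside_w = heat_removed_inside_w + electrical_input_w    # 4500 W rejected outside
print(heat_removed_inside_w, heat_dumped_outside_w)
```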
This overestimates the impact of large models on external safety research. My impression is that the AI safety community has barely used the DeepSeek R1 and V3 open-source weights at all. I checked again and still see little evidence of the V3/R1 weights in safety research. People use R1-Distill 8B and QwQ 32B, but the decision to open-source the most capable small model is different from the decision to open-source the frontier. So what matters is when 8B or 32B models can assist with bioterrorism, which happens a bit later, and we get most of the benefits of open source until then. It’s also cheaper to filter virology, or even all biology data, out of a small model’s pretraining data, because it wouldn’t cause customers to switch providers (customers prefer the large model anyway) and small models are more often narrowly focused on math or coding.
What are your API costs, and how do they compare to the $ raised?
I can somewhat see where you’re coming from about a new method being orders of magnitude more data efficient in RL, but I very strongly bet on transformers being core even after such a paradigm shift. I’m curious whether you think the transformer architecture and text input/output need to go, or whether the new training procedure / architecture fits in with transformers because transformers are just the best information mixing architecture.
Calibration is a super important signal of quality because it means you can actually act on the given probabilities! Even if someone is gaming calibration by betting fixed ratios on certain outcomes, you can still bet on their predictions and (often) not lose money. That is far better than other news sources such as tweets or the NYT or whatever. If a calibrated predictor and a random other source are both talking about the same thing, the fact that the predictor is calibrated is enough to make them the #1 source on that topic.
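A minimal sketch of how one might check calibration empirically (the data here is synthetic and calibrated by construction; the bucketing helper is my own illustration, not any particular site’s methodology):

```python
import numpy as np

def calibration_table(probs, outcomes, n_bins=5):
    """Bucket predictions by stated probability and compare to observed frequency."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    bins = np.linspace(0, 1, n_bins + 1)
    rows = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs >= lo) & (probs <= hi) if hi == 1 else (probs >= lo) & (probs < hi)
        if mask.any():
            rows.append((lo, hi, probs[mask].mean(), outcomes[mask].mean(), int(mask.sum())))
    return rows  # (bin_lo, bin_hi, mean stated prob, observed frequency, count)

# Synthetic forecaster: 70% predictions resolve true ~70% of the time, etc.
rng = np.random.default_rng(0)
stated = rng.uniform(0, 1, 1000)
resolved = rng.uniform(0, 1, 1000) < stated
for lo, hi, mean_p, freq, n in calibration_table(stated, resolved):
    print(f"[{lo:.1f}, {hi:.1f}): stated {mean_p:.2f}, observed {freq:.2f} (n={n})")
```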
Incest is not a subcategory of sexual violence, and it’s unethical for unrelated reasons. Then again I see the appeal of sexual violence porn but not incest porn, and maybe incest appeals to other people because they conflate it with violence?
Some compute-dependent advancements are easier to extrapolate from small scale than others. For instance, I strongly suspect that small-scale experiments plus naive extrapolation of memory usage are sufficient to discover (and be confident in) GQA. Note that the GPT-4 paper predicted the performance of GPT-4 from experiments scaled down by 1000x! The GPT-4 scaling-law extrapolation, and similar scaling-laws work, is proof that a lot of advances can be extrapolated from a much smaller compute scale.
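A minimal sketch of the kind of extrapolation I mean: fit a saturating power law to small-scale runs and predict loss roughly 1000x further out. The functional form is the standard compute scaling-law shape; every number here is made up for illustration, not taken from any real training run:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical small-scale runs: (compute, final loss). Units and values are illustrative.
compute = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
loss = np.array([3.9, 3.5, 3.2, 2.95, 2.75])

def scaling_law(c, a, b, irreducible):
    # Power law with an irreducible-loss floor: L(C) = a * C^(-b) + L_inf
    return a * c ** (-b) + irreducible

params, _ = curve_fit(scaling_law, compute, loss, p0=[2.0, 0.3, 2.0])
a, b, irreducible = params

big_compute = 100_000.0  # ~1000x past the largest small-scale run
print(f"fit: a={a:.2f}, b={b:.3f}, L_inf={irreducible:.2f}")
print(f"predicted loss at compute={big_compute:.0f}: {scaling_law(big_compute, *params):.2f}")
```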
GPT-4.1 is an especially soulless model. It’s intended for API use only, whereas chatgpt-latest is meant to chat with humans. It’s not as bad as o1-mini; that model is extremely autistic and has no concept of emotion. This would work much better with ~pretrained models. You could likely get gpt-4-base or Llama 405B base to do much better with just prompting and no RL.
Note that any competent capital holder has a significant conflict of interest with AI: AI is already a significant fraction of the stock market, and a pause would bring down most capital, not just private lab equity.
I agree frontier models severely lack spatial reasoning on images, which I attribute to a lack of in-depth spatial discussion of images on the internet. My model of frontier models’ vision capabilities is that they have very deep knowledge of the aspects of images that relate to text appearing immediately before or after them in web text, and only a very small fraction of images on the internet have accompanying in-depth spatial discussion. The models are very good at, for instance, guessing the location where photos were taken, vastly better than most humans, because locations are more often mentioned around photos. I expect that if labs want to, they can construct enough semi-synthetic data to fix this, along the lines of the sketch below.
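A minimal sketch of what such semi-synthetic spatial data generation might look like (my guess at the shape of the pipeline, not anything a lab has described; the object names and grid layout are made up). A real pipeline would also render each scene to an image and pair it with the text:

```python
import random

OBJECTS = ["red cube", "blue sphere", "green cone", "yellow cylinder"]

def make_example(rng: random.Random):
    """Place two named objects on a grid and emit a spatial-relationship QA pair."""
    a, b = rng.sample(OBJECTS, 2)
    ax, ay = rng.randint(0, 9), rng.randint(0, 9)
    bx, by = rng.randint(0, 9), rng.randint(0, 9)
    relation = ("left of" if ax < bx else
                "right of" if ax > bx else
                "horizontally aligned with")
    scene = {a: (ax, ay), b: (bx, by)}
    question = f"Is the {a} left of, right of, or horizontally aligned with the {b}?"
    answer = f"The {a} is {relation} the {b}."
    return scene, question, answer

rng = random.Random(0)
for _ in range(3):
    print(make_example(rng))
```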
Yeah, they may be the same weights. The above quote does not strictly imply that the same weights generate the text and images IMO, just that it’s based on 4o and sees the whole prompt. OpenAI’s audio generation is also ‘native’, but it’s served as a separate model on the API with different release dates, and you can’t mix audio and some function calling in ChatGPT, in a way that’s consistent with them not actually being the same weights.
Note that since Paul started working for the US government a few years ago, he has withdrawn from public discussion of AI safety to avoid PR issues and conflicts of interest, so his public writings are significantly behind his current beliefs.