newsletter.safe.ai
Dan H
This seems like a fun exercise, so I spent half an hour jotting down possibilities. I’m more interested in putting potential considerations on people’s radars and helping with brainstorming than I am in precision. None of these points are to be taken too seriously since this is fairly extemporaneous and mostly for fun.
2022
Multiple Codex alternatives are available. The financial viability of training large models is obvious.
Research models start interfacing with auxiliary tools such as browsers, Mathematica, and terminals.
2023
Large pretrained models are distinctly useful for sequential decision making (SDM) in interactive environments, displacing previous reinforcement learning research in much the same way BERT rendered most previous work in natural language processing wholly irrelevant. Now SDM methods don’t require as much tuning, can generalize with fewer samples, and can generalize better.
For all of ImageNet’s 1000 classes, models can reliably synthesize images that are realistic enough to fool humans.
Models have high enough accuracy to pass the multistate bar exam.
Models for contract review and legal NLP see economic penetration; it becomes a further source of economic value and consternation among attorneys and nontechnical elites. This indirectly catalyzes regulation efforts.
Programmers become markedly less positive about AI due to the prospect of it reducing demand for some of their labor.
~10 trillion parameter (nonsparse) models attain human-level accuracy on LAMBADA (a proxy for human-level perplexity) and expert-level accuracy on LogiQA (a proxy for nonsymbolic reasoning skills). With models of this size, multiple other capabilities (these benchmarks serve as proxies for many capabilities) are starting to be useful, whereas with smaller models these capabilities were too unreliable to lean on. (Speech recognition started “working” only after it crossed a certain reliability threshold.)
Generated data (math, code, models posing questions for themselves to answer) help ease data bottleneck issues since Common Crawl is not enough. From this, many capabilities are bootstrapped.
Elon re-enters the fight to build safe advanced AI.
2024
A major chatbot platform offers chatbots personified through video and audio.
Although forms of search/optimization are combined with large models for reasoning tasks, state-of-the-art models nonetheless only obtain approximately 40% accuracy on MATH.
Chatbots are able to provide better medical diagnoses than nearly all doctors.
Adversarial robustness for CIFAR-10 (assuming an attacker with eps=8/255) is finally over 85%.
Video understanding finally reaches human-level accuracy on video classification datasets like Something Something V2. This comports with the heuristic that video understanding is around 10 years behind image understanding.
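For concreteness on the robustness forecast above: eps=8/255 means an attacker may perturb each pixel (valued in [0, 1]) by at most 8/255 in L-infinity norm, and robust accuracy is measured against attacks like projected gradient descent (PGD). Here is a minimal, hedged PGD sketch in PyTorch; the tiny linear `model` and random inputs are stand-in assumptions for illustration, not a real CIFAR-10 classifier.

```python
# Minimal L-infinity PGD sketch at eps = 8/255 (the threat model above).
# The linear "model" and random data are illustrative assumptions.
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
    """Gradient ascent on the loss, projected into an L-inf ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()               # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)         # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                    # stay a valid image
    return x_adv.detach()

# Toy usage with CIFAR-10-shaped inputs (4 images, 3x32x32).
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = pgd_attack(model, x, y)
```

Robust accuracy in the forecast is then just the model’s accuracy on `x_adv` rather than on `x`.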
2025
Upstream vision advancements help autonomous driving but do not solve it for all US locations, as the long tail is really long.
ML models are competitive forecasters on platforms like Metaculus.
Nearly all AP high school homework and exam questions (including long-form questions) can be solved by answers generated from publicly available models. Similar models cut into typical Google searches since these models give direct and reliable answers.
Contract generation is now mostly automatable, further displacing attorneys.
2026
Machine learning systems become great at using Metasploit and other hacking tools, increasing the accessibility, potency, success rate, scale, stealth, and speed of cyberattacks. This gets severe enough to create global instability and turmoil. EAs did little to use ML to improve cybersecurity and reduce this risk.
In safety research labs in academe, we do not have a resource edge compared to the rest of the field.
We do not have large GPU clusters, so we cannot train GPT-2 from scratch or fine-tune large language models in a reasonable amount of time.
We also do not have many research engineers (currently zero) to help us execute projects. Some of us have safety projects from over a year ago on the backlog because there are not enough reliable people to help execute the projects.
These are substantial bottlenecks that more resources could resolve.
RE: “like I’m surprised if a clever innovation does more good than spending 4x more compute”
Earlier this year, DeBERTaV2 did better on SuperGLUE than models 10x the size and got state of the art.
Models such as DeBERTaV3 can do better on commonsense question answering tasks than models that are tens or several hundreds of times larger.
DeBERTaV3-large: Accuracy 84.6, Parameters 0.4B
T5-11B: Accuracy 83.5, Parameters 11B
Fine-tuned GPT-3: Accuracy 73.0, Parameters 175B
https://arxiv.org/pdf/2112.03254.pdf#page=5
Bidirectional models + training ideas + better positional encoding helped more than 4x compute would have.
Note I’m mainly using this as an opportunity to talk about ideas and compute in NLP.
I don’t know how big an improvement DeBERTaV2 is over SoTA.
DeBERTaV2 is pretty solid and mainly got its performance from an architectural change. Note the DeBERTa paper was initially uploaded in 2020, but it was updated early this year to include DeBERTa V2. The previous main popular SOTA on SuperGLUE was T5 (which beat RoBERTa). DeBERTaV2 uses 8x fewer parameters and 4x less compute than T5. DeBERTa’s high performance isn’t an artifact of SuperGLUE; in downstream tasks such as some legal NLP tasks it does better too.
Compared to unidirectional models on NLU tasks, bidirectional models do far better. On CommonsenseQA, a good task that’s been around for a few years, bidirectional models do far better than fine-tuned GPT-3. DeBERTaV3 differs from GPT-3 in roughly three ideas (its positional encoding, ELECTRA-style training, and bidirectionality, if I recall correctly), and it’s >400x smaller.
I agree with the overall sentiment that much of the performance is from brute compute, but even in NLP, ideas can help sometimes. For vision/continuous signals, algorithmic advances continue to account for much progress; ideas move the needle substantially more frequently in vision than in NLP.
For tasks where there is less traction, ideas are even more useful. Just to use a recent example, “the use of verifiers results in approximately the same performance boost as a 30x model size increase.” I think the initially proposed heuristic depends on how much progress has already been made on a task. For nearly solved tasks, the next incremental idea shouldn’t help much. On new hard tasks such as some maths tasks, scaling laws are worse and ideas will be a practical necessity. Not all the first ideas are obvious “low-hanging fruit” because it might take a while for the community to get oriented and find good angles of attack.
This is why we introduced X-Risk Sheets, a questionnaire that researchers should include in their paper if they’re claiming that their paper reduces AI x-risk. This way researchers need to explain their thinking and collect evidence that they’re not just advancing capabilities.
We now include these x-risk sheets in our papers. For example, here is an example x-risk sheet included in an arXiv paper we put up yesterday.
I should say formatting is likely a large contributing factor for this outcome. Tom Dietterich, an arXiv moderator, apparently had a positive impression of the content of your grokking analysis. However, research on arXiv will be more likely to go live if it conforms to standard (ICLR, NeurIPS, ICML) formatting and isn’t a blogpost automatically exported into a TeX file.
Here’s a continual stream of related arXiv papers available through reddit and twitter.
I am strongly in favor of our very best content going on arXiv. Both communities should engage more with each other.
What follows are suggestions for posting to arXiv.

As a rule of thumb, if the content of a blogpost didn’t take >300 hours of labor to create, then it probably should not go on arXiv. Maintaining a basic quality bar prevents arXiv from being overridden by people who like writing up many of their inchoate thoughts; publication standards are different for LW/AF than for arXiv. Even if a researcher spent many hours on the project, arXiv moderators do not want research that’s below a certain bar. arXiv moderators have reminded some professors that they will likely reject papers at the quality level of a Stanford undergraduate team project (e.g., http://cs231n.stanford.edu/2017/reports.html); consequently labor, topicality, and conforming to formatting standards are not sufficient for arXiv approval. Usually one’s first research project won’t be good enough for arXiv.

Furthermore, conceptual/philosophical pieces should probably be posted primarily in arXiv’s .CY section. For more technical deep learning content, do not make the mistake of only putting it on .AI; it should probably go on .LG (machine learning), .CV (computer vision), or .CL (NLP). arXiv’s .ML section is for more statistical/theoretical machine learning audiences.

For content to be approved without complications, it should likely conform to standard (ICLR, ICML, NeurIPS, CVPR, ECCV, ICCV, ACL, EMNLP) formatting. This means automatic blogpost exporting is likely not viable. In trying to diffuse ideas to the broader ML community, we should avoid making the arXiv moderators mad at us.
Salient examples are robustness and RLHF. I think following the implied strategy, namely avoiding any safety work that improves capabilities (“capability externalities”), would be a bad idea.
There are plenty of topics in robustness, monitoring, and alignment that improve safety differentially without improving vanilla upstream accuracy: most adversarial robustness research does not have general capabilities externalities; topics such as transparency, trojans, and anomaly detection do not; honesty efforts so far do not have externalities either. Here is analysis of many research areas and their externalities.
Even though the underlying goal is to improve the safety-capabilities ratio, this is not the best decision-making policy. Given uncertainty, the large incentives for making models superhuman, motivated reasoning, and competition pressures, aiming for minimal general capabilities externalities should be what influences real-world decision-making (playing on the criterion of rightness vs. decision procedure distinction).
If safety efforts are to scale to a large number of researchers, the explicit goal should be to measurably avoid general capabilities externalities rather than, say, “pursue particular general capabilities if you expect that it will help reduce risk down the line,” though perhaps I’m just particularly risk-averse. Without putting substantial effort in finding out how to avoid externalities, the differentiation between safety and capabilities at many places is highly eroded, and in consequence some alignment teams are substantially hastening timelines. For example, an alignment team’s InstructGPT efforts were instrumental in making ChatGPT arrive far earlier than it would have otherwise, which is causing Google to become substantially more competitive in AI and causing many billions to suddenly flow into different AGI efforts. This is decisively hastening the onset of x-risks. I think minimal externalities may be a standard that is not always met, but I think it should be more strongly incentivized.
making them have non-causal decision theories
How does it distinctly do that?
Sorry, I am just now seeing this since I’m on here irregularly.
So any robustness work that actually improves the robustness of practical ML systems is going to have “capabilities externalities” in the sense of making ML products more valuable.
Yes, though I do not equate general capabilities with making something more valuable. As written elsewhere,
It’s worth noting that safety is commercially valuable: systems viewed as safe are more likely to be deployed. As a result, even improving safety without improving capabilities could hasten the onset of x-risks. However, this is a very small effect compared with the effect of directly working on capabilities. In addition, hypersensitivity to any onset of x-risk proves too much. One could claim that any discussion of x-risk at all draws more attention to AI, which could hasten AI investment and the onset of x-risks. While this may be true, it is not a good reason to give up on safety or keep it known to only a select few. We should be precautious but not self-defeating.
I’m discussing “general capabilities externalities” rather than “any bad externality,” especially since the former is measurable and a dominant factor in AI development. (Identifying any sort of externality can lead people to say we should defund various useful safety efforts because it can lead to a “false sense of security,” which safety engineering reminds us this is not the right policy in any industry.)
I disagree even more strongly with “honesty efforts don’t have externalities:” AI systems confidently saying false statements is a major roadblock to lots of applications (e.g. any kind of deployment by Google), so this seems huge from a commercial perspective.
I distinguish between honesty and truthfulness; I think truthfulness has way too many externalities since it is too broad. For example, I think Collin et al.’s recent paper, an honesty paper, does not have general capabilities externalities. As written elsewhere,
Encouraging models to be truthful, when defined as not asserting a lie, may be desired to ensure that models do not willfully mislead their users. However, this may increase capabilities, since it encourages models to have better understanding of the world. In fact, maximally truth-seeking models would be more than fact-checking bots; they would be general research bots, which would likely be used for capabilities research. Truthfulness roughly combines three different goals: accuracy (having correct beliefs about the world), calibration (reporting beliefs with appropriate confidence levels), and honesty (reporting beliefs as they are internally represented). Calibration and honesty are safety goals, while accuracy is clearly a capability goal. This example demonstrates that in some cases, less pure safety goals such as truth can be decomposed into goals that are more safety-relevant and those that are more capabilities-relevant.
I agree that interpretability doesn’t always have big capabilities externalities, but it’s often far from zero.
To clarify, I cannot name a time a state-of-the-art model drew its accuracy-improving advancement from interpretability research. I think it hasn’t had a measurable performance impact, and anecdotally empirical researchers aren’t gaining insights from that body of work that translate to accuracy improvements. It looks like a reliably beneficial research area.

It also feels like people are using “capabilities” to just mean “anything that makes AI more valuable in the short term,”
I’m taking “general capabilities” to be something like
general prediction, classification, state estimation, efficiency, scalability, generation, data compression, executing clear instructions, helpfulness, informativeness, reasoning, planning, researching, optimization, (self-)supervised learning, sequential decision making, recursive self-improvement, open-ended goals, models accessing the Internet, …
These are extremely general instrumentally useful capabilities that improve intelligence. (Distinguish from models that are more honest, power averse, transparent, etc.) For example, ImageNet accuracy is the main general capabilities notion in vision, because it’s extremely correlated with downstream performance on so many things. Meanwhile, an improvement for adversarial robustness harms ImageNet accuracy and just improves adversarial robustness measures. If it so happened that adversarial robustness research became the best way to drive up ImageNet accuracy, then the capabilities community would flood in and work on it, and safety people should then instead work on other things.
Consequently, what counts as safety should be informed by how the empirical results are looking, especially since empirical phenomena can be so unintuitive or hard to predict in deep learning.
Empiricists think the problem is hard, AGI will show up soon, and if we want to have any hope of solving it, then we need to iterate and take some necessary risk by making progress in capabilities while we go.
This may be so for the OpenAI alignment team’s empirical researchers, but other empirical researchers note we can work on several topics to reduce risk without substantially advancing general capabilities. (As far as I can tell, they are not working on any of the following topics, rather focusing on an avenue to scalable oversight which, as instantiated, mostly serves to make models generally better at programming.)
Here are four example areas with minimal general capabilities externalities (descriptions taken from Open Problems in AI X-Risk):
Trojans—AI systems can contain “trojan” hazards. Trojaned models behave typically in most situations, but when specific secret situations are met, they reliably misbehave. For example, an AI agent could behave normally, but when given a special secret instruction, it could execute a coherent and destructive sequence of actions. In short, this area is about identifying hidden functionality embedded in models that could precipitate a treacherous turn. Work on detecting trojans does not improve general language model or image classifier accuracy, so the general capabilities externalities are moot.
Anomaly detection—This area is about detecting potential novel hazards such as unknown unknowns, unexpected rare events, or emergent phenomena. (This can be used for tripwires, detecting proxy gaming, detecting trojans, malicious actors, possibly for detecting emergent goals.) In anomaly detection, general capabilities externalities are easy to avoid.
Power Aversion—This area is about incentivizing models to avoid gaining more power than is necessary and analyzing how power trades off with reward. This area is deliberately about measuring and making sure highly instrumentally useful/general capabilities are controlled.
Honesty—Honest AI involves creating models that only output what they hold to be true. It also involves determining what models hold to be true, perhaps by analyzing their internal representations. Honesty is a narrower concept than truthfulness and is deliberately chosen to avoid capabilities externalities, since truthful AI is usually a combination of vanilla accuracy, calibration, and honesty goals. Optimizing vanilla accuracy is optimizing general capabilities. When working towards honesty rather than truthfulness, it is much easier to avoid capabilities externalities.
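As a concrete instance of the anomaly detection area above, one widely used baseline scores inputs by their maximum softmax probability (MSP): a classifier that is unusually unconfident on an input is flagged as potentially anomalous. A minimal sketch, with made-up illustrative logits (the specific numbers are assumptions, not results from any model):

```python
# MSP anomaly-detection baseline sketch: higher score = more anomalous.
# The example logits below are fabricated for illustration only.
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def anomaly_score(logits):
    """Negative max softmax probability: low confidence -> high anomaly score."""
    return -softmax(logits).max(axis=-1)

in_dist = np.array([[8.0, 0.5, 0.2]])   # confident, peaked prediction
ood     = np.array([[1.1, 1.0, 0.9]])   # diffuse, unconfident prediction
scores = anomaly_score(np.vstack([in_dist, ood]))
```

Thresholding such a score gives a tripwire: inputs above the threshold get flagged for review rather than acted on, without changing the classifier’s accuracy at all.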
More general learning resources are at this course, and more discussion of safety vs capabilities is here (summarized in this video).
When ML models get more competent, ML capabilities researchers will have strong incentives to build superhuman models. Finding superhuman training techniques would be the main thing they’d work on. Consequently, when the problem is more tractable, I don’t see why it’d be neglected by the capabilities community—it’d be unreasonable for profit maximizers not to have it as a top priority when it becomes tractable. I don’t see why alignment researchers have to work in this area with high externalities now and ignore other safe alignment research areas (in practice, the alignment teams with compute are mostly just working on this area). I’d be in favor of figuring out how to get superhuman supervision for specific things related to normative factors/human values (e.g., superhuman wellbeing supervision), but researching superhuman supervision simpliciter will be the aim of the capabilities community.
Don’t worry, the capabilities community will relentlessly maximize vanilla accuracy, and we don’t need to help them.
“AI Safety” which often in practice means “self driving cars”
This may have been true four years ago, but ML researchers at leading labs rarely directly work on self-driving cars (e.g., research on sensor fusion). AV has not been hot in quite a while. Fortunately, now that AGI-like chatbots are popular, we’re moving out of the realm of talking about making very narrow systems safer. The association with AV was not that bad, since it was about getting many nines of reliability/extreme reliability, which was a useful subgoal. Unfortunately, the world has not been able to make a DL model completely reliable in any specific domain (even MNIST).
Of course, they weren’t talking about x-risks, but neither are industry researchers using the word “alignment” today to mean they’re fine-tuning a model to be more knowledgeable or making models better satisfy capabilities wants (sometimes dressed up as “human values”).
If you want a word that reliably denotes catastrophic risks that is also mainstream, you’ll need to make catastrophic risk ideas mainstream. Expect it to be watered down for some time, or expect it not to go mainstream.
Thermodynamics theories of life can be viewed as a generalization of Darwinism, though in my opinion the abstraction ends up being looser/less productive, and I think it’s more fruitful just to talk in evolutionary terms directly.
You might find these useful:
Could these sorts of posts have more thorough related works sections? It’s usually standard for related works in empirical papers to mention 10+ works. Update: I was looking for a discussion of https://arxiv.org/abs/2212.04089, assumed it wasn’t included in this post, and many minutes later finally found a brief sentence about it in a footnote.
Background for people who understandably don’t habitually read full empirical papers:
Related Works sections in empirical papers tend to include many comparisons in one coherent place. This helps contextualize the work and helps busy readers quickly identify whether it is meaningfully novel relative to the literature. Related works sections must therefore also give a good account of the literature. This helps us more easily understand how much of an advance a given work is. I’ve seen a good number of papers steering with latent arithmetic in the past year, but I would not be surprised if this is the first time many readers of AF/LW have seen the idea, which would make this paper seem especially novel. A good related works section would more accurately and quickly communicate how novel this is. I don’t think this norm is gatekeeping or pedantic; it becomes essential when the number of papers becomes high.

The total number of cited papers throughout a paper is different from the number of papers in the related works. If a relevant paper is buried somewhere randomly in a paper and not explicitly contrasted with in the related works section, that is usually penalized in peer review.
Yes, I was—good catch. Earlier and now, unusual formatting and a nonstandard related works section are causing confusion. Even so, the work after the break is much older. The comparison to works such as https://arxiv.org/abs/2212.04089 is not in the related works and gets only a sentence in a footnote: “That work took vectors between weights before and after finetuning on a new task, and then added or subtracted task-specific weight-diff vectors.”
Is this a big difference? I really don’t know; it’d be helpful if they contrasted more. Is this work very novel and useful, and that one isn’t any good for alignment? Or did Ludwig Schmidt (not x-risk pilled) and coauthors in Editing Models with Task Arithmetic (made public last year and already published) come up with an idea similar to, according to a close observer, “the most impressive concrete achievement in alignment I’ve seen”? If so, what does that say about the need to be x-risk motivated to do relevant research, and what does this say about group epistemics/ability to spot relevant progress if it’s not posted on the AF?
Context: I’m an OpenPhil fellow who is doing work on robustness, machine ethics, and forecasting.
I published several papers on the research called for in Concrete Problems in AI Safety and OpenPhil’s/Steinhardt’s AI Alignment Research Overview. The work helped build a trustworthy ML community and aimed at reducing accident risks given very short AI timelines. Save for the first paper I helped with (when I was trying to learn the ropes), the motivation for the other dozen or so papers was always safety.
These papers have nothing to do with RL and are about DL, and they do not fit in with the type of technical research shared on this forum, which might be why these are not considered “highly relevant.” Some (not all) of the OpenPhil fellows are working on safety, though with OpenPhil’s broader research agenda.