CEO of Convergence, an x-risk research and impact organization.
David_Kristoffersson
This seems like a valuable research question to me. I have a project proposal in a drawer of mine that is strongly related: “Entanglement of AI capability with AI safety”.
I expect the event to have no particular downside risks, and to give interesting input and spark ideas in experts and novices alike. Mileage will vary, of course. Unconferences foster dynamic discussion and a living agenda. If it’s risky to host this event, then I expect AI strategy and forecasting meetups and discussions at EAG to be risky and they should also not be hosted.
Other attendees of AIXSU and I pay careful attention to potential downside risks. I also think it’s important we don’t strangle open intellectual advancement. We need to figure out what we should talk about, not conclude that we shouldn’t talk.
AISC: To clarify, AI Safety Camp is different and places greater trust in the judgement of novices, since teams are generally run entirely by novices. The person who proposed running a strategy AISC found the reactions from experts to be mixed. He also reckoned the event would overlap with the existing AI safety camps, since they already include strategy teams.
Potential negative side effects of strategy work are a very important topic. I hope to discuss it with attendees at the unconference!
Hello.
I’m currently attempting to read through the MIRI research guide in order to contribute to one of the open problems, starting from the Basics section. I’m emulating many of Nate’s techniques. I’ll post reviews of material from the research guide on LessWrong as I work through it.
I’m mostly posting here now just to note this. I can be terse at times.
See you there.
The amount of effort going into AI as a whole ($10s of billions per year) is currently ~2 orders of magnitude larger than the amount of effort going into the kind of empirical alignment I’m proposing here, and at least in the short-term (given excitement about scaling), I expect it to grow faster than investment into the alignment work.
There’s a reasonable argument (shoutout to Justin Shovelain) that work such as this, done by AI alignment people, risks being closer to AGI than standard commercial or academic research, and would therefore accelerate AGI more than average AI research does. Thus, $10s of billions per year into general AI is not quite the right comparison, because little of that money goes to work that is “close to AGI”.
That said, on balance, I’m personally in favor of the work this post outlines.
Your relationship with other people is a macrocosm of your relationship with yourself.
I think there’s something to that, but it’s not that general. For example, some people can be very kind to others but harsh with themselves. Some people can be cruel to others but lenient to themselves.
If you can’t get something nice, you can at least get something predictable
The desire for the predictable is what Autism Spectrum Disorder is all about, I hear.
I think the healthy and compassionate response to this article would be to focus on addressing the harms victims have experienced. So I find myself disappointed by much of the voting and comment responses here.
I agree that the Bloomberg article doesn’t acknowledge that most of the harms it lists were perpetrated by people who have already mostly been kicked out of the community, and that it uses some unfair framings. But I think the bigger issue is the harms experienced by women that may not have been addressed: unreported cases, and insufficient measures taken against reported ones. I don’t know if enough has been done, so it seems unwise to minimize the article and the people who are upset about the sexual misconduct. And even if enough has been done in terms of responses and policy, I would prefer to see more compassion.
Looks promising to me. Technological development isn’t by default good.
Though I agree with the other commenters that this could fail in various ways. For one thing, if a policy like this is introduced without guidance on how to analyze the societal implications, people will think of wildly different things. ML researchers aren’t by default going to have the training to analyze societal consequences. (Well, who does? We should develop better tools here.)
We can subdivide the security story based on the ease of fixing a flaw if we’re able to detect it in advance. For example, vulnerability #1 on the OWASP Top 10 is injection, which is typically easy to patch once it’s discovered. Insecure systems are often right next to secure systems in program space.
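To make the injection example concrete, here’s a minimal sketch (the table and function names are just hypothetical placeholders; plain Python with the standard-library sqlite3 module) of an injectable query next to its patched counterpart. The two versions differ by essentially one line, which is what “right next to each other in program space” looks like in practice:

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Flaw: user input is spliced directly into the SQL string,
    # so input like "x' OR '1'='1" matches every row (injection).
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_patched(conn: sqlite3.Connection, username: str):
    # Patch: a parameterized query treats the input purely as data.
    # One changed line moves the program from insecure to secure.
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])
    payload = "x' OR '1'='1"
    print(find_user_vulnerable(conn, payload))  # leaks both rows
    print(find_user_patched(conn, payload))     # returns []
```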
Insecure systems are right next to secure systems, and many flaws are found. Yet the larger systems (the company running the software, the economy, etc.) manage to correct for this somehow, because there are mechanisms in those larger systems poised to patch the software when flaws are discovered. Perhaps we could adapt and optimize this flaw-exploit-patch loop from security as a technique for AI alignment.
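As a rough sketch of what fitting that loop into an alignment workflow could look like (the function name and the red-team/patch callbacks below are purely hypothetical placeholders, not an existing method), the skeleton is just: search for a flaw, patch it, and repeat until the search comes up empty or the budget runs out:

```python
from typing import Any, Callable, Optional

def flaw_exploit_patch_loop(
    system: Any,
    find_flaw: Callable[[Any], Optional[Any]],  # red-team / fuzz / audit step
    patch: Callable[[Any, Any], Any],           # fix the specific flaw that was found
    max_rounds: int = 10,
) -> Any:
    """Security-style loop: keep searching for flaws and patching them."""
    for _ in range(max_rounds):
        flaw = find_flaw(system)
        if flaw is None:
            return system  # no flaw found within the search budget
        system = patch(system, flaw)
    return system  # budget exhausted; unknown flaws may remain
```

The hard part for alignment, of course, is that find_flaw has to surface flaws in advance of deployment rather than after they are exploited in the wild.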
If the security story is what we are worried about, it could be wise to try & develop the AI equivalent of OWASP’s Cheat Sheet Series, to make it easier for people to find security problems with AI systems. Of course, many items on the cheat sheet would be speculative, since AGI doesn’t actually exist yet. But it could still serve as a useful starting point for brainstorming.
This sounds like a great idea to me. Software security has a very well-developed knowledge base at this point, and since AI is software, there should be many good insights to port.
What possibilities aren’t covered by the taxonomy provided?
Here’s one that occurred to me quickly: Drastic technological progress (presumably involving AI) destabilizes society and causes strife. In this environment with more enmity, safety procedures are neglected and UFAI is produced.
And boxing, by the way, means giving the AI zero power.
No, hairyfigment’s answer was entirely appropriate. Zero power would mean zero effect. Any kind of interaction with the universe means some level of power. Perhaps in the future you should say “nearly zero power” instead, so as to avoid misunderstanding on the part of others, since taking you literally on the “zero” is apparently “legalistic”.
As to the issues with nearly zero power:
A superintelligence with nearly zero power could turn out to have a heck of a lot more power than you expect.
The incentives to tap more perceived utility by unboxing the AI or building other unboxed AIs will be huge.
Mind, I’m not arguing that there is anything wrong with boxing. What I’m arguing is that it’s wrong to rely only on boxing. I recommend you read some more material on AI boxing and Oracle AI. Don’t miss out on the references.
I quite like the concept of alignment through coherence between the “coherence factors”!
“Wisdom” has many meanings. I would use the word differently to how the article is using it.
I think I agree with your technological argument, but I’d take your 6 months and 2.5 years and multiply them by a factor of 2-4 (i.e., roughly 1-2 years and 5-10 years, respectively).
Part of it is likely that we are conceiving of the scenarios a bit differently. I might be including some additional practical considerations.
Yes, that’s most of the 2-5%.
Thanks for the tip. Two other books on the subject that seem to be appreciated are Introduction to Set Theory by Karel Hrbacek and Thomas Jech, and Classic Set Theory: For Guided Independent Study by Derek Goldrei.
Edit: math.se weighs in: http://math.stackexchange.com/a/264277/255573
The author of the Teach Yourself Logic study guide agrees with you about reading multiple sources:
I very strongly recommend tackling an area of logic (or indeed any new area of mathematics) by reading a series of books which overlap in level (with the next one covering some of the same ground and then pushing on from the previous one), rather than trying to proceed by big leaps.
In fact, I probably can’t stress this advice too much, which is why I am highlighting it here. For this approach will really help to reinforce and deepen understanding as you re-encounter the same material from different angles, with different emphases.
I agree with the general shape of your argument, including that Cotra and Carlsmith are likely to overestimate the compute of the human brain, and that frontier algorithms are not as efficient as algorithms could be.
My best guess is that a frontier model of the approximate expected capability of GPT-5 or GPT-6 (equivalently Claude 4 or 5, or similar advances in Gemini) will be sufficient to automate algorithmic exploration to the extent that the necessary algorithmic breakthroughs will be made.
But I disagree that it will happen this quickly. :)
Some quick musings on alternatives for the “self-affecting” info hazard type:
Personal hazard
Self info hazard
Self hazard
Self-harming hazard
Nice work, Wei Dai! I hope to read more of your posts soon.
However I haven’t gotten much engagement from people who work on strategy professionally. I’m not sure if they just aren’t following LW/AF, or don’t feel comfortable discussing strategically relevant issues in public.
A bit of both, presumably. I would guess a lot of it comes down to incentives, perceived gain, and habits. There’s no particular pressure to discuss things on LessWrong or the EA Forum. LessWrong isn’t perceived as your main peer group. And if you’re at FHI or OpenAI, you already have plenty of contact with people who can provide quick feedback.
Here’s the Less Wrong post for the AI Safety Camp!
I would reckon: no single AI safety method “will work” because no single method is enough by itself. The idea expressed in the post would not “solve” AI alignment, but I think it’s a thought-provoking angle on part of the problem.
Thank you for this post, Max.
My background here:
I’ve watched the Ukraine war very closely since it started.
I’m not at all familiar with nuclear risk estimations.
Summary: I wouldn’t give 70% to WW3/KABOOM following conventional NATO retaliation. I would give it 2-5% at this moment (I’ve spent little time thinking about the precise number).
Motivation: I think conventional responses from NATO will cause Russia to generally back down. I think Putin wants to use the threat of nukes, not actually use them.
Even if he is cornered yet further, I expect Putin to assess that firing off nukes would make his situation even worse. Nuclear conflict would be an immense direct threat to himself and Russia, and the threat of nuclear conflict also increases the risk of people on the inside targeting him (because they don’t want to die). Authoritarians respect force. A NATO response would be a show of force.
Putin has told the Russian public in the past that Russia couldn’t win against NATO directly. Losing against NATO actually gives him a more palatable excuse: NATO is too powerful. But losing against Ukraine, their little sibling, would be very humiliating. Losing in a contest of strength against someone supposedly weaker is almost unacceptable to authoritarians.
I think the most likely outcome is that Putin is deterred from firing a tactical nuke. And if he does fire one, NATO will respond conventionally (such as by taking out the Black Sea Fleet), and this will cause Russia to back down in some manner.