Co-founder and CEO of quiver.trade. Interested in mechanism design and neuroscience. Hopes to contribute to AI alignment.
Twitter: https://twitter.com/azsantosk
From Metaculus’ resolution criteria:
“This question resolves on the date an AI system competes well enough on an IMO test to earn the equivalent of a gold medal. The IMO test must be the most current IMO test at the time the feat is completed (previous years do not qualify).”
I think this was defined on purpose to avoid such contamination. It also seems common sense to me that, when training a system to perform well on IMO 2026, you cannot include any data point from after the questions were made public.
At the same time, training on previous IMO/math contest questions should be fair game. All human contestants practice quite a lot on questions from previous contests, and the IMO is still very challenging for them.
Also relevant is Steven Byrnes’ excellent Against evolution as an analogy for how humans will create AGI.
It has been over two years since that post was published, and criticism of this analogy has continued to intensify. The OP and other MIRI members have certainly been exposed to this criticism by now, and as far as I am aware, no principled defense has been made of the continued use of this example.
I encourage @So8res and others to either stop using this analogy, or to argue explicitly for its continued usage, engaging with the arguments presented by Byrnes, Pope, and others.
In early 2023 I bet $500 on AI winning the IMO gold medal by 2026. This was a 1:1 bet against Michael Vassar, meaning I assigned >50% probability to this. It now seems very likely that I’m going to win.
To me, this was to be expected as a straightforward application of AlphaZero-like self-play amplification and distillation. The missing piece was the analogue of the policy network, which for AlphaZero’s board games was a convolutional neural network. Once it became quite clear that existing LLMs were smart enough to generate good heuristics for this (with enough data), it seemed quite obvious to me that self-play guided by an LLM policy heuristic would work.
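To make that claim concrete, here is a toy, runnable sketch of the loop I have in mind: a policy heuristic guides a best-first search (“amplification”), and solved trajectories become training data for the next policy (“distillation”). The toy problem (reaching a target number via +1/×2 steps) and the greedy heuristic are illustrative stand-ins, not any real system’s API.

```python
import heapq

def amplify(start, target, heuristic, budget=10_000):
    """Best-first search guided by a policy heuristic ("amplification")."""
    frontier = [(heuristic(start, target), start, [start])]
    seen = {start}
    while frontier and budget:
        _, state, path = heapq.heappop(frontier)
        if state == target:
            return path                        # solved: a training trajectory
        budget -= 1
        for nxt in (state + 1, state * 2):     # the "proposed reasoning steps"
            if nxt <= target and nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt, target), nxt, path + [nxt]))
    return None

# "Distillation" would finetune the policy on solved trajectories; here we
# just collect them as a stand-in for the training set. The greedy
# distance-to-target heuristic finds valid (not necessarily shortest) paths.
heuristic = lambda s, t: t - s
training_set = [amplify(1, t, heuristic) for t in (10, 37, 100)]
print(training_set[1])  # -> [1, 2, 4, 8, 16, 32, 33, 34, 35, 36, 37]
```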
Hi! I’m Kelvin, 26, and I’ve been following LessWrong since 2018. Came here after reading references to Eliezer’s AI-Box experiments from Nick Bostrom’s book.
During high school I participated in a few science olympiads, including Chemistry, Math, Biology and Informatics. Was the reserve member of the Brazilian team for the 2012 International Chemistry Olympiad.
I studied Medicine and later Molecular Science at the University of São Paulo, and dropped out in 2015 to join a high-frequency trading fund based in Brazil. I had a successful career there and rose to become one of the senior partners.
Since 2020 I’ve been co-founder and CEO of TickSpread, a crypto futures exchange based on batch auctions. We are interested in mechanism design, conditional and combinatorial markets, and futarchy.
I’m also personally very interested in machine learning, neuroscience, and AI safety discussions, and I’ve spent quite some time studying these topics on my own, despite having no professional experience with them.
I very much want to be more active in this community, participating in discussions and meeting other people who are also interested in these topics, but I’m not totally sure where to start. I would love for someone to help me get integrated here, so if you think you can do that, please let me know :)
One thing that appears to be missing from the filial imprinting story is a mechanism allowing the “mommy” thought assessor to improve, or at least not degrade, over time.
The critical window is quite short, so many characteristics of mommy that may be very useful will not be perceived by the thought assessor in time. I would expect that, after it recognizes something as mommy, it remains malleable and keeps learning which properties mommy has.
For example, after it recognizes mommy based on vision, it may learn more about what sounds mommy makes and what smell mommy has. Because these sounds/smells are present when the vision-based mommy signal is present, the thought assessor should update to recognize sound/smell as indicative of mommy as well. This will help the duckling avoid mistaking other ducks for mommy, and also help it find mommy through non-visual cues (even if the visual cues are what triggers the imprinting to begin with).
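As a toy illustration of the update rule I have in mind (my own made-up sketch, not a claim about the actual circuitry): once the vision-grounded signal fires confidently, whatever cues co-occur with it get credit, so sound and smell become predictive of mommy too.

```python
import numpy as np

CUES = ["vision", "sound", "smell"]
w = np.array([1.0, 0.0, 0.0])   # imprinting grounded only the visual cue

def assessor_update(cues, w, lr=0.1):
    """If the assessor already fires (here: via vision), co-present cues
    get credit, so they become predictive of "mommy" as well."""
    signal = w @ cues                        # current "that's mommy!" strength
    if signal > 0.5:                         # confident detection
        w = w + lr * signal * (cues - w)     # move weights toward co-present cues
    return w

for _ in range(50):                          # encounters where all cues co-occur
    w = assessor_update(np.array([1.0, 1.0, 1.0]), w)

print(dict(zip(CUES, w.round(2).tolist())))  # sound/smell weights grow toward 1
```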
I suspect such a mechanism will still be present even after the critical period is over. For example, humans sometimes feel emotionally attached to objects that remind them of loved ones or have become associated with them. The attachment may be really strong (e.g. when the loved one has died and only the object remains).
Also, your loved ones change over time, but you keep loving them! In “parental” imprinting for example, the initial imprinting is on the baby-like figure, generating a “my kid” thought assessor associated with the baby-like cues, but these need to change over time as the baby grows. So the “my kid” thought assessor has to continuously learn new properties.
Even more importantly, the learning subsystem is constantly changing, maybe even more than the external cues. If the learned representations change over time as the agent learns, the thought assessors have to keep up and do the same, otherwise their accuracy will slowly degrade over time.
This last part seems quite important for a rapidly learning/improving AGI, as we want the prosocial assessors to be robust to ontological drift. So we both want the AGI to do the initial “symbol-grounding” of desirable proto-traits close to kindness/submissiveness, and also for its steering subsystem to learn more about these concepts over time, so that they “converge” to favoring sensible concepts in an ontologically advanced world-model.
I agree that current “language agents” have some interesting safety properties. However, for them to become powerful, one of two things is likely to happen:
A. The language model underlying the agent will be trained/finetuned on reinforcement learning tasks to improve performance. This will make the system much more like AlphaGo, capable of generating “dangerous” and unexpected “Move 37”-like actions. Further, this pressures the system toward non-interpretability (either by steering it outside “inefficient” human language, or by encoding information steganographically).
B. The base models, being larger/more powerful than the ones used today, and more self-aware, will be doing most of the “dangerous” optimization inside the black box. Such a model will derive from its prompts, and from its long-term memory (which it will likely be given), what kind of dumb outer loop is running on the outside. If it has internal misaligned desires, it will manipulate the outer loop according to them, potentially generating the expected visible outputs as deception.
I will not deny the possibility of further alignment progress on language agents yielding safe agents, nor of “weak AGIs” being possible and safe with the current paradigm, and replacing humans at many “repetitive” occupations. But I expect agents derived from the “language agent” paradigm to be misaligned by default if they are strong enough optimizers to contribute meaningfully to scientific research, and other similar endeavors.
Update: “AI achieves silver-medal standard solving International Mathematical Olympiad problems”.
It now seems very likely I’m going to win this bet.
I see about 100 books in there. I have met several IMO gold medalists, and I expect most of them to have read dozens of these books, or the equivalent in other forms. I know one who has read tens of olympiad-level books on geometry alone!
And yes, you’re right that they would often pick one or two problems as similar to what they had seen in the past, but I suspect these problems still require a lot of reasoning even after the analogy has been established. I may be wrong, though.
We can probably inform this debate by getting the latest IMO and creating a contest for people to find which existing problems are the most similar to those in the exam. :)
I think it is an interesting idea, and it may be worthwhile even if Dagon is right and it results in regulatory capture.
The reason is that regulatory capture is likely to benefit a few select companies, promoting an oligopoly. That sounds bad, and it usually is, but in this case it also reduces the AI race dynamic. If there are only a few serious competitors for AGI, it is easier for them to coordinate. It is also easier for us to influence them towards best safety practices.
I agree my conception is unusual, and I am ready to abandon it in favor of some better definition. At the same time, I feel that a utility function with way too many components becomes useless as a concept.
Because here I’m trying to derive the utility function from the actions, I feel we understand a being better the less information is required to encode its utility function, in a Kolmogorov-complexity sense; if it is too complex, then there is no good explanation for the actions, and we conclude the agent is acting somewhat randomly.
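A crude sketch of what I mean, with made-up numbers: score each candidate utility function by its description length (a rough stand-in for Kolmogorov complexity) plus a penalty for every observed action it fails to explain, and prefer the lowest total.

```python
import zlib

actions = ["save", "save", "spend", "save", "save", "save"]

# name: (encoded description, action the utility function predicts)
candidates = {
    "always_save":  (b"argmax savings", "save"),
    "always_spend": (b"argmax consumption", "spend"),
}

def mdl_score(description, predicted, observed):
    complexity = len(zlib.compress(description))     # crude complexity proxy
    misfits = sum(a != predicted for a in observed)  # unexplained actions
    return complexity + 8 * misfits                  # say, 8 bits per exception

scores = {n: mdl_score(d, p, actions) for n, (d, p) in candidates.items()}
print(min(scores, key=scores.get))  # -> always_save: best compression of behavior
```

If even the best candidate needs many exception bits, the “utility function” explains little, and we’d call the behavior somewhat random.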
Maybe trying to derive the utility as a ‘compression’ of the actions is where the problem is, and I should distinguish more what the agent does from what the agent wants. An agent is then going to be irrational only if the wants are inconsistent with each other; if the actions are inconsistent with what it wants then it is merely incompetent, which is something else.
What we think is that we might someday build an AI advanced enough that it can, by itself, predict plans for given goal x, and execute them. Is this that otherworldly? Given current progress, I don’t think so.
I don’t think so either. AGIs will likely be capable of understanding what we mean by X and making plans for exactly that, if they want to help. The problem is that the AGIs may have other goals in mind by then.
As for reinforcement learning, even if it now seems impossible to build AGIs with utility functions in that paradigm, nothing assures us that it will be the paradigm used to build the first AGI.
Sure, it may be possible that some other paradigm allows us more control over the utility functions. User tailcalled mentioned John Wentworth’s research (which I will proceed to study, as I haven’t yet done so in depth).
(Unless the first AGI can’t be told to do anything at all, but then we would already have lost the control problem.)
I’m afraid that this may be quite a likely outcome if we don’t make much progress in alignment research.
Regarding what the AGI will want then, I expect it to depend a lot on the training regime and on its internal motivation modules (somewhat analogous to the subcortical areas of the brain). My threat model is quite similar to the one defended by Steven Byrnes in articles such as this one.
In particular, I think AI developers will likely give the AGI “creativity modules” responsible for generating intrinsic reward whenever it finds interesting patterns or abilities. This will help the AGI stay motivated and keep learning to solve harder and harder problems when external reward is sparse, which I predict will be extremely useful for making the AGI more capable. But I expect the internalization of such intrinsic rewards to end up generating utility functions that are nearly unbounded in the value assigned to knowledge and computational power, and quite possibly hostile to us.
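A toy version of the kind of module I mean (my own sketch; the linear “world model”, beta, and learning rate are all made up): the agent gets an intrinsic bonus equal to its world model’s prediction error, and the bonus fades as the pattern is learned, pushing it toward ever-new patterns even with zero external reward.

```python
import numpy as np

# Toy "creativity module": intrinsic reward = world-model prediction error,
# added to (sparse) external reward. All numbers are illustrative.
weights = np.zeros(4)                    # linear stand-in for a world model

def step(state, target, external_reward, beta=0.5, lr=0.5):
    global weights
    predicted = weights @ state
    surprise = (predicted - target) ** 2                   # how "interesting"
    weights = weights + lr * (target - predicted) * state  # the model improves
    return external_reward + beta * surprise               # total reward signal

state = np.array([0.5, -0.2, 0.8, 0.1])  # some repeatedly observed pattern
for _ in range(5):
    print(round(step(state, target=1.0, external_reward=0.0), 3))
# Bonus shrinks as the same pattern is re-seen: 0.5, 0.14, 0.039, ...
```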
I don’t think all is lost, though. Our brains provide an example of a relatively well-aligned intelligence: our own higher reasoning in the telencephalon seems fairly well aligned with the evolutionarily ancient, primitive subcortical modules (not so much with evolution’s base objective of reproduction, though). I’m not sure how much work it took evolution to align these two modules. I’ve heard at least one person argue that maybe higher intelligence didn’t evolve earlier because of the difficulty of aligning it. If so, that would be pretty bad.
Also, I’m somewhat more optimistic than others about the prospect of creating myopic AGIs that strongly crave short-term rewards that we control. I think it might be possible (with a lot of effort) to keep such an AGI controlled in a box even if it is more intelligent than humans in general, and that such an AGI may help us with the overall control problem.
I am aware of Reinforcement Learning (I am actually sitting right next to Sutton’s book on the field, which I have fully read), but I think you are right that my point is not very clear.
The way I see it, RL goals are really only the goals of the base optimizer. The agents themselves either are not intelligent (they follow simple procedural ‘policies’) or are mesa-optimizers that may learn to pursue something else entirely (proxies, etc.). I updated the text; let me know if it makes more sense now.
My model is that the quality of the reasoning can actually be divided into two dimensions, the quality of intuition (what the “first guess” is), and the quality of search (how much better you can make it by thinking more).
Another way of thinking about this distinction is as the difference between how good each reasoning step is (intuition), compared to how good the process is for aggregating steps into a whole that solves a certain task (search).
It seems to me that current models are strong enough to learn good intuition about all kinds of things with enough high-quality training data, and that if you have good enough search you can use that as an amplification mechanism (on tasks where verification is available) to improve through self-play.
If this is right, then failure to solve the IMO probably means that no good search algorithm has been found (analogous to AlphaZero’s MCTS-UCT, maybe including its own intuition model) that is capable of amplifying the intuitions useful for reasoning.
So far, all problem-solving AIs seem to use linear, depth-first search: you sample one token at a time (one reasoning step), chain the steps depth-first into a full text or proof sketch, check whether it solves the full problem, and if it doesn’t, try again from scratch, throwing all the partial work away. No search heuristic is used, no attempt is made to solve smaller problems first, and so on. So it can certainly get a lot better than that (which is why I’m making the bet).
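To make the contrast concrete, here is a toy comparison (entirely illustrative): a hidden bit string stands in for a proof, a “reasoning step” guesses one bit, and a cheating prefix-match scorer stands in for a good learned heuristic. Resampling full attempts from scratch needs ~2^n tries on average, while a best-first search that keeps partial work needs about n expansions.

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1]   # the hidden 6-bit "proof"

def sample_from_scratch(tries=100_000):
    """Current style: generate a full chain, check it, discard, retry."""
    for t in range(1, tries + 1):
        if [random.randint(0, 1) for _ in TARGET] == TARGET:
            return t                          # attempts used (~64 on average)
    return None

def best_first(scorer):
    """Keep partial work: always extend the most promising prefix."""
    frontier, expansions = [[]], 0
    while frontier:
        prefix = max(frontier, key=scorer)
        frontier.remove(prefix)
        expansions += 1
        for bit in (0, 1):
            child = prefix + [bit]
            if child == TARGET:
                return expansions             # 6 expansions: ~linear in length
            if len(child) < len(TARGET):
                frontier.append(child)
    return None

# Stand-in for a learned heuristic: how many prefix bits match the target.
scorer = lambda p: sum(a == b for a, b in zip(p, TARGET))
print(sample_from_scratch(), best_first(scorer))
```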
Another strong upvote for a great sequence. Social-instinct AGIs seem to me a very promising and much overlooked approach to AGI safety. There seem to be many “tricks” that are “used by the genome” to build social instincts from ground values, and reverse-engineering these tricks seems particularly valuable for us. I am eagerly waiting to read the next posts.
In a previous post I shared a success model that relies on your idea of reverse engineering the steering subsystem to build agents with motivations compatible with a safe Oracle design, including the class of reversely aligned motivations. What is your opinion on them? Do you think the set of “social instincts” we would want to incorporate into an AGI changes much if we are optimizing for reverse vs direct intent alignment?
I think you are right! Maybe I should have actually written different posts about each of these two plans.
And yes, I agree with you that maybe the most likely way of doing what I propose is getting someone ultra rich to back it. That idea has the advantage that it can be done immediately, without waiting for a Math AI to be available.
To me it still seems important to think about what kind of strategic advantages we can obtain with a Math AI. Maybe it is possible to gain a lot more than money (I gave the example of zero-day exploits, but we can most likely obtain a lot of other valuable technology as well).
Curious to hear your thoughts @paulfchristiano, and whether you have updated based on the latest IMO progress.
My three fundamental disagreements with MIRI, from my recollection of a ~1h conversation with Nate Soares in 2023. Please let me know if you think any positions have been misrepresented.
MIRI thinks (A) evolution is a good analogy for how alignment will fail-by-default in strong AIs, that (B) studying weak AGIs will not shine much light on how to align strong AIs, and that (C) strong narrow myopic optimizers will not be very useful for anything like alignment research.
Now my own positions:
(A) Evolution is not a good analogy for AGI.
See Steven Byrnes’ Against evolution as an analogy for how humans will create AGI.
(B) Alignment techniques for weak-but-agentic AGI are important.
Why:
In multipolar competitive scenarios, self-improvement may happen first for entire civilizations or economies, rather than for individual minds or small clusters of minds.
Techniques that work for weak agentic AGIs may help in aligning stronger minds. Reflection, ontological crises, and self-modification make alignment more difficult, but without strong local recursive self-improvement, it may be possible to develop techniques for better preserving alignment during these episodes, if such systems can be studied while still under control.
(C) Strong narrow myopic optimizers can be incredibly useful.
A hypothetical system capable of generating fixed-length text that strongly maximizes a simple reward (e.g. the expected value of the next upvote) can be extremely helpful if the reward is based on very careful objective evaluation. Careful judgement of adversarial “debate” setups between such systems may also generate great breakthroughs, including for alignment research.
Does AI governance need a “Federalist Papers” debate?
During the American Revolution, a federal army and government were needed to fight against the British. Many people were afraid that the powers granted to the government for that purpose would allow it to become tyrannical in the future.
If the founding fathers had decided to ignore these fears, the United States would not exist as it does today. Instead, they worked alongside the best and smartest anti-federalists to build a better institution with better mechanisms and limited powers, which allowed them to obtain the support they needed for the Constitution.
Where are the federalist vs. anti-federalist debates of today regarding AI regulation? Is anyone working on creating a new institution with better mechanisms to limit its power, thereby assuring those on the other side that it won’t be used as a path to totalitarianism?
I think your argument is quite effective.
He may claim he is not willing to sell you this futures contract for $0.48 now. He expects to be willing to sell for that price in the future on average, but might refuse to do so now.
But then, why? Why would you not sell something for $0.49 now if you think, on average, it’ll be worth less than that (to you) right after?
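A minimal numeric version of the argument, with a made-up distribution averaging to the thread’s numbers:

```python
# Suppose he expects to sell later at $0.40 or $0.56 with equal probability,
# i.e. $0.48 on average (made-up distribution matching the stated mean).
later_prices = [(0.40, 0.5), (0.56, 0.5)]
expected_later = sum(price * prob for price, prob in later_prices)  # 0.48

# Refusing $0.49 now in favor of selling later costs a cent in expectation,
# so the stated beliefs and the refusal can't both be rational.
print(round(0.49 - expected_later, 2))  # -> 0.01
```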
While I am sure that you have the best intentions, I believe the framing of the conversation was very ill-conceived, in a way that makes it harmful, even if one agrees with the arguments contained in the post.
For example, here is the very first negative consequence you mentioned:
I think one can argue that, if this argument is correct, the post itself will exacerbate the problem by bringing greater awareness to these “intentions” in a very negative light.
The intention keyword pattern-matches with “bad/evil intentions”. Those worried about existential risk are good people, and their intentions (preventing x-risk) are good. So we should refer to ourselves accordingly and talk about misguided plans instead of anything resembling bad intentions.
People discussing pivotal acts, including those arguing that it should not be pursued, are using this expression sparingly. Moreover, they seem to be using this expression on purpose to avoid more forceful terms. Your use of scare quotes and your direct association of this expression with bad/evil actions casts a significant part of the community in a bad light.
It is important for this community to be able to have some difficult discussions without attracting backlash from outsiders, and having specific neutral/untainted terminology serves precisely for that purpose.
As others have mentioned, your preferred ‘Idea A’ has many complications, and you have not convincingly addressed them. As a result, good members of our community may well find ‘Idea B’ worth exploring despite the problems you mention. Even if you don’t think their efforts are helpful, you should be careful to portray them in a good light.