Computing scientist and Systems architect. Currently doing self-funded AI/AGI safety research. I participate in AI standardization under the company name Holtman Systems Research: https://holtmansystemsresearch.nl/
Koen.Holtman (Koen Holtman)
Fun to see this is now being called ‘Holtman’s neglected result’. I am currently knee-deep in a project to support EU AI policy making, so I have no time to follow the latest agent foundations discussions on this forum any more, and I never follow twitter, but briefly:
I can’t fully fault the world for neglecting ‘Corrigibility with Utility Preservation’ because it is full of a lot of dense math.
I wrote two followup papers to ‘Corrigibility with Utility Preservation’ which present the same results with more accessible math. For these I am a bit more upset that they have been somewhat neglected in the past, but if people are now no longer neglecting them, great!
Does anyone have a technical summary?
The best technical summary of ‘Corrigibility with Utility Preservation’ may be my sequence on counterfactual planning which shows that the corrigible agents from ‘Corrigibility with Utility Preservation’ can also be understood as agents that do utility maximisation in a pretend/counterfactual world model.
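To give a one-formula impression of that interpretation (my own paraphrase here; the notation below is not taken from the papers themselves): both a plain consequentialist planner and a counterfactual planner pick their policy by maximising expected utility, but over different world models,

$$\pi^*_{\text{consequentialist}} = \arg\max_\pi \, \mathbb{E}_{M}[\,U \mid \pi\,], \qquad \pi^*_{\text{counterfactual}} = \arg\max_\pi \, \mathbb{E}_{M'}[\,U \mid \pi\,],$$

where $M$ is the agent’s best projection of the real world and $M'$ is a pretend planning world that differs from $M$ in a few deliberately chosen places, for example in whether the agent’s reward function ever gets updated.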
For more references to the body of mathematical work on corrigibility, as written by me and others, see this comment.
In the end, the question of whether corrigibility is solved also depends on two counter-questions: what kind of corrigibility are you talking about, and what kind of ‘solved’ are you talking about? If you feel that certain kinds of corrigibility remain unsolved for certain values of unsolved, I might actually agree with you. See the discussion about universes containing an ‘Unstoppable Weasel’ in the Corrigibility with Utility Preservation paper.
Ultimately, all statistical correlations are due to causal influences.
As a regular LW reader who has never been that into causality, this reads as a blisteringly hot take to me.
You are right this is somewhat blistering, especially for this LW forum.
It would have been less controversial for the authors to say that ‘all statistical correlations can be modelled as causal influences’. Correlations between two observables can always be modelled as being caused by the causal dependence of both on the value of a certain third variable, which may (if the person making the model wants to) be defined as a hidden variable that by definition cannot be observed.
After it has been drawn up, such a causal model, one claiming that an observed statistical correlation is caused by a causal dependency on a hidden variable, might then be either confirmed or falsified (for certain values of confirmed or falsified that philosophers love to endlessly argue about) by 1) further observations or by 2) active experiment, an experiment in which one performs a causal intervention.
Pearl kind of leans towards 2), the active-experiment route towards confirming or falsifying the model. Deep down, one of the points Pearl makes is that experiments can be used to distinguish between correlation and causation, that this experimentalist route has been ignored too much by statisticians and Bayesian philosophers alike, and that this route has also been improperly maligned by the cigarette industry and other merchants of doubt.
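To make the hidden-variable story above concrete, here is a minimal simulation sketch (my own illustration, not taken from Pearl or from the post under discussion; all numbers are arbitrary):

```python
# Two observables X and Y are correlated only because both depend on a hidden
# variable Z; an intervention do(X) then makes the correlation disappear.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational world: hidden common cause Z drives both X and Y.
z = rng.normal(size=n)
x = z + 0.3 * rng.normal(size=n)
y = z + 0.3 * rng.normal(size=n)
print("observational corr(X, Y):", np.corrcoef(x, y)[0, 1])         # strongly positive

# Interventional world: we set X by experiment, cutting the Z -> X arrow,
# while Y is still generated from Z as before.
x_do = rng.normal(size=n)                                            # do(X)
y_do = z + 0.3 * rng.normal(size=n)
print("interventional corr(X, Y):", np.corrcoef(x_do, y_do)[0, 1])   # close to zero
```

The observational run shows the correlation that the hidden-variable model predicts; the interventional run is the kind of active experiment that can tell this model apart from a model in which X directly causes Y.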
Another point Pearl makes is that Pearl causal models and Pearl counterfactuals are very useful mathematical tools that could be used by ex-statisticians turned experimentalists when they try to understand, and/or make predictions about, nondeterministic systems with potentially hidden variables.
This latter point is mostly made by Pearl towards the medical community. But this point also applies to doing AI interpretability research.
When it comes to the more traditional software engineering and physical systems engineering communities, or the experimental physics community for that matter, most people in these communities intuitively understand Pearl’s point about the importance of doing causal intervention based experiments as being plain common sense. They understand this without ever having read the work or the arguments of Pearl first. These communities also use mathematical tools which are equivalent to using Pearl’s do() notation, usually without even knowing about this equivalence.
One of the biggest challenges with AI safety standards will be the fact that no one really knows how to verify that a (sufficiently-powerful) system is safe. And a lot of experts disagree on the type of evidence that would be sufficient.
While overcoming expert disagreement is a challenge, it is not one that is as big as you think. TL;DR: Deciding not to agree is always an option.
To expand on this: the fallback option in a safety standards creation process, for standards that aim to define a certain level of safe-enough, is as follows. If the experts involved cannot agree on any evidence based method for verifying that a system X is safe enough according to the level of safety required by the standard, then the standard being created will simply, and usually implicitly, declare that there is no route by which system X can comply with the safety standard. If you are required by law, say by EU law, to comply with the safety standard before shipping a system into the EU market, then your only legal option will be to never ship that system X into the EU market.
For AI systems you interact with over the Internet, this ‘never ship’ translates to ‘never allow it to interact over the Internet with EU residents’.
I am currently in the JTC21 committee which is running the above standards creation process to write the AI safety standards in support of the EU AI Act, the Act that will regulate certain parts of the AI industry, in case they want to ship legally into the EU market. ((Legal detail: if you cannot comply with the standards, the Act will give you several other options that may still allow you to ship legally, but I won’t get into explaining all those here. These other options will not give you a loophole to evade all expert scrutiny.))
Back to the mechanics of a standards committee: if a certain AI technology, when applied in a system X, is well known to make that system radioactively unpredictable, it will not usually take long for the technical experts in a standards committee to come to an agreement that there is no way that they can define any method in the standard for verifying that X will be safe according to the standard. The radioactively unsafe cases are the easiest cases to handle.
That being said, in all but the most trivial of safety engineering fields, there are complicated epistemics involved in deciding when something is safe enough to ship; this is complicated whether you use standards or not. I have written about this topic, in the context of AGI, in section 14 of this paper.
I am currently almost fulltime doing AI policy, but I ran across this invite to comment on the draft, so here goes.
On references:
Please add Armstrong to the author list in the reference to Soares 2015; this paper had 4 authors, and it was actually Armstrong who came up with indifference methods.
I see both ‘Pettigrew 2019’ and ‘Pettigrew 2020’ in the text? Is this the same reference?
More generally:
Great that you compare the aggregating approach to two other approaches, but I feel your description of these approaches needs to be improved.
Soares et al 2015 defines corrigibility criteria (which historically is its main contribution), but the paper then describes a failed attempt to design an agent that meets them. The authors do not ‘worry that utility indifference creates incentives to manage the news’ as in your footnote, they positively show that their failed attempt has this problem. Armstrong et al 2017 has a correct design, I recall, that meets the criteria from Soares 2015, but only for a particular case. ‘Safely interruptible agents’ by Orseau and Armstrong 2016 also has a correct and more general design, but does not explicitly relate it back to the original criteria from Soares et al, and the math is somewhat inaccessible. Holtman 2020, ‘AGI Agent Safety by Iteratively Improving the Utility Function’, has a correct design and does relate it back to the Soares et al criteria. It also shows that indifference methods can be used for repeatedly changing the reward function, which addresses one of your criticisms that indifference methods are somewhat limited in this respect: this limitation is there in the math of Soares et al, but not for indifference methods more generally. Further exploration of indifference as a design method is in some work by Everitt and others (work related to causal influence diagrams), and also in my own work (Counterfactual Planning in AGI Systems).
What you call the ‘human compatible AI’ method is commonly referred to as CIRL; ‘human compatible AI’ is a phrase best read as a moral goal, design goal, or call to action, not a particular agent design. The key defining paper you want to cite, alongside ‘the off switch game’, is Hadfield-Menell, Dylan; Russell, Stuart J; Abbeel, Pieter; and Dragan, Anca: Cooperative Inverse Reinforcement Learning. In that paper (I recall from memory, it may have already been in the off-switch paper too), the authors offer some of the same criticisms of their method that you describe as being offered by MIRI, e.g. in the ASX writeup you cite.
Other remarks:
For the penalize effort action, can you say more about how E(A), the effort metric, can be implemented?
I think that Pettigrew’s considerations, as you describe them, are somewhat similar to those in ‘Self-modification of policy and utility function in rational agents’ by Everitt et al. That paper is somewhat mathematical, but it might be an interesting comparative read for you; I feel it usefully charts the design space.
You may also find this overview to be an interesting read, if you want to clarify or reference definitions of corrigibility.
As requested by Remmelt I’ll make some comments on the track record of privacy advocates, and their relevance to alignment.
I did some active privacy advocacy in the context of the early Internet in the 1990s, and have been following the field ever since. Overall, my assessment is that the privacy advocacy/digital civil rights community has had both failures and successes. It has not succeeded (yet) in its aim to stop large companies and governments from having all your data. On the other hand, it has been more successful in its policy advocacy towards limiting what large companies and governments are actually allowed to do with all that data.
The digital civil rights community has long promoted the idea that Internet based platforms and other computer systems must be designed and run in a way that is aligned with human values. In the context of AI and ML based computer systems, this has led to demands for AI fairness and transparency/explainability that have also found their way into policy like the GDPR, legislation in California, and the upcoming EU AI Act. AI fairness demands have influenced the course of AI research being done, e.g. there has been research on defining what it even means for an AI model to be fair, and on making models that actually implement this meaning.
To a first approximation, privacy and digital rights advocates will care much more about what an ML model does, what effect its use has on society, than about the actual size of the ML model. So they are not natural allies for x-risk community initiatives that would seek a simple ban on models beyond a certain size. However, they would be natural allies for any initiative that seeks to design more aligned models, or to promote a growth of research funding in that direction.
To make a comment on the premise of the original post above: digital rights activists will likely tell you that, when it comes to interventions on AI research, speculating about the tractability of ‘slowing down AI research’ is misguided. What you really should be thinking about is changing the direction of AI research.
Thanks!
I am not aware of any good map of the governance field.
What I notice is that EA, at least the blogging part of EA, tends to have a preference for talking directly to (people in) corporations when it comes to the topic of corporate governance. As far as I can see, FLI is the AI x-risk organisation most actively involved in talking to governments. But there are also a bunch of non-EA related governance orgs and think tanks talking about AI x-risk to governments. When it comes to a broader spectrum of AI risks, not just x-risk, there are a whole bunch of civil society organisations talking to governments about it, many of them with ties to, or an intellectual outlook based on, Internet and Digital civil rights activism.
I think you are ignoring the connection between corporate governance and national/supra-national government policies. Typically, corporations do not implement costly self-governance and risk management mechanisms just because some risk management activists have asked them nicely. They implement them if and when some powerful state requires them to implement them, requires this as a condition for market access or for avoiding fines and jail-time.
Asking nicely may work for well-funded research labs who do not need to show any profitability, and even in that special case one can have doubts about how long their do-not-need-to-be-profitable status will last. But definitely, asking nicely will not work for your average early-stage AI startup. The current startup ecosystem encourages the creation of companies that behave irresponsibly by cutting corners. I am less confident than you are that Deepmind and OpenAI have a major lead over these and future startups, to the point where we don’t even need to worry about them.
It is my assessment that, definitely in EA and x-risk circles, too few people are focussed on national government policy as a means to improve corporate governance among the less responsible corporations. In the case of EA, one might hope that recent events will trigger some kind of update.
Note: This is presumably not novel, but I think it ought to be better-known.
This indeed ought to be better-known. The real question is: why is it not better-known?
What I notice in the EA/Rationalist based alignment world is that a lot of people seem to believe in the conventional wisdom that nobody knows how to build myopic agents, nobody knows how to build corrigible agents, etc.
When you then ask people why they believe that, you usually get some answer like ‘because MIRI’, and when you ask further it turns out these people did not actually read MIRI’s more technical papers; they just heard about them.
The conventional wisdom ‘nobody knows how to build myopic agents’ is not true for the class of all agents, as your post illustrates. In the real world, applied AI practitioners use actually existing AI technology to build myopic agents, and corrigible agents, all the time. There are plenty of alignment papers showing how to do these things for certain models of AGI too: in the comment thread here I recently posted a list.
I speculate that the conventional rationalist/EA wisdom of ‘nobody knows how to do this’ persists because of several factors. Some of these are just how social media works, Eternal September, and People Do Not Read Math, but two more interesting and technical ones are the following:
-
It is popular to build analytical models of AGI where your AGI will have an infinite time horizon by definition. Inside those models, making the AGI myopic without turning it into a non-AGI is then of course logically impossible. Analytical models built out of hard math can suffer from this built-in problem, and so can analytical models built out of common-sense verbal reasoning. In the hard math model case, people often discover an easy fix (see the formula sketch after this list). In verbal models, this usually does not happen.
-
You can always break an agent alignment scheme by inventing an environment for the agent that breaks the agent or the scheme. See johnswentworth’s comment elsewhere in the comment section for an example of this. So it is always possible to walk away from a discussion believing that the ‘real’ alignment problem has not been solved.
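To illustrate the easy fix in the hard math case (a generic sketch in my own notation, not tied to any particular paper): if the analytical model defines the AGI as an agent maximising an infinite-horizon discounted return

$$V^\pi = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t r_t\Big],$$

then myopia is excluded by definition. The easy fix is to swap in a truncated objective

$$V^\pi_H = \mathbb{E}\Big[\sum_{t=0}^{H} \gamma^t r_t\Big] \quad \text{for a small } H,$$

while keeping the rest of the agent design unchanged, which gives a myopic agent that is still recognisably the same type of planner.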
-
I think I agree with most of it: I agree that some form of optimization or policy search is needed to get many of the things you want to use AI for. But I guess you have to read the paper to find out the exact subtle way in which the AGIs inside can be called non-consequentialist. To quote Wikipedia:
In ethical philosophy, consequentialism is a class of normative, teleological ethical theories that holds that the consequences of one’s conduct are the ultimate basis for judgment about the rightness or wrongness of that conduct.
I do not talk about this in the paper, but in terms of ethical philosophy, the key bit about counterfactual planning is that it asks: judge one’s conduct by its consequences in what world exactly? Mind you, the problem considered is that we have to define the most appropriate ethical value system for a robot butler, not what is most appropriate for a human.
Hi Simon! You are welcome! By the way, I very much want to encourage you to be skeptical and make up your own mind.
I am guessing that by mentioning consequentialist, you are referring to this part of Yudkowsky’s list of doom:
Corrigibility is anti-natural to consequentialist reasoning
I am not sure how exactly Yudkowsky is defining the terms corrigibility or consequentialist here, but I might actually be agreeing with him on the above statement, depending on definitions.
I suggest you read my paper Counterfactual Planning in AGI Systems, because it is the most accessible and general one, and because it presents AGI designs which can be interpreted as non-consequentialist.
I could see consequentialist AGI being stably corrigible if it is placed in a stable game-theoretical environment where deference to humans literally always pays as a strategy. However, many application areas for AI or potential future AGI do not offer such a stable game-theoretical environment, so I feel that this technique has very limited applicability.
If we use the 2015 MIRI paper definition of corrigibility, the alignment tax (the extra engineering and validation effort needed) for implementing corrigibility in current-generation AI systems is low to non-existent. The TL;DR here is: avoid using a bunch of RL methods that you do not want to use anyway when you want any robustness or verifiability. As for future AGI, the size of the engineering tax is open to speculation. My best guess is that future AGI will be built, if ever, by leveraging ML methods that still resemble world model creation by function approximation, as opposed to say brain uploading. Because of this, and some other reasons, I estimate a low safety engineering tax to achieve basic corrigibility.
Other parts of AGI alignment may be very expensive, e.g. the part of actually monitoring an AGI to make sure its creativity is benefiting humanity, instead of merely finding and exploiting loopholes in its reward function that will hurt somebody somewhere. To the extent that alignment cannot be cheap, more regulation will be needed to make sure that operating a massively unaligned AI will always be more expensive for a company to do than operating a mostly aligned AI. So we are looking at regulatory instruments like taxation, fines, laws that threaten jail time, and potentially measures inside the semiconductor supply chain, all depending on what type of AGI will become technically feasible, if ever.
Corrigibility with Utility Preservation is not the paper I would recommend you read first, see my comments included in the list I just posted.
To comment on your quick thoughts:
-
My later papers spell out the ML analog of the solution in ‘Corrigibility with Utility Preservation’ more clearly.
-
On your question of ‘Do you have an account of why MIRI’s supposed impossibility results (I think these exist?) are false?’: Given how re-tellings in the blogosphere work to distort information into more extreme viewpoints, I am not surprised you believe these impossibility results of MIRI exist, but MIRI does not have any actual mathematically proven impossibility results about corrigibility. The corrigibility paper proves that one approach did not work, but does not prove anything for other approaches. What they have is that 2022 Yudkowsky is on record expressing strongly held beliefs that corrigibility is very very hard, and (if I recall correctly) even saying that nobody has made any progress on it in the last ten years. Not everybody on this site shares these beliefs. If you formalise corrigibility in a certain way, by formalising it as producing full 100% safety, no 99.999% allowed, it is trivial to prove that a corrigible AI formalised that way can never provably exist, because the humans who will have to build, train, and prove it are fallible. Roman Yampolskiy has done some writing about this, but I do not believe that this kind of reasoning is at the core of Yudkowsky’s arguments for pessimism.
-
On being misleadingly optimistic in my statement that the technical problems are mostly solved: as long as we do not have an actual AGI in real life, we can only ever speculate about how difficult it will be to make it corrigible in real life. This speculation can then lead to optimistic or pessimistic conclusions. Late-stage Yudkowsky is of course well-known for speculating that everybody who shows some optimism about alignment is wrong and even dangerous, but I stand by my optimism. Partly this is because I am optimistic about future competent regulation of AGI-level AI by humans successfully banning certain dangerous AGI architectures outright, much more optimistic than Yudkowsky is.
-
I do not think I fully support my 2019 statement anymore that ‘Part of this conclusion [of Soares et al. failing to solve corrigibility] is due to the use of a Platonic agent model’. Nowadays, I would say that Soares et al did not succeed in its aim because it used a conditional probability to calculate what should have been calculated by a Pearl counterfactual. The Platonic model did not figure strongly into it.
-
OK, below I will provide links to a few mathematically precise papers about AGI corrigibility solutions, with some comments. I do not have enough time to write short comments, so I wrote longer ones.
The list of links below is not a complete literature overview. I did a comprehensive literature search on corrigibility back in 2019, trying to find all mathematical papers of interest, but have not done so since.
I wrote some of the papers below, and have read all the rest of them. I am not linking to any papers I heard about but did not read (yet).
Math-based work on corrigibility solutions typically starts with formalizing corrigibility, or a sub-component of corrigibility, as a mathematical property we want an agent to have. It then constructs such an agent with enough detail to show that this property is indeed correctly there, or at least there during some part of the agent lifetime, or there under some boundary assumptions.
Not all of the papers below have actual mathematical proofs in them; some of them show correctness by construction. Correctness by construction is superior to needing proofs: if you have correctness by construction, your notation will usually be much more revealing about what is really going on than if you need proofs.
Here is the list, with the bold headings describing different approaches to corrigibility.
Indifference to being switched off, or to reward function updates
Motivated Value Selection for Artificial Agents introduces Armstrong’s indifference methods for creating corrigibility. It has some proofs, but does not completely work out the math of the solution to a this-is-how-to-implement-it level.
Corrigibility tried to work out the how-to-implement-it details of the paper above but famously failed to do so, and has proofs showing that it failed to do so. This paper somehow launched the myth that corrigibility is super-hard.
AGI Agent Safety by Iteratively Improving the Utility Function does work out all the how-to-implement-it details of Armstrong’s indifference methods, with proofs. It also goes into the epistemology of the connection between correctness proofs in models and safety claims for real-world implementations.
Counterfactual Planning in AGI Systems introduces a different and easier-to-interpret way of constructing a corrigible agent, an agent that happens to be equivalent to agents that can be constructed with Armstrong’s indifference methods. This paper has proof-by-construction type of math.
Corrigibility with Utility Preservation has a bunch of proofs about agents capable of more self-modification than those in Counterfactual Planning. As the author, I do not recommend you read this paper first, or maybe even at all. Read Counterfactual Planning first.
Safely Interruptible Agents has yet another take on, or re-interpretation of, Armstrong’s indifference methods. Its title and presentation somewhat de-emphasize the fact that it is about corrigibility, by never even discussing the construction of the interruption mechanism. The paper is also less clearly about AGI-level corrigibility.
How RL Agents Behave When Their Actions Are Modified is another contribution in this space. Again this is less clearly about AGI.
Agents that stop to ask a supervisor when unsure
A completely different approach to corrigibility, based on a somewhat different definition of what it means to be corrigible, is to construct an agent that automatically stops and asks a supervisor for instructions when it encounters a situation or decision it is unsure about. Such a design would be corrigible by construction, for certain values of corrigibility. The last two papers above can be interpreted as disclosing ML designs that are also applicable in the context of this stop-when-unsure idea.
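As a toy sketch of this stop-when-unsure pattern (my own illustration; the uncertainty measure, the threshold, and the supervisor interface below are hypothetical, not taken from the papers in this list):

```python
# A stop-when-unsure wrapper: the agent defers to a supervisor whenever its
# uncertainty about the best action exceeds a threshold.
from dataclasses import dataclass

@dataclass
class ActionProposal:
    action: str
    estimated_value: float
    uncertainty: float  # e.g. variance across an ensemble of value estimates

def choose_action(proposals, uncertainty_threshold, ask_supervisor):
    """Pick the highest-value proposal, but stop and ask when too unsure."""
    best = max(proposals, key=lambda p: p.estimated_value)
    if best.uncertainty > uncertainty_threshold:
        # Instead of acting on a decision it is unsure about, the agent asks.
        return ask_supervisor(proposals)
    return best.action

# Example usage with a stand-in supervisor.
proposals = [
    ActionProposal("fetch coffee", estimated_value=1.0, uncertainty=0.05),
    ActionProposal("rewire the kitchen", estimated_value=3.0, uncertainty=0.9),
]
print(choose_action(proposals, uncertainty_threshold=0.5,
                    ask_supervisor=lambda ps: "wait for instructions"))
```

Whether such a wrapper actually delivers corrigibility of course depends on how good the uncertainty measure is.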
Asymptotically unambitious artificial general intelligence is a paper that derives some probabilistic bounds on what can go wrong regardless, bounds on the case where the stop-and-ask-the-supervisor mechanism does not trigger. This paper is more clearly about the AGI case, presenting a very general definition of ML.
Anything about model-based reinforcement learning
I have yet to write a paper that emphasizes this point, but most model-based reinforcement learning algorithms produce a corrigible agent, in the sense that they approximate the ITC counterfactual planner from the counterfactual planning paper above.
Now, consider a definition of corrigibility where incompetent agents (or less inner-aligned agents, to use a term often used here) are less corrigible because they may end up damaging themselves, their stop buttons, or their operator by being incompetent. In this case, every convergence-to-optimal-policy proof for a model-based RL algorithm can be read as a proof that its agent will be increasingly corrigible under learning.
CIRL
Cooperative Inverse Reinforcement Learning and The Off-Switch Game present yet another corrigibility method with enough math to see how you might implement it. This is the method that Stuart Russell reviews in Human Compatible. CIRL has a drawback, in that the agent becomes less corrigible as it learns more, so CIRL is not generally considered to be a full AGI-level corrigibility solution, not even by the original authors of the papers. The CIRL drawback can be fixed in various ways, for example by not letting the agent learn too much. But curiously, there is very little followup work from the authors of the above papers, or from anybody else I know of, that explores this kind of thing.
Commanding the agent to be corrigible
If you have an infinitely competent superintelligence that you can give verbal commands to that it will absolutely obey, then giving it the command to turn itself into a corrigible agent will trivially produce a corrigible agent by construction.
Giving the same command to a not infinitely competent and obedient agent may give you a huge number of problems instead, of course. This has sparked endless non-mathematical speculation, but I cannot think of a mathematical paper about this that I would recommend.
AIs that are corrigible because they are not agents
Plenty of work on this. One notable analysis of extending this idea to AGI-level prediction, and considering how it might produce non-corrigibility anyway, is the work on counterfactual oracles. If you want to see a mathematically unambiguous presentation of this, with some further references, look for the section on counterfactual oracles in the Counterfactual Planning paper above.
Myopia
Myopia can also be considered to be a feature that creates or improves corrigibility. Many real-world non-AGI agents and predictive systems are myopic by construction: either myopic in time, in space, or in other ways. Again, if you want to see this type of myopia by construction in a mathematically well-defined way when applied to AGI-level ML, you can look at the Counterfactual Planning paper.
Hi Akash! Thanks for the quick clarifications, these make the contest look less weird and more useful than just a 500 word essay contest.
My feedback here is that I definitely got the 500 word essay contest vibe when I read the ‘how it works’ list on the contest home page, and this vibe only got reinforced when I clicked on the official rules link and skimmed the document there. I recommend that you edit the ‘how it works’ list on the home page to make it much more explicit that the essay submission is often only the first step of participating, a step that will lead to direct feedback, and to clarify that you expect that most of the prize money will go to participants who have produced significant research beyond the initial essay. If that is indeed how you want to run things.
On judging: OK I’ll e-mail you.
I have to think more about your question about posting a writeup on this site about what I think are the strongest proposals for corrigibility. My earlier overview writeup that explored the different ways how people define corrigibility took me a lot of time to write, so there is an opportunity cost I am concerned about. I am more of an academic paper writing type of alignment researcher than a blogging all of my opinions on everything type of alignment researcher.
On the strongest policy proposal towards alignment and corrigibility, not technical proposal: if I limit myself to the West (I have not looked deeply into China, for example) then I consider the EU AI Act initiative by the EU to be the current strongest policy proposal around. It is not the best proposal possible, and there are a lot of concerns about it, but if I have to estimate expected positive impact among different proposals and initiatives, this is the strongest one.
Related to this, from the blog post What does Meta AI’s Diplomacy-winning Cicero Mean for AI?:
The same day that Cicero was announced, there was a friendly debate at the AACL conference on the topic “Is there more to NLP [natural language processing] than Deep Learning,” with four distinguished researchers trained some decades ago arguing the affirmative and four brilliant young researchers more recently trained arguing the negative. Cicero is perhaps a reminder that there is indeed a lot more to natural language processing than deep learning.
I am originally a CS researcher trained several decades ago, actually in the middle of an AI winter. That might explain our different viewpoints here. I also have a background in industrial research and applied AI, which has given me a lot of insight into the vast array of problems that academic research refuses to solve for you. More long-form thoughts about this are in my Demanding and Designing Aligned Cognitive Architectures.
From where I am standing, the scaling hype is wasting a lot of the minds of the younger generation, wasting their minds on the problem of improving ML benchmark scores under the unrealistic assumption that ML will have infinite clean training data. This situation does not fill me with as much existential dread as it does some other people on this forum, but anyway.
Related to our discussion earlier, I see that Marcus and Davis just published a blog post: What does Meta AI’s Diplomacy-winning Cicero Mean for AI?. In it, they argue, as you and I both would expect, that Cicero is a neurosymbolic system, and that its design achieves its results by several clever things beyond using more compute and more data alone. I expect you would disagree with their analysis.
Thanks for the very detailed description of your view on GAN history and sociology—very interesting.
You focus on the history of benchmark progress after deep-learning-based GANs were introduced as a new method for driving that progress. The point I was trying to make is about a different moment in history: I am perceiving that the original introduction of deep-learning-based GANs was a clear discontinuity.
First, GANs may not be new.
If you search wide enough for similar things, then no idea that works is really new. Neural nets were also not new when the deep learning revolution started.
I think your main thesis here is that academic researcher creativity and cleverness, their ability to come up with unexpected architecture improvements, has nothing to do with driving the pace of AI progress forward:
This parallels other field-survey replication efforts like in embedding research: results get better over time, which researchers claim reflect the sophistication of their architectures… and the gains disappear when you control for compute/n/param.
Sorry, but you cannot use a simple control-for-compute/n/param statistics approach to determine the truth of any hypothesis of how clever researchers really were in coming up with innovations to keep an observed scaling curve going. For all you know, these curves are what they are because everybody has been deeply clever at the architecture evolution/revolution level, or at the hyperparameter tuning level. But maybe I am mainly skeptical of your statistical conclusions here because you are leaving things out of the short description of the statistical analysis you refer to. So if you can give me a pointer to a more detailed statistical writeup, one that tries to control for cleverness too, please do.
That being said, like you I perceive, in a more anecdotal form, that true architectural innovation is absent from a lot of academic ML work, or at least the academic ML work appearing in the so-called ‘top’ AI conferences that this forum often talks about. I mostly attribute that to such academic ML only focusing on a very limited set of big data / Bitter Lesson inspired benchmarks, benchmarks which are not all that relevant to many types of AI improvements one would like to see in the real world. In industry, where one often needs to solve real-world problems beyond those which are fashionable in academia, I have seen a lot more creativity in architectural innovations than in the typical ML benchmark improvement paper. I see a lot of that industry-type creativity in the Cicero paper too.
You mention that your compute-and-data-is-all-that-drives-progress opinion has been informed by looking at things like GANs for image generation and embedding research.
The progress in these sub-fields differs from the type of AI technology progress that I would like to see much more of, as an AI safety and alignment researcher. This also implies that I have a different opinion on what drives, or should drive, AI technology progress.
One benchmark that interests me is an AI out-of-distribution robustness benchmark where the model training happens on sample data drawn from a first distribution, and the model evaluation happens on sample data drawn from a different second distribution, only connected to the first by having the two processes that generate them share some deeper patterns like the laws of physics, or broad parameters of human morality.
This kind of out-of-distribution robustness problem is one of the themes of Marcus too, for the physics part at least. One of the key arguments for the hybrid/neurosymbolic approach is that you will need to (symbolically) encode some priors about these deeper patterns into the AI, if you ever want it to perform well on such out-of-distribution benchmarks.
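To make the benchmark idea above concrete, here is a minimal sketch (illustrative only; the falling-object law, the model choices, and the numbers are assumptions of this sketch, not taken from any existing benchmark):

```python
# Train on one distribution, evaluate on another; both are generated by the
# same underlying 'deeper pattern', here a simple law of physics.
import numpy as np

rng = np.random.default_rng(0)

def falling_object(t):
    """Shared deeper pattern: distance fallen under gravity, no air resistance."""
    return 0.5 * 9.81 * t**2

# Training distribution: times between 0 and 1 second.
t_train = rng.uniform(0.0, 1.0, 200)
y_train = falling_object(t_train) + rng.normal(0.0, 0.05, t_train.shape)

# Evaluation distribution: times between 2 and 3 seconds.
t_test = rng.uniform(2.0, 3.0, 200)
y_test = falling_object(t_test)

def rmse(pred, target):
    return float(np.sqrt(np.mean((pred - target) ** 2)))

# Model without the right prior: a generic degree-9 polynomial fit.
generic_fit = np.polynomial.Polynomial.fit(t_train, y_train, deg=9)

# Model with a symbolically encoded prior: assume y = c * t**2 and only fit c.
c = np.sum(y_train * t_train**2) / np.sum(t_train**4)  # least-squares estimate of c

print("out-of-distribution RMSE, generic model:   ", rmse(generic_fit(t_test), y_test))
print("out-of-distribution RMSE, model with prior:", rmse(c * t_test**2, y_test))
```

The generic model does fine on the training range but typically falls apart on the shifted evaluation range, while the model that encodes the right prior extrapolates well; that gap is exactly what such a benchmark is meant to measure.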
Another argument for the neurosymbolic approach is that you often simply do not have enough training data to get your model robust enough if you start from a null prior, so you will need to compensate for this by adding some priors. Having deeply polluted training data also means you will need to add priors, or do lots of other tricks, to get the model you really want. There is an intriguing possibility that DNN based transfer learning might contribute to the type of benchmarks I am interested in. This branch of research is usually framed in a way where people do not picture the second small training data set being used in the transfer learning run as a prior, but on a deeper level it is definitely a prior.
You have been arguing that scaling alone is all we need to drive AI progress, and that there is no room for the neuro+symbolic+scaling approach. This argument rests on a hidden assumption that many academic AI researchers also like to make: the assumption that, for all AI application domains you are interested in, you will never run out of clean training data.
Doing academic AI research under the assumption that you always have infinite clean training data would be fine if such research were confined to one small and humble sub-branch of academic AI. The problem is that the actual branch of AI making this assumption is far from small and humble. It in fact claims, via writings like the Bitter Lesson, to be the sum total of what respectable academic AI research should be all about. It is also the sub-branch that gets almost all the hype and the press.
The assumption of infinite clean training data is of course true for games that can be learned by self-play. It is less true for many other things that we would like AI to be better at. The ‘top’ academic ML conferences are slowly waking up to this, but much too slowly as far as I am concerned.
As one of the few AI safety researchers who has done a lot of work on corrigibility, I have mixed feelings about this.
First, it is great to see an effort that tries to draw more people to working on corrigibility, because almost nobody is working on it. There are definitely parts of the solution space that could be explored much further.
What I also like is that you invite essays about the problem of making progress, instead of the problem of making more people aware that there is a problem.
However, the underlying idea that meaningful progress is possible by inviting people to work on a 500 word essay, which will then first be judged by ‘approximately 10 Judges who are undergraduate and graduate students’, seems to be a bit strange. I can fully understand Sam Bowman’s comment that this might all look very weird to ML people. What you have here is an essay contest. Calling it a research contest may offend some people who are actual card-carrying researchers.
Also, the more experienced judges you have represent somewhat of an insular sub-community of AI safety researchers. Specifically, I associate both Nate and John with the viewpoint that alignment can only be solved by nothing less than an entire scientific revolution. This is by now a minority opinion inside the AI safety community, and it makes me wonder what will happen to submissions that make less radical proposals which do not buy into this viewpoint.
OK, I can actually help you with the problem of an unbalanced judging panel: I volunteer to join it. If you are interested, please let me know.
Corrigibility is both
-
a technical problem: inventing methods to make AI more corrigible
-
a policy problem: forcing people deploying AI to use those methods, even if this will hurt their bottom line, even if these people are careless fools, and even if they have weird ideologies.
Of these two problems, I consider the technical problem to be mostly solved by now, even for AGI.
The big open problem in corrigibility is the policy one. So I’d like to see contest essays that engage with the policy problem.

To be more specific about the technical problem being mostly solved: there are a bunch of papers outlining corrigibility methods that are backed up by actual mathematical correctness proofs, rather than speculation or gut feelings. Of course, in the AI safety activism blogosphere, almost nobody wants to read or talk about these methods in the papers with the proofs; instead everybody bikesheds the proposals which have been stated in natural language and which have been backed up only by speculation and gut feelings. This is just how a blogosphere works, but it does unfortunately add more fuel to the meme that the technical side of corrigibility is mostly unsolved and that nobody has any clue.
-
Thanks, that does a lot to clarify your viewpoints. Your reply calls for some further remarks.
I’ll start off by saying that I value your technology tracking writing highly because you are one of those blogging technology trackers who is able to look beyond the press releases and beyond the hype. But I have the same high opinion of the writings of Gary Marcus.
This seems to be what you are doing here: you handwave away the use of BART and extremely CPU/GPU-intensive search as not a victory for scaling
For the record: I am not trying to handwave the progress-via-hybrid-approaches hypothesis of Marcus into correctness. The observations I am making here are much more in the ‘explains everything while predicting nothing’ department.
I am observing that both your progress-via-scaling hypothesis and the progress-via-hybrid-approaches hypothesis of Marcus can be made to explain the underlying Cicero facts here. I do not see this case as a clear victory for either one of these hypotheses. What we have here is an AI design that cleverly combines multiple components while also being impressive in the scaling department.
Technology tracking is difficult, especially about the future.
The following observation may get to the core of how I may be perceiving the elephant differently. I interpret an innovation like GANs not as a triumph of scaling, but as a triumph of cleverly putting two components together. I see GANs as an innovation that directly contradicts the message of the Bitter Lesson paradigm, one that is much more in the spirit of what Marcus proposes.
Here is what I find particularly interesting in Marcus. In pieces like The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence, Marcus is advancing the hypothesis that the academic Bitter-Lesson AI field is in a technology overhang: these people could make a lot of progress on their benchmarks very quickly, faster than mere neural net scaling will allow, if they were to ignore the Bitter Lesson paradigm and embrace a hybrid approach where the toolbox is much bigger than general-purpose learning, ever-larger training sets, and more and more compute. Sounds somewhat plausible to me.
If you put a medium or high probability on this overhang hypothesis of Marcus, then you are in a world where very rapid AI progress might happen, levels of AI progress much faster than those predicted by the progress curves produced by Bitter Lesson AI research.
You seem to be advancing an alternative hypothesis, one where advances made by clever hybrid approaches will always be replicated a few years later by using a Bitter Lesson style monolithic deep neural net trained with a massive dataset. This would conveniently restore the validity of extrapolating Bitter Lesson driven progress curves, because you can use them as an upper bound. We’ll see.
I am currently not primarily in the business of technology tracking, I am an AI safety researcher working on safety solutions and regulation. With that hat on, I will say the following.
Bitter-lesson style systems consisting of a single deep neural net, especially if these systems are also model-free RL agents, have huge disadvantages in the robustness, testability, and interpretability departments. These disadvantages are endlessly talked about on this web site of course. By contrast, systems built out of separate components with legible interfaces between them are usually much more robust, interpretable and testable. This is much less often mentioned here.
In safety engineering for any high-risk application, I would usually prefer to work with an AI system built out of many legible sub-components, not with some deep neural net that happens to perform equally well or better on an in-training-distribution benchmark. So I would like to see more academic AI research that ignores the Bitter Lesson paradigm, and the paradigm that all AI research must be ML research. I am pleased to say that a lot of academic and applied AI researchers, at least in the part of the world where I live, never got on board with these paradigms in the first place. To find their work, you have to look beyond conferences like NeurIPS.
This is not particularly unexpected if you believed in the scaling hypothesis.
Cicero is not particularly unexpected to me, but my expectations here are not driven by the scaling hypothesis. The result achieved here was not achieved by adding more layers to a single AI engine, it was achieved by human designers who assembled several specialised AI engines by hand.
So I do not view this result as one that adds particularly strong evidence to the scaling hypothesis. I could equally well make the case that it adds more evidence to the alternative hypothesis, put forward by people like Gary Marcus, that scaling alone as the sole technique has run out of steam, and that the prevailing ML research paradigm needs to shift to a more hybrid approach of combining models. (The prevailing applied AI paradigm has of course always been that you usually need to combine models.)
Another way to explain my lack of surprise would be to say that Cicero is just a super-human board game playing engine that has been equipped with a voice synthesizer. But I might be downplaying the achievement here.
this is among the worser things you could be researching [...] There are… uh, not many realistic, beneficial applications for this work.
I have not read any of the authors’ or Meta’s messaging around this, so I am not sure if they make that point, but the sub-components of Cicero that somewhat competently and ‘honestly’ explain its currently intended moves seem to have beneficial applications too, if they were combined with an engine different from a game engine that absolutely wants to win and that can change its mind about moves to play later. This is a dual-use technology with both good and bad possible uses.
That being said, I agree that this is yet another regulatory wake-up call, if we would need one. As a group, AI researchers will not conveniently regulate themselves: they will move forward in creating more advanced dual-use technology, while openly acknowledging (see annex A.3 of the paper) that this technology might be used for both good and bad purposes downstream. So it is up to the rest of the world to make sure that these downstream uses are regulated.
Thanks for reading my paper! For the record I agree with some but not all points in your summary.
My later paper ‘AGI Agent Safety by Iteratively Improving the Utility Function’ also uses the simulation environment with the > and < actions and I believe it explains the nature of the simulation a bit better by interpreting the setup more explicitly as a two-player game. By the way the > and < are supposed to be symbols representing arrows → and ← for ‘push # to later in time’ and ‘pull # earlier in time’.
No, the design of the gc agent is not motivated by the need to create an incentive to preserve the shutdown button itself, as required by desideratum 4 from Soares et al. Instead it is motivated by the desire to create an incentive to preserve the agent’s actuators, which it will need to perform any physical actions incentivised by the shutdown reward function RS -- I introduce this as a new desideratum 6.
A discussion about shaping incentives or non-incentives to preserve the button (as a sensor) is in section 7.3, where I basically propose to enhance the indifference effects produced by the reward function by setting up the physical environment around the button in a certain way:
For the record, adding gc to the agent design creates no incentive to press the shutdown button: if it did, this would be visible as > actions in the simulation of the third line of figure 10, and also the proof in section 9 would not have been possible.