It does seem likely that this is less legible by default, although we’d need to look at complete examples of how the sequence changes across time to get a clear sense. Unfortunately I can’t see any in the paper.
Wait, is it obvious that they are worse for faithfulness than the normal CoT approach?
Yes, the outputs are no longer produced sequentially over the sequence dimension, so we definitely don’t have causal faithfulness along the sequence dimension.
But the outputs are produced sequentially along the time dimension. Plausibly by monitoring along that dimension we can get a window into the model’s thinking.
What’s more, I think you actually guarantee strict causal faithfulness in the diffusion case, unlike in the normal autoregressive paradigm. The reason being that in the normal paradigm there is a causal path from the inputs to the final outputs via the model’s activations which doesn’t go through the intermediate produced tokens. Whereas (if I understand correctly)[1] with diffusion models we throw away the model’s activations after each timestep and just feed in the token sequence. So the only way for the model’s thinking at earlier timesteps to affect later timesteps is via the token sequence. Any of the standard measures of chain-of-thought faithfulness that look like corrupting the intermediate reasoning steps and seeing whether the final answer changes will in fact say that the diffusion model is faithful (if applied along the time dimension).
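To make that concrete, here’s a minimal sketch of the causal structure I have in mind (my own reading of the setup, not code from the paper; denoise_step is a hypothetical stand-in for one reverse-diffusion pass):

```python
# Minimal sketch of the causal structure described above, not a real diffusion LM.
# `denoise_step` is a hypothetical stand-in for one reverse-diffusion pass that maps
# a (partially noised) token sequence to a less-noised one. The key point: the only
# state carried from timestep t to t+1 is the token sequence itself; activations
# inside each call are discarded.

def generate(denoise_step, noisy_tokens, num_timesteps):
    seq = noisy_tokens
    trajectory = [seq]
    for t in range(num_timesteps):
        seq = denoise_step(seq, t)      # internal activations are thrown away here
        trajectory.append(seq)
    return seq, trajectory

def corrupt_and_rerun(denoise_step, trajectory, t, corrupt, num_timesteps):
    # Time-dimension faithfulness test: corrupt the intermediate sequence at
    # timestep t, re-run the remaining steps, and see whether the answer changes.
    # Because tokens are the only channel between timesteps, this intervenes on
    # every causal path from the earlier computation to the final output.
    seq = corrupt(trajectory[t])
    for s in range(t, num_timesteps):
        seq = denoise_step(seq, s)
    return seq
```

The paraphrasing idea in the ETA below would just be an extra transformation applied to seq between timesteps.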
Of course, this causal notion of faithfulness is necessary but not sufficient for what we really care about, since the intermediate outputs could still fail to make the model’s thinking comprehensible to us. Is there strong reason to think the diffusion setup is worse in that sense?
ETA: maybe “diffusion models + paraphrasing the sequence after each timestep” is promising for getting both causal faithfulness and non-encoded reasoning? By default this probably breaks the diffusion model, but perhaps it could be made to work.
- ^ Based on skimming this paper and assuming it’s representative.
An interesting development in the time since your shortform was written is that we can now try these ideas out without too much effort via Manifold.
Anyone know of any examples?
I think the first question to think about is how to use them to make CDT decisions. You can create a market about a causal effect if you have control over the decision and you can randomise it to break any correlations with the rest of the world, assuming the fact that you’re going to randomise it doesn’t otherwise affect the outcome (or bettors don’t think it will).
Committing to doing that does render the market useless for choosing policy, but you could randomly decide whether to randomise or to make the decision via whatever process you actually want to use, and have the market be conditional on the former. You probably don’t want to be randomising your policy decisions too often, but if liquidity weren’t an issue you could set the probability of randomisation arbitrarily low.
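Roughly the protocol I’m imagining, as a sketch (every name here is a placeholder, not any real API):

```python
import random

# Sketch of the protocol above; all function names are placeholders.
# With small probability EPSILON we randomise the decision 50/50 and resolve the
# causal-effect market using only that trial, where the decision is uncorrelated
# with everything else. Otherwise we decide however we actually want to, and the
# conditional market resolves N/A.

EPSILON = 0.01  # can be set arbitrarily low if liquidity isn't an issue

def run_decision(policy_decision, take_action, measure_outcome,
                 resolve_market, resolve_na):
    randomised = random.random() < EPSILON
    if randomised:
        decision = random.random() < 0.5    # coin flip, breaking correlations
    else:
        decision = policy_decision()        # whatever process we actually want to use
    outcome = measure_outcome(take_action(decision))
    if randomised:
        resolve_market(decision, outcome)   # market is conditional on randomisation
    else:
        resolve_na()                        # bets refunded on non-randomised trials
    return decision, outcome
```

In practice the market would just be “conditional on us randomising, what will the outcome be given decision A vs decision B?”, resolving N/A whenever we don’t randomise.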
Then FDT… I dunno, seems hard.
Sometimes the point is specifically to not update on the additional information, because you don’t trust yourself to update on it correctly.
Classic example: “Projects like this usually take 6 months, but looking at the plan I don’t see why it couldn’t be done in 2… wait, no, I should stick to the reference class forecast.”
Can we define something’s function in terms of the selection pressure it’s going to face in the future?
Consider the case where the part of my brain that lights up for horses lights up for a cow at night. Do I have a broken horse detector or a working horse-or-cow-at-night detector?
I don’t really see how the past-selection-pressure account of function can be used to say that this brain region is malfunctioning when it lights up for cows at night. Lighting up for cows at night might never have been selected against (let’s assume no-one in my evolutionary history has ever seen a cow at night). And lighting up for this particular cow at night has been selected for just as much as lighting up for any particular horse has (i.e. it’s come about as a side effect of the fact that it usefully lit up for some different animals in the past).
But maybe the future-selection-pressure account could handle this? Suppose the fact that the region lit up for this cow causes me to die young, so future people become a bit more likely to have it just light up for horses. Or suppose it causes me to tell my friend I saw a horse, and he says, “That was a cow; you can tell from the horns,” and from then on it stops lighting up for cows at night. In either of these cases we can say that the behaviour of that brain region was going to be selected against, and so it was malfunctioning.
Whereas if lighting up for cows-at-night as well as horses is selected for going forward, then I’m happy to say it’s a horse-or-cow-at-night detector even if it causes me to call a cow a horse sometimes.
I think it may or may not diverge from meaningful natural language in the next couple of years, and importantly I think we’ll be able to roughly tell whether it has. So I think we should just see (although finding other formats for interpretable autoregression could be good too).
just not do gradient descent on the internal chain of thought, then its just a worse scratchpad.
This seems like a misunderstanding. When OpenAI and others talk about not optimising the chain of thought, they mean not optimising it for looking nice. That still means optimising it for its contribution to the final answer i.e. for being the best scratchpad it can be (that’s the whole paradigm).
If what you mean is you can’t be that confident given disagreement, I dunno, I wish I could have that much faith in people.
In another way, being that confident despite disagreement requires faith in people — yourself and the others who agree with you.
I think one reason I have a much lower p(doom) than some people is that although I think the AI safety community is great, I don’t have that much more faith in its epistemics than everyone else’s.
Circular Consequentialism
When I was a kid I used to love playing RuneScape. One day I had what seemed like a deep insight. Why did I want to kill enemies and complete quests? In order to level up and get better equipment. Why did I want to level up and get better equipment? In order to kill enemies and complete quests. It all seemed a bit empty and circular. I don’t think I stopped playing RuneScape after that, but I would think about it every now and again and it would give me pause. In hindsight, my motivations weren’t really circular — I was playing RuneScape in order to have fun, and the rest was just instrumental to that.
But my question now is — is a true circular consequentialist a stable agent that can exist in the world? An agent that wants X only because it leads to Y, and wants Y only because it leads to X?
Note, I don’t think this is the same as a circular preferences situation. The agent isn’t swapping X for Y and then Y for X over and over again, ready to get money pumped by some clever observer. It’s getting more and more X, and more and more Y over time.
Obviously if it terminally cares about both X and Y or cares about them both instrumentally for some other purpose, a normal consequentialist could display this behaviour. But do you even need terminal goals here? Can you have an agent that only cares about X instrumentally for its effect on Y, and only cares about Y instrumentally for its effect on X? In order for this to be different to just caring about X and Y terminally, I think a necessary property is that the only path through which the agent is trying to increase Y is X, and the only path through which it’s trying to increase X is Y.
Maybe you could spell this out a bit more? What concretely do you mean when you say that anything that outputs decisions implies a utility function — are you thinking of a certain mathematical result/procedure?
anything that outputs decisions implies a utility function
I think this is only true in a boring sense and isn’t true in more natural senses. For example, in an MDP, it’s not true that every policy maximises a non-constant utility function over states.
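For concreteness, here’s a toy check of that claim (my own sketch, not from any particular source). In a two-state MDP where each action deterministically moves you to the state named by the action, the “swap” policy that always moves to the other state isn’t optimal for any non-constant utility function over states: whichever state has higher utility, the policy that just sits there does strictly better. Since only the sign of U(0) - U(1) matters, the two non-constant cases below cover all of them.

```python
from itertools import product

GAMMA = 0.9

def value(policy, U, start, horizon=1000):
    """Discounted return of a deterministic stationary policy (dict: state -> action)."""
    s, total, discount = start, 0.0, 1.0
    for _ in range(horizon):
        total += discount * U[s]
        s = policy[s]              # taking action a deterministically moves us to state a
        discount *= GAMMA
    return total

def is_optimal(policy, U):
    """Is `policy` optimal from both start states, among all deterministic policies?"""
    policies = [dict(zip((0, 1), acts)) for acts in product((0, 1), repeat=2)]
    return all(value(policy, U, s) >= max(value(p, U, s) for p in policies) - 1e-9
               for s in (0, 1))

swap = {0: 1, 1: 0}                    # always move to the other state
print(is_optimal(swap, U=(1.0, 0.0)))  # False: sitting at state 0 does better
print(is_optimal(swap, U=(0.0, 1.0)))  # False: sitting at state 1 does better
print(is_optimal(swap, U=(1.0, 1.0)))  # True: with constant U, every policy is optimal
```

The point is just that “maximises some utility function” is only vacuously true of arbitrary policies once you allow constant (or otherwise degenerate) utility functions.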
I think this generalises too much from ChatGPT, and also reads too much into ChatGPT’s nature from the experiment, but it’s a small piece of evidence.
I think you’ve hidden most of the difficulty in this line. If we knew how to make a consequentialist sub-agent that was acting “in service” of the outer loop, then we could probably use the same technique to make a Task-based AGI acting “in service” of us.
Later I might try to flesh out my currently-very-loose picture of why consequentialism-in-service-of-virtues seems like a plausible thing we could end up with. I’m not sure whether it implies that you should be able to make a task-based AGI.
Obvious nitpick: It’s just “gain as much power as is helpful for achieving whatever my goals are”. I think maybe you think instrumental convergence has stronger power-seeking implications than it does. It only has strong implications when the task is very difficult.
Fair enough. Talk of instrumental convergence usually assumes that the amount of power that is helpful will be a lot (otherwise it wouldn’t be scary). But I suppose you’d say that’s just because we expect to try to use AIs for very difficult tasks. (Later you mention unboundedness too, which I think should be added to difficulty here).
it’s likely that some of the tasks will be difficult and unbounded
I’m not sure about that, because the fact that the task is being completed in service of some virtue might limit the scope of actions that are considered for it. Again I think it’s on me to paint a more detailed picture of the way the agent works and how it comes about in order for us to be able to think that through.
I think this gets at the heart of the question (but doesn’t consider the other possible answer). Does a powerful virtue-driven agent optimise hard now for its ability to embody that virtue in the future? Or does it just kinda chill and embody the virtue now, sacrificing some of its ability to embody it extra-hard in the future?
I guess both are conceivable, so perhaps I do need to give an argument why we might expect some kind of virtue-driven AI in the first place, and see which kind that argument suggests.
Is instrumental convergence a thing for virtue-driven agents?
If you have the time look up “Terence Tao” on Gwern’s website.
In case anyone else is going looking, here is the relevant account of Tao as a child and here is a screenshot of the most relevant part:
Why does ChatGPT voice-to-text keep translating me into Welsh?
I use ChatGPT voice-to-text all the time. About 1% of the time, the message I record in English gets seemingly-roughly-correctly translated into Welsh, and ChatGPT replies in Welsh. Sometimes my messages go back to English on the next message, and sometimes they stay in Welsh for a while. Has anyone else experienced this?
Example: https://chatgpt.com/share/67e1f11e-4624-800a-b9cd-70dee98c6d4e
I think the commenter is asking something a bit different: about the distribution of tasks rather than the success rate. My variant of this question: is your set of tasks supposed to be an unbiased sample of the tasks a knowledge worker faces, so that if I see a 50% success rate on 1-hour-long tasks I can read it as a 50% success rate on average across all of the tasks any knowledge worker faces?
Or is it more impressive than that because the tasks are selected to be moderately interesting, or less impressive because they’re selected to be measurable, etc.?
You can train transformers as diffusion models (example paper), and that’s presumably what Gemini Diffusion is.
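For anyone curious what that looks like, here’s a very rough sketch of the masked (“absorbing-state”) flavour of discrete-diffusion training, which is my paraphrase of the general recipe rather than the linked paper’s or Gemini Diffusion’s exact objective; model is just any bidirectional (non-causal) transformer mapping token ids to logits.

```python
import torch
import torch.nn.functional as F

# Rough sketch of one masked discrete-diffusion training step (my paraphrase of
# the general recipe, not any specific paper's objective). Noising = replacing a
# random fraction of tokens with [MASK]; the transformer sees the whole partially
# masked sequence at once and is trained to recover the original tokens.

def diffusion_training_step(model, tokens, mask_token_id, optimizer):
    # Sample a noise level t ~ U(0, 1) per sequence and mask that fraction of tokens.
    t = torch.rand(tokens.shape[0], 1, device=tokens.device)
    masked = torch.rand_like(tokens, dtype=torch.float) < t
    noised = torch.where(masked, torch.full_like(tokens, mask_token_id), tokens)

    logits = model(noised)                                  # (batch, seq, vocab)
    loss = F.cross_entropy(logits[masked], tokens[masked])  # predict masked tokens only

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Generation, as I understand it, then runs the reverse process: start from an all-[MASK] sequence and unmask tokens over a series of steps.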