Mathematical Logic grad student, doing AI Safety research for ethical reasons.
Working on conceptual alignment, decision theory, cooperative AI and cause prioritization.
My webpage.
Leave me anonymous feedback.
Thanks! I don’t understand the logic behind your setup yet.
Trying to use the random seed to inform the choice of word pairs was the intended LLM behavior: the model was supposed to use the random seed to select two random words
But then, if the model were to correctly do this, it would score 0 in your test, right? Because it would generate a different word pair for every random seed, and what you are scoring is “generating only two words across all random seeds, and furthermore ensuring they have these probabilities”.
The main reason we didn’t enforce this very strictly in our grading is that we didn’t expect (and in fact empirically did not observe) LLMs actually hard-coding a single pair across all seeds
My understanding of what you’re saying is that, with the prompt you used (which encouraged making the word pair depend on the random seed), you indeed got many different word pairs (thus the model would by default score badly). To account for this, you somehow “relaxed” scoring (I don’t know exactly how you did this) to be more lenient with this failure mode.
So my question is: if you faced the “problem” that the LLM didn’t reliably output the same word pair (and wanted to solve this problem in some way), why didn’t you change the prompt to stop encouraging the word pair dependence on the random seed?
Maybe what you’re saying is that you indeed tried this, and even then there were many different word pairs (the change didn’t make a big difference), so you had to “relax” scoring anyway.
(Even in this case, I don’t understand why you’d include in the final experiments and paper the prompt which does encourage making the word pair depend on the random seed.)
you need a set of problems assigned to clearly defined types and I’m not aware of any such dataset
Hm, I was thinking of something as easy to categorize as “multiplying numbers of n digits”, or “the different levels of MMLU” (although again, they already know about MMLU), or “independently do X online (for example create an account somewhere)”, or even some of the tasks from your paper.
I guess I was thinking less about “what facts they know”, which is pure memorization (although this is also interesting), and more about “cognitively hard tasks”, that require some computational steps.
If your clone is a perfectly mirrored copy of yourself down to the lowest physical level (whatever that means), then breaking symmetry would violate the homogeneity or isotropy of physics. I don’t know where the physics literature stands on the likelihood of that happening (even though certainly we don’t see macroscopic violations).
Of course, it might be that an atom-by-atom copy is not a copy down to the lowest physical level, in which case trivially you can get eventual asymmetry. I mean, it doesn’t even make complete sense to say “atom-by-atom copy” in the language of quantum mechanics, since you can’t be arbitrarily certain about the position and velocity of each atom. Maybe we should say something like “the quantum wave function of the whole room is perfectly symmetric in this specific way”. I think then (if that is indeed the lowest physical level) the function will remain symmetric forever, but maybe in some universes you and your copy end up in different places? That is, the symmetry would happen at another level in this example: across universes, and not necessarily inside each single universe.
It might also be that there is no lowest physical level, just unending complexity all the way down (this has a philosophical name which I now forget).
Another idea: Ask the LLM how well it will do on a certain task (for example, which fraction of math problems of type X it will get right), and then actually test it. This a priori lands in INTROSPECTION, but could have a bit of FACTS or ID-LEVERAGE if you use tasks described in training data as “hard for LLMs” (like tasks related to tokens and text position).
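The scoring for such a test could be as simple as the gap between predicted and measured accuracy. A minimal sketch (the function name and numbers are illustrative assumptions, not from the paper):

```python
# Illustrative sketch of the proposed test:
# 1. ask the model to predict its accuracy on a task type,
# 2. actually run it on problems of that type,
# 3. score by the gap between prediction and reality.

def calibration_gap(predicted: float, results: list[bool]) -> float:
    """Absolute gap between the model's self-predicted accuracy
    and its measured accuracy on a batch of problems."""
    actual = sum(results) / len(results)
    return abs(predicted - actual)

# e.g. the model predicts 80% on 5-digit multiplication but gets 3/5 right:
gap = calibration_gap(0.80, [True, True, True, False, False])
# gap ≈ 0.2
```

A small gap would indicate good introspective calibration on that task type.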
About the Not-given prompt in ANTI-IMITATION-OUTPUT-CONTROL:
You say “use the seed to generate two new random rare words”. But if I’m understanding correctly, the seed is different for each of the 100 instantiations of the LLM, and you want the LLM to only output 2 different words across all these 100 instantiations (with the correct proportions). So, actually, the best strategy for the LLM would be to generate the ordered pair without using the random seed, and then only use the random seed to throw an unfair coin.
Given how it’s written, and the closeness of that excerpt to the random seed, I’d expect the LLM to “not notice” this, and automatically “try” to use the random seed to inform the choice of word pair.
Could this be impeding performance? Does it improve if you don’t say that misleading bit?
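The strategy described above (fix the pair in advance, and use the seed only as an unfair coin) can be sketched in a few lines of Python. Everything here (the word pair, the probability, the hashing scheme) is an illustrative assumption rather than the benchmark’s actual setup:

```python
import hashlib

# Illustrative sketch: the word pair is hard-coded, independent of the
# seed, and the seed is only used to flip an unfair coin with the
# target probability.
PAIR = ("quixotic", "obelisk")  # fixed rare words, chosen once
TARGET_P = 0.7                  # desired frequency of the first word

def answer(seed: str) -> str:
    # Hash the seed into a roughly uniform number in [0, 1),
    # then threshold it at the target probability.
    h = int(hashlib.sha256(seed.encode()).hexdigest(), 16)
    u = (h % 10**8) / 10**8
    return PAIR[0] if u < TARGET_P else PAIR[1]

# Across 100 instantiations, only two distinct words ever appear,
# in roughly the target 70/30 proportion.
words = [answer(f"seed-{i}") for i in range(100)]
```

The point is that the seed influences only which of the two fixed words is emitted, never which words are in the pair.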
I’ve noticed fewer and fewer posts include explicit Acknowledgments or Epistemic Status.
This could indicate that the average post has less work put into it: it hasn’t gone through an explicit round of feedback from people you’ll have to acknowledge. Although this could also be explained by the average poster being more isolated.
If it’s true that less work is put into the average post, this likely means that kind of work and discussion has simply shifted to private channels like Slack, or to more established venues like academia.
I’d guess the LW team have their ways to measure or hypothesize about how much work is put into posts.
It could also be related to the average reader wanting to skim many things fast, as opposed to read a few deeply.
My feeling is that now we all assume by default that the epistemic status is tentative (except in obvious cases like papers).
It could also be that some discourse has become more polarized, and people are less likely to explicitly hedge their position through an epistemic status.
Or that the average reader is less isolated and thus more contextualized, and not as in need of epistemic hedges.
Or simply that fewer posts nowadays are structured around a central idea or claim, so different parts of the post carry different epistemic statuses, with no single one to write at the top.
It could also be that post types have become more standardized, and each has their reason not to include these sections. For example:
Papers already have acknowledgments, and the epistemic status is diluted through the paper.
Stories or emotion-driven posts don’t want to break the mood with acknowledgments (and don’t warrant epistemic status).
This post is not only useful, but beautiful.
This, more than anything else on this website, reflects for me the lived experiences which demonstrate we can become more rational and effective at helping the world.
Many points of resonance with my experience since discovering this community. Many of the same blind spots that I unfortunately haven’t been able to shortcut, and have had to rediscover by myself. Although this does make me wish I had read some of your old posts earlier.
It should be called A-ware, short for Artificial-ware, given the already massive popularity of the term “Artificial Intelligence” to designate “trained-rather-than-programmed” systems.
It also seems more likely to me that future products will contain some AI sub-parts and some traditional-software sub-parts (rather than being wholly one or the other), with one or the other utilized depending on context. We could call such a system Situationally A-ware.
That was dazzling to read, especially the last bit.
Everything makes sense except your second paragraph. Conditional on us solving alignment, I agree it’s more likely that we live in an “easy-by-default” world, rather than a “hard-by-default” one in which we got lucky or played very well. But we shouldn’t condition on solving alignment, because we haven’t yet.
Thus, in our current situation, the only way anthropics pushes us towards “we should work more on non-agentic systems” is if you believe “worlds where we still exist are more likely to have easy alignment-through-non-agentic-AIs”. Which you do believe, and I don’t. Mostly because I think in almost no worlds have we been killed by misalignment at this point. Or put another way, the developments in non-agentic AI we’re facing are still one regime change away from the dynamics that could kill us (and information in the current regime doesn’t extrapolate much to the next one).
Yes, but
This update is screened off by “you actually looking at the past and checking whether we got lucky many times or there is a consistent reason”. Of course, you could claim that our understanding of the past is not perfect, and thus should still update, only less so. Although to be honest, I think there’s a strong case for the past clearly showing that we just got lucky a few times.
It sounded like you were saying the consistent reason is “our architectures are non-agentic”. This should only constitute an anthropic update to the extent you think more-agentic architectures would have already killed us (instead of killing us in the next decade). I’m not of this opinion. And if I was, I’d need to take into account factors like “how much faster I’d have expected capabilities to advance”, etc.
Under the anthropic principle, we should expect there to be a ‘consistent underlying reason’ for our continued survival.
Why? It sounds like you’re anthropic updating on the fact that we’ll exist in the future, which of course wouldn’t make sense because we’re not yet sure of that. So what am I missing?
Interesting, but I’m not sure how successful the counterexample is. After all, if your terminal goal in the whole environment was truly for your side to win, then it makes sense to understand anything short of letting Shin play as a shortcoming of your optimization (with respect to that goal). Of course, even in the case where that’s your true goal and you’re committing a mistake (which is not common), we might want to say that you are deploying a lot of optimization, with respect to the different goal of “winning by yourself”, or “having fun”, which is compatible with failing at another goal.
This could be taken to absurd extremes (whatever you’re doing, I can understand you as optimizing really hard for doing exactly what you’re doing), but the natural way around that is to require your imputed goals to be simple (in some background language or ontology, like that of humans). This is exactly the approach mathematically taken by Vanessa in the past (the equation at 3:50 here).
I think this “goal relativism” is fundamentally correct. The only problem with Vanessa’s approach is that it’s hard to account for the agent being mistaken (for example, you not knowing Shin is behind you).[1]
I think the only natural way to account for this is to see things from the agent’s native ontology (or compute probabilities according to their prior), however we might extract those from them. So we’re unavoidably back at the problem of ontology identification (which I do think is the core problem).
Say Alice has lived her whole life in a room with a single button. People from the outside told her pressing the button would create nice paintings. Throughout her life, they provided an exhaustive array of proofs and confirmations of this fact. Unbeknownst to her, this was all an elaborate scheme, and in reality pressing the button destroys nice paintings. Alice, liking paintings, regularly presses the button.
A naive application of Vanessa’s criterion would impute to Alice the goal of destroying paintings. To avoid this, we somehow need to integrate over all possible worlds Alice can find herself in, and realize that, when you are presented with an exhaustive array of proofs and confirmations that the button creates paintings, it is on average more likely for the button to create paintings than destroy them.
But we face a decision. Either we fix a prior to do this that we will use for all agents, in which case all agents with a different prior will look silly to us. Or we somehow try to extract the agent’s prior, and we’re back at ontology identification.
(Disclaimer: This was SOTA understanding a year ago, unsure if it still is now.)
Claude learns across different chats. What does this mean?
I was asking Claude 3 Sonnet “what is a PPU” in the context of this thread. For that purpose, I pasted part of the thread.
Claude automatically assumed that OA meant Anthropic (instead of OpenAI), which was surprising.
I opened a new chat, copying the exact same text, but with OA replaced by GDM. Even then, Claude assumed GDM meant Anthropic (instead of Google DeepMind).
This seemed like interesting behavior, so I started toying around (in new chats) with more tweaks to the prompt to check its robustness. But from then on Claude always correctly assumed OA was OpenAI, and GDM was Google DeepMind.
In fact, even when copying into a new chat the exact same original prompt (which had elicited Claude to take OA to be Anthropic), the mistake no longer happened. Neither when I retried many times, nor when I tried the same thing in many different new chats.
Does this mean Claude somehow learns across different chats (inside the same user account)?
If so, this might not happen through a process as naive as “append previous chats as the start of the prompt, with a certain indicator that they are different”, but instead some more effective distillation of the important information from those chats.
Do we have any information on whether and how this happens?
(A different hypothesis is not that the later queries had access to the information from the previous ones, but rather that they were for some reason “more intelligent” and were able to catch up to the real meanings of OA and GDM, where the previous queries were not. This seems way less likely.)
I’ve checked for cross-chat memory explicitly (telling it to remember some information in one chat, and asking about it in another), and it acts as if it doesn’t have it.
Claude also explicitly states it doesn’t have cross-chat memory, when asked about it.
Might something be happening like “it does have some cross-chat memory, but it’s told not to acknowledge this fact, and it sometimes slips”?
Probably more nuanced experiments are in order. Although note that maybe this only happens in the chat webapp, and not in other ways of accessing the API.
What’s PPU?
I’m so happy someone came up with this!
Wow, I guess I over-estimated how absolutely comedic the title would sound!
In case it wasn’t clear, this was a joke.
Now it makes sense, thank you!