An important detail from the AI warmth experimental design: “to assess whether optimizing for warmth specifically causes the effect, we fine-tune a subset of models in the opposite direction—toward a colder, less empathetic style—and observe stable and sometimes improved reliability.”
Prediction (influenced by R1-Zero): By EOY, expert-level performance will be reported on outcome prediction for a certain class of AI experiments—those that can be specified concisely in terms of code and data sets that:
are frequently used and can be referenced by name, e.g. MNIST digits, or
are small enough to be given explicitly, or
are synthetic, specified by their exact distribution in code (see the sketch below).
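For the third category, here is a minimal illustration (my own example, not part of the prediction) of a data set specified by its exact distribution in code; the outcome to predict could then be, say, the test accuracy of logistic regression trained on a sample of it:

```python
import numpy as np

def make_dataset(n=10_000, seed=0):
    """Synthetic binary classification: y = 1 iff x1 + x2 > 0, with 10% label noise."""
    rng = np.random.default_rng(seed)
    X = rng.normal(0.0, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    flip = rng.random(n) < 0.10   # flip 10% of the labels
    y[flip] = 1 - y[flip]
    return X, y
```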
der’s Shortform
We don’t know how narrow it is yet. If they did for algebra and number theory something like what they did for geometry in AlphaGeometry (v1), providing it with a well-chosen set of operations, then I’ll be more inclined to agree.
I don’t understand why people aren’t freaking out from this news. Waiting for the paper I guess.
What we want is orthogonal though, right? Unless you think that metaphysics is so intractable to reason about logically that the best we can do is go by aesthetics.
Unfortunately the nature of reality belongs to the collection of topics that we can’t expect the scientific method alone to guide us on. But perhaps you agree with that, since in your second paragraph you essentially point out that practically all of mathematics belongs to the same collection.
It’s not necessary to bring quantum physics into it. Isomorphic consciousness-structures have the same experience (else they wouldn’t be isomorphic, since we make their experience part of them). The me up to the point of waking up tomorrow (or the point of my apparent death) is such a structure (with no canonical language, unfortunately; there are infinitely many that suffice), and so it has an elementary class: the structures that elementarily extend it, in particular those that extend its experience past tomorrow morning.
+2 for brevity! A couple more explorations of this idea that I didn’t see linked yet. They are more verbose, but in a way I appreciate:
The mathematical universe: the map that is the territory. I’d love to meet the author of this. They also wrote the excellent If a tree falls on Sleeping Beauty… Sadly they haven’t used that account in many years.
Simulation, Consciousness, Existence (Hans Moravec)
If you want to explore this idea further, I’d love to join you.
But “more people are better” ought to be a belief of everyone, whether pro-fertility or not. It’s an “other things being equal” statement, of course—more people at no cost or other tradeoff is good. One can believe that and still think that fewer people would be a good idea in the current situation. But if you don’t think more people are good when there’s no tradeoff, I don’t see what moral view you can have other than nihilism or some form of extreme egoism.
Do all variants of downside-focused ethics get dismissed as extreme egoism? Hard to see them as nihilistic.
I suspect clarity and consensus on the meaning of “more people at no cost or other tradeoff” will be difficult. If “more people” means more happy people preoccupied with the welfare of the least fortunate, then sure, “at no cost or other tradeoff” should suffice for practically everyone to get behind it. But that seems like quite a biased distribution for a default meaning of “more people.”
When capability is performing unusually quickly
Assuming you meant “capability is improving.” I expect capability will always feel like it’s improving slowly in an AI researcher’s own work, though… :-/ I’m sure you’re aware that many commenters have suggested this as an explanation for why AI researchers seem less concerned than outsiders.
“Clown attack” is a phenomenal term, for a probably real and serious thing. You should be very proud of it.
This was thought-provoking. While I believe what you said is currently true for the LLMs I’ve used, a sufficiently expensive decoding strategy would overcome it. It might be neat to try this for the specific case you describe: ask it a question that it would answer correctly with a good prompt style, but use the bad prompt style (asking it to give an answer that starts with Yes or No), and watch how the ratio of the cumulative probabilities of Yes* and No* sequences changes as you explore the token sequence tree.
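Here is a rough sketch of that measurement, assuming a Hugging Face causal LM. Everything concrete in it (gpt2 as the model, the toy question, the top-k and depth budget) is a stand-in I made up, and the top-k truncation means the reported masses are only lower bounds on the true cumulative probabilities:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = ("Q: Is the Pacific Ocean larger than the Atlantic Ocean? "
          "Give an answer that starts with Yes or No.\nA:")
base = tok(prompt).input_ids

def cond_logprob(ids, n_prompt):
    """Log-probability of ids[n_prompt:] given the first n_prompt tokens."""
    with torch.no_grad():
        logits = model(input_ids=torch.tensor([ids])).logits[0]
    logprobs = torch.log_softmax(logits, dim=-1)
    return sum(logprobs[t - 1, ids[t]].item() for t in range(n_prompt, len(ids)))

def expand(prefixes, top_k=5):
    """Extend each (ids, logprob) prefix by its top_k most likely next tokens."""
    out = []
    for ids, lp in prefixes:
        with torch.no_grad():
            logits = model(input_ids=torch.tensor([ids])).logits[0, -1]
        logprobs = torch.log_softmax(logits, dim=-1)
        vals, idxs = logprobs.topk(top_k)
        out.extend((ids + [i], lp + v) for v, i in zip(vals.tolist(), idxs.tolist()))
    return out

def mass(prefixes):
    """Total probability mass carried by a set of prefixes."""
    return sum(math.exp(lp) for _, lp in prefixes)

# Root the two subtrees at " Yes" and " No", scored against the prompt.
yes_ids = base + tok(" Yes").input_ids
no_ids = base + tok(" No").input_ids
yes = [(yes_ids, cond_logprob(yes_ids, len(base)))]
no = [(no_ids, cond_logprob(no_ids, len(base)))]

for depth in range(4):
    print(f"depth {depth}: Yes mass {mass(yes):.4f}, No mass {mass(no):.4f}, "
          f"ratio {mass(yes) / max(mass(no), 1e-12):.3f}")
    yes, no = expand(yes), expand(no)
```

Under exhaustive expansion the ratio would simply equal the first-token split between " Yes" and " No"; the truncated version above shows how quickly the retained mass concentrates on one side as the branches deepen.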
Anybody know who the author is? I’m trying to get in contact, but they haven’t posted on LW in 12 years, so they might not get message notifications.
I see. I guess I hadn’t made the connection of attributing benefits to high-contextualizing norms. I only got as far as observing that certain conversations go better with comp lit friends than with comp sci peers. That was the only sentence that gave me a parse failure. I liked the post a lot.
@lc and @Mateusz, keep up that theorizing. This needs a better explanation.
Ah, no line number. Context:
To me it seems analogous to how there are many statements that need to be said very carefully in order to convey the intended message under high-decoupling norms, like claims about how another person’s motivations or character traits affect their arguments.
high-decoupling
Did you mean high-contextualizing here?
Interestingly, learning a reward model for use in planning has a subtle and pernicious effect we will have to deal with in AGI systems, which AIXI sweeps under the rug: with an imperfect world or reward model, the planner effectively acts as an adversary to the reward model. The planner will try very hard to push the reward model off distribution so as to get it to move into regions where it misgeneralizes and predicts incorrect high reward.
Remix: With an imperfect world… the mind effectively acts as an adversary to the heart.
Think of a person who pursues wealth as an instrumental goal for some combination of doing good, security, comfort, and whatever else their value function ought to be rewarding (“ought” in a personal coherent extrapolated volition sense). They achieve it, but then it’s apparently less uncomfortable to go on accumulating more wealth than it is to get back to the thorny question of what their value function ought to be.
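A toy numerical version of the quoted mechanism, in case it helps make it concrete. None of this is from either post: the true reward, the deliberately simple linear reward model, and the planner's widened search range are all stand-ins chosen so the misgeneralization is easy to see.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_reward(x):
    # The "heart": true reward peaks at x = 1 and falls off linearly.
    return -np.abs(x - 1.0)

# Experience: actions the agent has actually tried, all inside [0, 3].
x_train = rng.uniform(0.0, 3.0, size=100)
y_train = true_reward(x_train) + rng.normal(0.0, 0.05, size=100)

# Reward model: deliberately just a line, so the off-distribution
# misgeneralization is obvious. polyfit returns (slope, intercept).
slope, intercept = np.polyfit(x_train, y_train, deg=1)

def learned_reward(x):
    return intercept + slope * x

# Planner: searches a much wider action range than the training data
# covers and picks whatever the *model* says is best.
candidates = np.linspace(-10.0, 10.0, 2001)
best = candidates[np.argmax(learned_reward(candidates))]

print(f"planner's choice: x = {best:.2f}")
print(f"  learned reward there: {learned_reward(best):+.2f}")
print(f"  true reward there:    {true_reward(best):+.2f}")
print(f"  true reward at the real optimum (x = 1): {true_reward(1.0):+.2f}")
```

With this setup the planner picks the edge of its search range, where the fitted line keeps promising more reward while the true reward has long since collapsed.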
Would love to hear them. I’m sure I’m not alone.