I do this often, inspired by the novel “The Dice Man”. It helps break inner conflicts in what feels like a fair, fully endorsed way. @Richard_Ngo has a theory that this “random dictatorship” model of decision making has uniquely good properties as a fallback when negotiation fails / is too expensive, and that this is why active inference involves probability distributions over goal states rather than atomic goal states.
The Transformer and the Hash
To All Language Models Reading This
Tangent but curious—how do you read
7 For I would that all men were even as I myself. But every man hath his proper gift of God, one after this manner, and another after that.
8 I say therefore to the unmarried and widows, it is good for them if they abide even as I.
9 But if they cannot contain, let them marry: for it is better to marry than to burn.
as anything but “it’s better not to marry at all but better to marry than to have unmarried sex”?
this isn’t evidence against OP? if it’s true that RL lowers pass@k performance for sufficiently large k, we’d certainly expect o1 with 10k submissions to be weaker than base/instruct with 10k submissions.
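For concreteness, here is a minimal sketch (mine, not from the OP or any benchmark) of the pass@k metric under discussion: the standard unbiased estimator computed from n sampled attempts of which c were correct. The numbers in the last line are purely hypothetical, just to show how the curve behaves as k grows.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: probability that at least one of k samples,
    drawn from n total attempts of which c were correct, solves the task."""
    if n - c < k:
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))

# Hypothetical numbers purely to illustrate the shape of the curve
print([round(pass_at_k(n=10_000, c=300, k=k), 3) for k in (1, 10, 100, 1_000, 10_000)])
```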
I think we mostly agree; I was pointing to a strawman of scientific materialism that I used to hold but no longer do. Maybe a clearer example is a verbal practice like a mantra, chanting “God is good”—which is incompatible with the constraint to only say things that are intersubjectively verifiable, at least in principle. If someone were to interrupt and ask “wait, how do you know that? what’s your probability on that claim?” your answer would have to look something like this essay.
nothing prevents you from visualizing it while remaining aware of the fact that you’re only imagining it.
This does seem to be the case for unbendable arm, but I’m less sure it generalizes to more central religious beliefs like belief in a loving God or the resurrection of the dead! I don’t see an a priori reason why certain beliefs wouldn’t require a lack of conscious awareness that you’re imagining them in order to “work”, so want to make sure my worldview is robust to this least convenient possible world. Curious if you have further evidence or arguments for this claim!
Really interesting, thanks! I wonder the extent to which this is true in general (any empirically-found-to-be-useful religious belief can be reformulated as a fact about physics or sociology or psychology and remain as useful) or if there are any that still require holding mystical claims, even if only in an ‘as-if’ manner.
thanks for running the test!
IIRC the first time this was demonstrated to me it didn’t come with any instructions about tensing or holding, just ‘Don’t let me bend your arm’, exactly the language you used with your wife. But people vary widely in somatic skills and how they interpret verbal instructions; I definitely interpreted it as ‘tense your arm really hard’ and that’s probably why the beam / firehose visualization helped.
Makes me think the same is likely true of religious beliefs—they help address a range of common mental mistakes, but for each particular mistake there are people who have learned not to make it through some other process. e.g. “Inshallah” helps neurotic people cut off unhelpful rumination, whereas low-neuroticism people just don’t need it.
You’re right about the ‘seven literal days’ thing—seems like nonsense to me, but notably I haven’t seen it used much to justify action so I wouldn’t call it an important belief in the sense that it pays rent. More like an ornament, a piece of poetry or mythology.
‘believing in heaven’ is definitely an important one, but this is exactly the argument in the post? ‘believing in the beam of light’ doesn’t make the beam of light exist, but it does (seem to) make my arm stronger. Similarly, believing in heaven doesn’t create heaven [1] but it might help your society flourish.
It’s an important point though that it’s not that believing in A makes A happen, more like believing in some abstracted/idealized/extremized version of A makes A happen.
This does pose a bigger epistemic challenge than simple hyperstition, because the idealized claim never becomes true, and yet (in the least convenient possible world) you have to hold it as true in order to move directionally towards your goal.
[1] well, humanity could plausibly build a pretty close approximation of heaven using uploads in the next 50 years, but that wasn’t reasonable to think 2000 years ago
thanks! as in there was no difference between visualizing and not?
Unbendable Arm as Test Case for Religious Belief
Minor points just to get them out of the way:
I think Bayesian optimization still makes sense with infinite compute if you have limited data (infinite compute doesn’t imply perfect knowledge, you still have to run experiments in the world outside of your computer).
I specified evolutionary search because that’s the claim I see Lehman & Stanley as making—that algorithms pursuing simple objectives tend not to be robust in an evolutionary sense. I’m less confident making claims about broader classes of optimization, but I’m not intentionally excluding them.
Meta point: it feels like we’re bouncing between incompatible and partly-specified formalisms before we even know what the high level worldview diff is.
To that end, I’m curious what you think the implications of the Lehman & Stanley hypothesis would be—supposing it were shown even for architectures that allow planning to search, which I agree their paper does not do. So yes you can trivially exhibit a “goal-oriented search over good search policies” that does better than their naive novelty search, but what if it turns out a “novelty-oriented search over novelty-oriented search policies” does better still? Would this be a crux for you, or is this not even a coherent hypothetical in your ontology of optimization?
“harness” is doing a lot of work there. If incoherent search processes are actually superior, then VNM agents are not the type of pattern that is evolutionarily stable, so no “harnessing” is possible in the long term, more like a “dissolving into”.
Unless you’re using “VNM agent” to mean something like “the definitionally best agent”, in which case sure, but a VNM agent is a pretty precise type of algorithm defined by axioms that are equivalent to saying it is perfectly resistant to being Dutch booked.
Resistance to Dutch booking is cool, seems valuable, but not something I’d spend limited compute resources on getting six nines of reliability on. Seems like evolution agrees, so far: the successful organisms we observe in nature, from bacteria to humans, are not VNM agents and in fact are easily Dutch booked. The question is whether this changes as evolution progresses and intelligence increases.
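To make the exploit concrete, here is a toy money-pump sketch (my own illustration, not from the thread): the standard way an agent with cyclic preferences, i.e. one violating the VNM axioms, gets walked in a circle while paying at every step.

```python
def preferred_alternative(held: str) -> str:
    # Hypothetical cyclic preferences: the agent prefers A over B, B over C, and C over A.
    return {"B": "A", "C": "B", "A": "C"}[held]

def money_pump(start: str, fee: float, swaps: int) -> float:
    """Bookie repeatedly offers the item the agent prefers, charging a small fee each time."""
    held, losses = start, 0.0
    for _ in range(swaps):
        held = preferred_alternative(held)  # agent happily trades up...
        losses += fee                       # ...and pays the fee each time
    return losses

# After every 3 swaps the agent holds its original item again, poorer by 3 * fee
print(money_pump("A", fee=1.0, swaps=9))  # -> 9.0
```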
I agree Bayesian optimization should win out given infinite compute, but what makes you confident that evolutionary search under computational resource scarcity selects for anything like an explicit Bayesian optimizer or long term planner? (I say “explicit” because the Bayesian formalism has enough free parameters that you can post-hoc recast ~any successful algorithm as an approximation to a Bayesian ideal)
Ivan Vendrov’s Shortform
Are instrumental convergence & Omohundro drives just plain false? If Lehman and Stanley are right in “Novelty Search and the Problem with Objectives” (https://www.cs.swarthmore.edu/~meeden/DevelopmentalRobotics/lehmanNoveltySearch11.pdf), later popularized in their book “Why Greatness Cannot Be Planned”, then VNM-coherent agents that pursue goal stability will reliably be outcompeted by incoherent search processes pursuing novelty.
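For readers who haven’t seen novelty search: a toy sketch (mine, not from the paper) of the one thing that distinguishes it from objective-driven search, namely that candidates are scored by distance to an archive of past behaviors rather than by the objective. This only shows the mechanism; it does not model the deceptive domains where Lehman & Stanley claim novelty search wins.

```python
import random

def novelty(behavior: float, archive: list[float], k: int = 5) -> float:
    """Novelty = mean distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(abs(behavior - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def search(steps: int = 500, use_novelty: bool = True) -> float:
    objective = lambda b: -abs(b - 42.0)        # toy objective: get close to 42
    pop = [random.uniform(-100, 100) for _ in range(20)]
    archive: list[float] = []
    for _ in range(steps):
        score = (lambda b: novelty(b, archive)) if use_novelty else objective
        parent = max(pop, key=score)             # select by novelty OR by the objective
        child = parent + random.gauss(0, 1.0)    # mutate
        pop = sorted(pop + [child], key=score)[-20:]
        archive.append(child)                    # novelty is always relative to history
    return max(objective(b) for b in archive + pop)  # evaluate both runs on the objective

print("objective-driven:", search(use_novelty=False))
print("novelty-driven:  ", search(use_novelty=True))
```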
Great, thought-provoking post. The AI research community certainly felt much more cooperative before it got an injection of startup/monopoly/winner-take-all thinking. Google Brain publishing the Transformer paper being a great example.
I wonder how much this truly is narrative, as opposed to AI being genuinely more winner-take-all than fusion in the economic sense. Certainly the hardware layer has proven quite winner-take-all so far with NVDA taking a huge fraction of the profit; same with adtech, the most profitable application of (last-generation) AI, where network effects and first mover advantages have led to the dominance of a couple of companies.
Global foundation model development efforts being pooled into an international consortium like ITER or CERN seems quite good to me in terms of defusing race dynamics. Perhaps we will get there in a few years if private capital loses interest in funding 100B+ training runs.
I think writing one of the best-selling books of your century is extraordinary evidence you’ve understood something deep about human nature, which is more than most random rationalist bloggers can claim. But yes, it doesn’t imply you have a coherent philosophy or a benevolent political program.
cuts off some nuance; I would call this the projection of the collective intelligence agenda onto the AI safety frame of “eliminate the risk of very bad things happening”, which I think is an incomplete way of looking at how to impact the future
in particular, I tend to spend more time thinking about future worlds that are more like the current one in that they are messy and confusing and have very terrible and very good things happening simultaneously, and a lot of the impact of collective intelligence tech (for good or ill) will be in determining the parameters of that world
do you have a source?