Is there a solution to avoid constraining the norms of the columns of the decoder to be 1? Anthropic report better results when letting them be unconstrained. I’ve tried not constraining them and allowing them to vary, which actually gives a slight speedup in performance. This also allows me to avoid an awkward backward hook. Perhaps most of the shrinking effect gets absorbed by the decoder norms?
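For reference, a minimal sketch of what the unit-norm constraint amounts to (names like `W_dec` and the renormalisation step are my own illustration, not from any particular codebase): the constrained variant rescales every decoder column back to norm 1 after each gradient step, and dropping this step is what “unconstrained” means here.

```python
import numpy as np

def renormalise_decoder_columns(W_dec: np.ndarray) -> np.ndarray:
    """Rescale each decoder column to unit L2 norm (the constrained variant).

    In the unconstrained variant this post-step is simply skipped,
    removing the need for a backward hook that projects gradients.
    """
    norms = np.linalg.norm(W_dec, axis=0, keepdims=True)
    return W_dec / np.maximum(norms, 1e-8)  # guard against zero columns
```

Skipping this projection lets the optimiser trade feature-activation magnitudes against decoder column norms, which is one candidate place for the shrinkage to go.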
J Bostock
I agree with this point when it comes to technical discussions. I would like to add the caveat that when talking to a total amateur, the sentence:
AI is like biorisk more than it is like ordinary tech, therefore we need stricter safety regulations and limits on what people can create at all.
Is the fastest way I’ve found to transmit information. Maybe 30% of the entire AI risk case can be delivered in the first four words.
I’d be most interested in detecting hydroperoxides, which is easier than detecting trans fats. I don’t know how soluble a lipid hydroperoxide is in hexane, but isopropanol-hexane mixtures are often used for lipid extracts and would probably work better.
Evaporation could probably be done relatively safely by just leaving the extract at room temperature (I would definitely not advise heating the mixture at all) but you’d need good ventilation, preferably an outdoor space.
I think commercial LCMS/GCMS services are generally available to people in the USA/UK, and these would probably be the gold standard for detecting various hydroperoxides. I wouldn’t trust IR spectroscopy to distinguish the hydroperoxides from other OH-group-containing contaminants when you’re working with a system as complicated as a box of french fries.
As far as I’m aware nobody claims trans fats aren’t bad.
See the comment by Gilch: allegedly, vaccenic acid isn’t harmful. The particular trans fats produced by isomerization of oleic and linoleic acid, however, probably are harmful. Elaidic acid, for example, is a major trans-fat component in margarines, which were banned.
Yeah, I was unaware of vaccenic acid. I’ve edited the post to clarify.
I’ve also realized that it might explain the anomalous (i.e. remaining after adjusting for confounders) effects of living at higher altitude. The lower the atmospheric pressure, the less oxygen is available to oxidize the PUFAs. Of course some foods will be imported already full of oxidized FAs, and for those it will be too late, but presumably a McDonald’s deep fryer in Colorado Springs is producing fewer oxidized PUFAs per hour than a correspondingly-hot one in San Francisco.
This feels too crazy to put in the original post but it’s certainly interesting.
That post is part of what spurred this one.
I uhh, didn’t see that. Odd coincidence! I’ve added a link and will consider what added value I can bring from my perspective.
Thanks for the feedback. There’s a condition which I assumed when writing this which I have realized is much stronger than I originally thought, and I think I should’ve devoted more time to thinking about its implications.
When I mentioned “no information being lost”, what I meant is that in an interaction between variables X and Y, each value x (where x lies in the domain of X) corresponds to only one value of Y. In terms of FFS, this means that each variable must be the maximally fine partition of the base set which is possible with that variable’s set of factors.
Under these conditions, I am pretty sure that
I was thinking about causality in terms of forced directional arrows in Bayes nets, rather than in terms of d-separation. I don’t think your example as written is helpful, because Bayes nets rely on the independence of variables to do causal inference: the net X → Y is equivalent to Y → X.
It’s more important to think about cases like the collider X → Z ← Y, where causality can be inferred. If we add noise to Z, then we still get a distribution satisfying X → Z ← Y (as X and Y are still independent).
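A quick toy simulation of this point (my own setup, using generic names for a collider X → Z ← Y with noise added to Z): X and Y stay uncorrelated while each remains strongly correlated with Z, so the collider structure is still inferable from correlations.

```python
import random

random.seed(0)
n = 100_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [random.gauss(0, 1) for _ in range(n)]
# Collider with added noise: Z depends on both X and Y, plus noise.
zs = [x + y + random.gauss(0, 0.5) for x, y in zip(xs, ys)]

def corr(a, b):
    """Pearson correlation of two equal-length samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5
```

Here `corr(xs, ys)` comes out near zero while `corr(xs, zs)` is large, which is the signature of the collider surviving the added noise.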
Even if we did have other nodes forcing the direction of these arrows (such as a node which is a parent of X, and another node which is a parent of Z), then I still don’t think adding noise lets us swap the orders round.
On the other hand, there are certainly issues in Bayes nets of more elements, particularly the “diamond-shaped” net with arrows W → X, W → Y, X → Z, Y → Z. Here adding noise does prevent effective temporal inference, since, if X and Y are no longer d-separated by W, we cannot prove from correlations alone that no information goes between them through Z.
I had forgotten about OEIS! Anyway, I think the actual number might be 1577 rather than 1617 (this also gives no answers). I was only assuming agnosticism over factors in the overlap region if all pairs had factors, but I think that misses some examples. My current guess is that any overlap region like A∩B∩C should be agnostic iff all of the overlap regions “surrounding” it in the Venn diagram (A∩B, A∩C, B∩C, A∩B∩C∩D) in this situation either have a factor present or are agnostic. This gives the series 1, 2, 15, 1577, 3397521 (my computer has not spat out the next element). This also gives nothing on the OEIS.
My reasoning for this condition is that we should be able to “remove” an observable from the system without trouble. If we are agnostic about a factor in the intersection A∩B∩C, then we can only remove the observable C if this doesn’t cause trouble for the new intersection A∩B, which is only true if we already have a factor in A∩B (or are agnostic about it).
I know very, very little about category theory, but some of this work regarding natural latents seems to absolutely smack of it. There seems to be a fairly important three-way relationship between causal models, finite factored sets, and Bayes nets.
To be precise, any causal model consisting of root sets B_i, downstream sets D_j, and functions mapping sets to downstream sets like D_1 = f_1(B_1, B_2, B_3) must, when equipped with a set of independent probability distributions over the B_i, create a joint probability distribution compatible with the Bayes net that’s isomorphic to the causal model in the obvious way. (So in the previous example, there would be arrows from only B_1, B_2, and B_3 to D_1.) The proof of this seems almost trivial, but I don’t trust myself not to balls it up somehow when working with probability theory notation.
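A toy instance of that claim (the binary roots and the mechanism `f1` here are my own invented example): sampling the roots independently and applying the downstream function yields a joint distribution in which the downstream variable is a deterministic function of exactly its parents, which is the compatibility property the corresponding Bayes net requires.

```python
import random

# Toy causal model: three independent binary roots B1, B2, B3 and one
# downstream variable D1 = f1(B1, B2, B3).
def f1(b1, b2, b3):
    return (b1 + b2) * b3  # arbitrary deterministic mechanism

def sample():
    """Draw the roots independently, then compute the downstream value."""
    b1 = random.randint(0, 1)
    b2 = random.randint(0, 1)
    b3 = random.randint(0, 1)
    return (b1, b2, b3, f1(b1, b2, b3))
```

Because the roots are sampled independently and D1 is computed from them, the joint factorises as P(B1)P(B2)P(B3)P(D1 | B1, B2, B3), matching the net with arrows from only B1, B2, B3 to D1.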
In the resulting Bayes net, one “minimal” natural latent Λ which conditionally separates two downstream variables D_1 and D_2 is just the probabilities over the root elements on which both D_1 and D_2 depend. It might be possible to show that this “minimal” construction of Λ satisfies a universal property, and so any other Λ′ which is also “minimal” in this way must be isomorphic to Λ.
I think the position of the ball is in V, since the players are responding to the position of the ball by forcing it towards the goal. It’s difficult to predict the long-term position of the ball based on where it is now. The position of the opponent’s goal would be an example of something in U for both teams. In this case both teams’ utility functions contain a robust pointer to the goal’s position.
I’d go for:
Reinforcement learning agents do two sorts of planning. One is the application of the dynamics (world-modelling) network, using a Monte Carlo tree search (or something like it) over explicitly-represented world states. The other is implicit in the future-reward-estimate function. You want as much planning as possible to be of the first type:
It’s much more supervisable. An explicitly-represented world state is more interrogable than the inner workings of a future-reward-estimate.
It’s less susceptible to value-leaking. By this I mean issues in alignment which arise from instrumentally-valuable (i.e. not directly part of the reward function) goals leaking into the future-reward-estimate.
You can also turn down the depth on the tree search. If the agent literally can’t plan beyond a dozen steps ahead it can’t be deceptively aligned.
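The depth cap in the last point is easy to state concretely. A toy single-agent sketch (all names are mine, not any particular RL codebase): explicit search bottoms out into the learned future-reward-estimate once `max_depth` steps have been expanded, so nothing beyond that horizon is explicitly planned over.

```python
def plan(state, successors, value_estimate, max_depth):
    """Best value reachable via explicit search of at most max_depth steps.

    Beyond the depth cap (or at terminal states) we fall back on the
    learned future-reward-estimate, here a plain function.
    """
    children = successors(state)
    if max_depth == 0 or not children:
        return value_estimate(state)
    return max(
        plan(child, successors, value_estimate, max_depth - 1)
        for child in children
    )
```

Turning `max_depth` down shrinks the explicit-planning horizon; any longer-range “planning” then has to live entirely inside `value_estimate`, which is exactly the part that is harder to supervise.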
I would question the framing of mental subagents as “mesa optimizers” here. This sneaks in an important assumption: namely that they are optimizing anything. I think the general view of “humans are made of a bunch of different subsystems which use common symbols to talk to one another” has some merit, but I think this post ascribes a lot more agency to these subsystems than I would. I view most of the subagents of human minds as mechanistically relatively simple.
For example, I might reframe a lot of the elements of talking about the unattainable “object of desire” in the following way:
1. Human minds have a reward system which rewards thinking about “good” things we don’t have (or else we couldn’t ever do things)
2. Human thoughts ping from one concept to adjacent concepts
3. Thoughts of good things associate to assessment of our current state
4. Thoughts of our current state being lacking cause a negative emotional response
5. The reward signal fails to backpropagate enough to the reward system in 1, so the thoughts of “good” things we don’t have are reinforced
6. The cycle continues

I don’t think this is literally the reason, but framings on this level seem more mechanistic to me.
I also think that any framings along the lines of “you are lying to yourself all the way down and cannot help it” and “literally everyone is messed up in some fundamental way and there are no humans who can function in a satisfying way” are just kind of bad. Seems like a Kafka trap to me.
I’ve spoken elsewhere about the human perception of ourselves as a coherent entity being a misfiring of systems which model others as coherent entities (for evolutionary reasons). I don’t particularly think some sort of societal pressure is the primary reason for our thinking of ourselves as coherent, although societal pressure is certainly to blame for the instinct to repress certain desires.
I’m interested in the “Xi will be assassinated/otherwise killed if he doesn’t secure this bid for presidency” perspective. Even if he was put in a position where he’d lose the bid for a third term, is it likely that he’d be killed for stepping down? The four previous paramount leaders weren’t. Is the argument that he’s amassed too much power/done too much evil/burned too many bridges in getting his level of power?
Although I think most people who amass Xi’s level of power are best modelled as desiring power (or at least as executing patterns which have in the past maximized power) for its own sake, so I guess the question of threat to his life is somewhat moot with regards to policy.
Seems like there’s a potential solution to ELK-like problems: force the information to move from the AI’s ontology to (its model of) a human’s ontology, and then force it to move back again.
This gets around “basic” deception since we can always compare the AI’s ontology before and after the translation.
The question is how we force the knowledge to go through the (modeled) human’s ontology, and how we know the forward and backward translators aren’t behaving badly in some way.
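A minimal sketch of the before/after comparison (`to_human` and `from_human` are hypothetical translator functions, not a real proposal’s API): the “basic” deception check is just whether the AI-side representation survives the translation there and back.

```python
def round_trip_consistent(ai_state, to_human, from_human):
    """Compare the AI's representation before and after the round trip."""
    return from_human(to_human(ai_state)) == ai_state
```

A lossless translator pair passes this check; a forward translator that drops information (or a pair that colludes to smuggle it around the human ontology) fails it, which is the easy case this catches.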
Unmentioned but large comparative advantage of this: it’s not based in the Bay Area.
The typical alignment pitch of “Come and work on this super-difficult problem you may or may not be well suited for at all” is a hard enough sell for already-successful people (which intelligent people often are) without adding: “Also, you have to move to this one specific area of California, which has a bit of a housing and crime problem and a very particular culture.”
I was referring to “values” more like the second case. Consider the choice blindness experiments (which are well-replicated). People think they value certain things in a partner, or politics, but really it’s just a bias to model themselves as being more agentic than they actually are.
I’ve found that too. Taking log(L0) and log(MSE) both seem reasonable to me, but it feels weird to me to take log(DownstreamLoss) for cross-entropy losses, since that’s already log-ish. In my case the plots were generally worse to look at than the ones I showed above when scanning over a very broad range of L1 coefficients (and therefore L0 values).