I am the co-founder of and a researcher at the quantitative long-term strategy organization Convergence (see here for our growing list of publications). Over the last fourteen years I have worked with MIRI, CFAR, EA Global, and Founders Fund, and done work in EA strategy, fundraising, networking, teaching, cognitive enhancement, and AI safety research. I have an MS degree in computer science and BS degrees in computer science, mathematics, and physics.

# JustinShovelain

# Information-Theoretic Boxing of Superintelligences

# The risk-reward tradeoff of interpretability research

Complementary ideas to this article:

The origin of the fuel tank metaphor Raemon refers to in these comments: https://www.lesswrong.com/posts/BfKQGYJBwdHfik4Kd/fai-research-constraints-and-agi-side-effects

Extending things further to handle higher order derivatives and putting things within a cohesive space: https://forum.effectivealtruism.org/posts/TCxik4KvTgGzMowP9/state-space-of-x-risk-trajectories

A typology for mapping downside risks: https://www.lesswrong.com/posts/RY9XYoqPeMc8W8zbH/mapping-downside-risks-and-information-hazards

A set of potential responses for what to do with potentially dangerous developments and a heuristic for triggering that evaluation: https://www.lesswrong.com/posts/6ur8vDX6ApAXrRN3t/information-hazards-why-you-should-care-and-what-you-can-do

A general heuristic for what technology to develop and how to distribute it: https://forum.effectivealtruism.org/posts/4oGYbvcy2SRHTWgWk/improving-the-future-by-influencing-actors-benevolence

A coherence-focused framework that is more fundamental than the link just above and from which it can be derived: https://www.lesswrong.com/posts/AtwPwD6PBsqfpCsHE/aligning-ai-by-optimizing-for-wisdom

# Aligning AI by optimizing for “wisdom”

# Improving the safety of AI evals

# Keep humans in the loop

Relatedly, here is a post that goes beyond the framework of a ratio of progress, to the effect of actions on the ratio of research that still needs to be done for various outcomes: https://www.lesswrong.com/posts/BfKQGYJBwdHfik4Kd/fai-research-constraints-and-agi-side-effects

Extending further, one can examine higher-order derivatives and curvature in a space of existential risk trajectories: https://forum.effectivealtruism.org/posts/TCxik4KvTgGzMowP9/state-space-of-x-risk-trajectories

Roughly speaking, in terms of the actions you take, various timelines should be weighted as P(AGI in year t)*DifferenceYouCanProduceInAGIAlignmentAt(t). This produces a new, non-normalized distribution of how much to prioritize each time (you can renormalize it if you wish to make it more like a “probability”).

Note that this is just a first approximation and there are additional subtleties.

This assumes you are optimizing for each time and possible world orthogonally, but much of the time optimizing for nearby times is very similar to optimizing for a particular time.

The definition of “you” here depends on the nature of the decision maker, which can be a group, a person, or even a person at a particular moment.

Using different definitions of “you” between decision makers can cause a coordination issue where different people are trying to save different potential worlds (because of their different skills and ability to produce change) and their plans may tangle with each other.

It is difficult to figure out how much of a difference you can produce in different possible worlds and times. You do the best you can, but you might suffer a failure of imagination in finding ways your plans won’t work, ways your plans will have larger positive effects, or ways you may improve your plans in the future. For more on the difference one can produce see this and this.

Lastly, there is a risk here psychologically and socially of fudging the calculations above to make things more comfortable.
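The weighting described above can be sketched in a few lines. All the numbers below are hypothetical placeholders, not real estimates of AGI timelines or of anyone’s impact:

```python
# Sketch of the timeline weighting: P(AGI in year t) * DifferenceYouCanProduceInAGIAlignmentAt(t).
# Every number here is an invented placeholder.
p_agi = {2030: 0.10, 2035: 0.25, 2040: 0.30, 2050: 0.20}  # P(AGI in year t)
impact = {2030: 0.2, 2035: 0.5, 2040: 0.8, 2050: 1.0}     # difference you can produce at t

# Non-normalized prioritization over times
weights = {t: p_agi[t] * impact[t] for t in p_agi}

# Optional renormalization, to make it read more like a "probability"
total = sum(weights.values())
normalized = {t: w / total for t, w in weights.items()}
```

Note that the weights need not sum to anything meaningful before renormalizing; only their relative sizes matter for prioritization.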

(Meta: I may make a full post on this someday and use this reasoning often)

# Updating Utility Functions

I think causal diagrams naturally emerge when thinking about Goodhart’s law and its implications.

I came up with the concept of Goodhart’s law causal graphs above because of a presentation someone gave at the EA Hotel in late 2019 on Scott’s Goodhart Taxonomy. I thought causal diagrams were a clearer way to describe some parts of the taxonomy, though their relationship to the taxonomy is complex. I also just encountered the paper you and Scott wrote a couple of weeks ago while getting ready to write this Good Heart Week prompted post. I was planning to reference it in the next post, where we address “causal stomping” and “function generalization error” and can more comprehensively describe the relationship with the paper.

In terms of the relationship to the paper, I think that the Goodhart’s law causal graphs I describe above are more fundamental and atomically describe the relationship types between the target and proxies in a unified way. I read the causal diagrams in your paper as describing various ways causal-graph relationships may be broken by taking action, rather than simply describing relationships between proxies and targets and the ways they may be confused with each other (which is the function of the Goodhart’s law causal graphs above).
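As a toy numerical illustration of the kind of relationship such a causal graph encodes (the variables and noise levels here are invented for the example): a common cause C drives both the target T and a noisier proxy P, so selecting hard on the proxy under-delivers on the target.

```python
import random

random.seed(0)

def sample():
    c = random.gauss(0, 1)        # common cause C
    t = c + random.gauss(0, 0.1)  # target T closely tracks C
    p = c + random.gauss(0, 1.0)  # proxy P tracks C, but noisily
    return t, p

samples = [sample() for _ in range(10_000)]

# Select the top 1% by the proxy; the target regresses toward the
# common cause, so it improves by less than the proxy values suggest.
top = sorted(samples, key=lambda tp: tp[1], reverse=True)[:100]
mean_p = sum(p for _, p in top) / len(top)
mean_t = sum(t for t, _ in top) / len(top)
# mean_t comes out well below mean_p (regressional Goodhart)
```

The same graph-first framing extends to the other taxonomy entries by changing which arrows exist between C, T, and P.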

Mostly, the purpose of this post and the next is to present an alternative, and I think cleaner, ontological structure for thinking about Goodhart’s law, though there will still be some messiness in carving up reality.

As to your suggested mitigations, both randomization and a secret metric are good to add, though I’m not as sure about post hoc. Thanks for the suggestions and the surrounding paper.

# Goodhart’s Law Causal Diagrams

# How Money Fails to Track Value

# Evaluating expertise: a clear box model

# Good and bad ways to think about downside risks

I like the distinction that you’re making and that you gave it a clear name.

Relatedly, there is the method of Lagrange multipliers for solving optimization problems within the subspace.

On a side note: there is a way to partially unify subspace optima and local optima by saying that the subspace optimum is a local optimum with respect to the set of parameters you’re using to define the subspace. You’re at a local optimum with respect to defining the underlying space to optimize over (the subspace), and at a local optimum within that space. (Relatedly, moduli spaces.)
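A minimal sketch of the Lagrange-multiplier condition on a toy problem of my own choosing: maximize f(x, y) = x + y on the subspace defined by the constraint g(x, y) = x² + y² − 1 = 0.

```python
import math

# Lagrange condition: grad f = lam * grad g
#   =>  1 = 2*lam*x  and  1 = 2*lam*y,  so x = y,
# and the constraint x^2 + y^2 = 1 then gives x = y = 1/sqrt(2).
x = y = 1 / math.sqrt(2)
lam = 1 / (2 * x)  # the multiplier, recovered from 1 = 2*lam*x

assert abs(x**2 + y**2 - 1) < 1e-12  # still on the subspace
f_opt = x + y  # the subspace optimum, sqrt(2)
```

The multiplier itself measures how much the optimum would improve if the constraint defining the subspace were relaxed slightly, which connects to the point above about being at a local optimum with respect to the choice of subspace.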

I’ve decided to try modelling testing and contact tracing over the weekend. If you wish to join, ping me; my contact details are in the doc.

# COVID-19: An opportunity to help by modelling testing and tracing to inform the UK government

I think virus inactivation is a normal vaccination approach and is probably being pursued here? The hardest part is probably growing the virus in vitro at scale, and perhaps ensuring that all of the virus particles are inactive.

Gotcha. What determines the “ratios” is some underlying causal structure, some aspects of which can be summarized by a tech tree. For thinking about the causal structure you may also like this post: https://forum.effectivealtruism.org/posts/TfRexamDYBqSwg7er/causal-diagrams-of-the-paths-to-existential-catastrophe