this post aims to keep track of posts relating to the question-answer counterfactual interval proposal for AI alignment, abbreviated “QACI” and pronounced “quashy”. i’ll keep it updated to reflect the state of the research.
this research is primarily published on the Orthogonal website and discussed on the Orthogonal Discord.
for a top-level view of QACI, you might want to start with:
the set of all posts relevant to QACI includes:
as overviews of QACI and how it’s going:
on the formal alignment perspective within which it fits:
on the blob location problem:
on QACI as an implementation of long reflection / CEV:
on formalizing the QACI formal goal:
on how a formally aligned AI would actually run over time:
on the metaethics grounding QACI:
on my view of the AI alignment research field within which i’m doing formal alignment: