I agree, but I would not frame this as review in terms of thumbs up/thumbs down; we can do better. In economics, for example, most people post their research online in a fairly polished format long before it makes it through the journal peer-review process. And people can host their work in a variety of interesting and useful formats that go beyond what you can put in a frozen PDF.
Then we can have continuous public evaluation of this work, both crowdsourced and managed. At unjournal.org we do the latter: we pay experts to write detailed reports explaining the strengths, weaknesses, credibility, and usefulness of the research, and to give benchmarked quantitative ratings, both overall and across a range of categories, as well as claim assessments. You can see our output at unjournal.pubpub.org and on our ratings dashboard: https://unjournal.shinyapps.io/uj-dashboard/
Authors can continue to improve and extend the research in the same place, and then seek an updated evaluation and rating.
David Reinstein
Can we do useful meta-analysis? Unjournal evaluations of “Meaningfully reducing consumption of meat… is an unsolved problem...”
Naturally, this paper is several years old, but it still seems like the most prominent work on this, with 61 citations, etc.
My own take: we need more work in this area, perhaps follow-up work doing a similar survey while taking sample selection and question design more seriously. I hope we can identify & evaluate such work in a timely fashion.
E.g., there is some overlap with
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5021463, which “focuses on measures to mitigate systemic risks associated with general-purpose AI models, rather than addressing the AGI scenario considered in this paper”.
I’m eager to hear other suggestions for relevant work to consider and evaluate.
My own take is that this suggests we need more work in this area: follow-up work doing a similar survey, taking sample selection and question design more seriously.
Representative quotes from the evaluations
Sampling bias/selection issues
Potential sampling bias – particularly over-representation of safety-minded respondents
There’s a risk that the sample reflects the views of those already more inclined to endorse stringent safety norms. This is particularly important in light of the sample composition, as over 40% of respondents are affiliated with AGI labs
Question selection/design, need for sharper questions
As the authors note, this [agreement] may be partly due to the high-level and generally uncontroversial framing of the statements (e.g., “AGI labs should conduct pre-deployment risk assessments”). But in their current form, the items mostly capture agreement in principle, rather than forcing respondents to grapple with the kinds of tradeoffs that real-world governance inevitably entails.
For example, would respondents still support red teaming or third-party audits if they significantly delayed product releases? [Emphasis added]
the paper states that the selected practices are extracted from (1) current practices at individual AGI labs and (2) planned practices at individual labs, among other sources.
… These results might suggest a selection bias where statements selected from labs practices are agreed on by labs themselves,
[suggestion to] introduce an inclusion/exclusion criterion to provide a better justification as to why some statements are selected.
Overstated claims for consensus?
[Paper] “findings suggest that AGI labs need to improve their risk management practices. In particular, there seems to be room for improvement when it comes to their risk governance.”
While one can agree with such claim, it is difficult to see how this conclusion can be reached from the paper’s results.
By the way, just flagging that The Unjournal did an evaluation of this (post/discussion here; I’ll extend this with some more opinionated comments now). Overall it was taken to be a strong step, but with important limitations and a need for further caveats and further work.
By “this is now the canonical collection” do you mean the ideas surveyed in the paper? Do you think it’s still canonical or is it now ~out-of-date?
Unjournal evaluation of “Towards best practices in AGI safety & governance” (2023), quick take
GPT-5 is out
Unjournal’s first Pivotal Question, focusing on the viability of cultured meat — This post also gives concrete details of our process and proposed approach.
The Unjournal’s “Pivotal Questions” project
I was indeed looking for something that could be used in a live conversation.
Is there a version of this bot (or something similar) that one can use in an LLM model or website? I want to use this on a podcast without having to link this to a Slack
I only realised the latter when I saw the Dutch word for this “middellandse zee”. The sea in the middle of the lands.
“Terranean” had never scanned separately to me
Related: when you never realized a compound word had a literal meaning…
Cup board—board to put cups on—cupboard
Medi terrain—between two continents—Mediterranean
Etc.
I think the gut thing is usually metaphorical though
(How) does this proposal enable single-blind peer review?
For ratings or metrics for the credibility of the research, I could imagine likes/reposts, etc., but could this enable:
Rating along multiple dimensions
Rating intensity (e.g., strong positive, weak positive, etc.)
Experts/highly rated people to have more weight in the rating (if people want this)
‘Chat with impactful research & evaluations’ (Unjournal NotebookLMs)
On economics, michaba03m recommends Mankiw’s Macroeconomics over Varian’s Intermediate Microeconomics and Katz & Rosen’s Macroeconomics.
On economics, realitygrill recommends McAfee’s Introduction to Economic Analysis over Mankiw’s Principles of Microeconomics and Case & Fair’s Principles of Macroeconomics.
Microeconomics and macroeconomics are different subjects and have different content. Why are they grouped together?
I think saying “I am not going to answer that because…” would not necessarily feel like taking a hit to the debater/interviewee. It could also bring scrutiny and pressure on moderators/interviewers to ask fair and relevant questions.
I think people would appreciate the directness. And maybe come to understand the nature of inquiry and truth a tiny bit better.
I asked some LLMs/agents to consider this post, in preparation for considering it for some form of Unjournal.org evaluation. FWIW:
1. Conversation started with GPTPro
2. RoastMyPoast.org “epistemic audit” (result: C+, 68/100, which is a bit below average iirc)
3. Claude Opus 4.5
My take: they saw this post as plausible, free of major errors, and generating some useful insights, but with some important limitations; the main claims are not all ‘obviously demonstrated’.
Below are some overall syntheses/pulled quotes that seemed relevant to me. All folded content is LLM output.
GPTPro
What holds up (probability ~0.6–0.8):
Public evidence supports that inference budgets buy big gains on current reasoning benchmarks, and RL post-training scaling appears meaningfully less compute-efficient (often by ~2 extra decades [orders of magnitude] to cover similar 20→80 improvements).
RL post-training is now plausibly reaching “pretraining-scale” at least at xAI (and maybe elsewhere soon), so “RL is no longer a trivially cheap add-on” is real.
What’s uncertain / overconfident (probability ~0.2–0.5):
The specific conversion “100× training ≈ 1,000× inference” as a general rule, and thus the specific “1,000,000× RL for a GPT-level jump.” This rests on a non-robust mapping and then exponentiates it.
The implication that we’re “near the effective limit” of RL training gains, given recent public RL-scaling work emphasizing recipe dependence and improved efficiency/asymptotes.
…[Verdict] Ord is on solid ground that current reasoning improvements rely heavily on inference budgets … he is on weak ground when he turns that into a near-term “end of scaling” claim via a brittle 1,000,000× extrapolation.
I asked what aspects were missing in the comments on LW and EA Forum; it noted a lack of discussion of …
how sensitive the RL-vs-inference scaling gap is to model size, data quality, reuse, training recipe, domain/task type;
how recent empirical RLHF / RL‑post‑training research (on open, small-scale, or controlled setups) might affect that gap;
the analogy of “inefficiency gap = fundamental ceiling” vs. “inefficiency may be engineering‑level problem, solvable with better algorithms/research”;
the degree of uncertainty involved in extrapolating over many orders of magnitude;
the possibility that RL‑post-training inefficiency might be significantly reduced in the future (with better methodology).
So in short: the public conversation has touched some of the major “skeptical” themes, but not with the depth, technical framing, or caution that a more expert‑oriented review might use.
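To make the quoted concern about extrapolating over many orders of magnitude concrete, here is a back-of-the-envelope sketch of my own (not from Ord’s post or the model output above); the slope and error numbers are purely illustrative assumptions:

```python
# Back-of-the-envelope: how a small error in a fitted log-log "exchange rate"
# between two compute channels compounds when extrapolated over many decades
# (orders of magnitude). All numbers are illustrative assumptions, not
# estimates from Ord's post or the evaluations above.

def extrapolated_error_factor(slope_error: float, decades: float) -> float:
    """Multiplicative error in a predicted compute requirement if the fitted
    slope (decades of channel B needed per decade of channel A) is off by
    `slope_error`, after extrapolating across `decades` decades of channel A."""
    return 10 ** (slope_error * decades)

# The quoted mapping "100x training ~ 1,000x inference" corresponds to a slope
# of log10(1000) / log10(100) = 1.5. Suppose that slope is only known to +/- 0.25.
for decades in (2, 4, 6):
    err = extrapolated_error_factor(0.25, decades)
    print(f"{decades} decades of extrapolation -> ~{err:.0f}x uncertainty either way")

# Output: ~3x at 2 decades, ~10x at 4 decades, ~32x at 6 decades, so a headline
# figure like "1,000,000x RL compute" is quite sensitive to the assumed mapping.
```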
Claude Opus 4.5
RoastMyPoast Epistemic Audit (C+, 68/100)
Uses agents and claude-sonnet-4-5-20250929
Noted “Overconfidence” about
And “Single points of failure”:
Re: Unjournal.org potentially commissioning this for an evaluation of some form, we might consider:
Is this post highly influential on its own (are funders and labs using this to guide important policy choices)?
Is there further expertise we could unlock that is not reflected in these comments? (The LLMs suggested some evaluators, but we sometimes find it hard to get people to accept the assignment and follow through)
Is there a more formal research output that covers this same ground, coming from ML researchers, scaling experts, etc.?