Feedback welcomed: www.admonymous.co/zeshen
I sometimes write my thoughts here: airisks.substack.com
Just wanted to register that I’m on the side of Freddie’s bet for all items below except the one on the BLS (because some categories are small enough that losing 50% of jobs in at least one category probably isn’t that unlikely).
Here’s what he says:
For me to win the wager, all of the following must be true on Feb 14, 2029:
Labor Market:
The U.S. unemployment rate is equal to or lower than 18%
Labor force participation rate, ages 25-54, is equal to or greater than 68%
No single BLS occupational category will have lost 50% or more of jobs between now and February 14th 2029
Economic Growth & Productivity:
U.S. GDP is within −30% to +35% of February 2026 levels (inflation-adjusted)
Nonfarm labor productivity growth has not exceeded 8% in any individual year or 20% for the three-year period
Prices & Markets:
The S&P 500 is within −60% to +225% of the February 2026 level
CPI inflation averaged over 3 years is between −2% and +18% annually
Corporate & Structural:
The Fortune 500 median profit margin is between 2% and 35%
The largest 5 companies don’t account for more than 65% of the total S&P 500 market cap
White Collar & Knowledge Workers:
“Professional and Business Services” employment, as defined by the Bureau of Labor Statistics, has not declined by more than 35% from February 2026
Combined employment in software developers, accountants, lawyers, consultants, and writers, as defined by the Bureau of Labor Statistics, has not declined by more than 45%
Median wage for “computer and mathematical occupations,” as defined by the Bureau of Labor Statistics, is not more than 60% lower in real terms than in February 2026
The college wage premium (median earnings of bachelor’s degree holders vs high school only) has not fallen below 30%
Inequality:
The Gini coefficient is less than 0.60
The top 1%’s income share is less than 35%
The top 0.1% wealth share is less than 30%
Median household income has not fallen by more than 40% relative to mean household income
Thanks for writing this up! I also want to register that I agree with all of this, maybe except for the part where AIs can’t tell novel funny jokes—I expect this to be relatively easy. But of course it depends on the definition of ‘novel’.
I struggled to do this exercise myself because, when I looked at AI as a normal technology, I felt like I basically agreed with most of their thinking, but it was also hard to find concrete differences between their predictions and AI 2027, at least in the near term. For example, something like “LLMs are broadly acknowledged to be plateauing” will probably be concurrently both true and false in a way that’s hard to resolve—a lot of people may complain that LLMs are plateauing while the benchmark scores and the usage stats show otherwise.
Yeah, at least “literally everyone dies” has a concrete ending even though it doesn’t have concrete intermediate steps. Gradual disempowerment seems less concrete on both the ending and the intermediate steps, so it becomes even less action-relevant.
…But I’m not sure that actual existing efforts towards delaying AGI are helping.
But perhaps actual existing efforts to hype up LLMs are helping? I am sympathetic to François Chollet’s position:
OpenAI basically set back progress towards AGI by quite a few years probably like five to 10 years for two reasons. They caused this complete closing down of Frontier research publishing but also they triggered this initial burst of hype around LLMs and now LLMs have sucked the oxygen out of the room.
This happened all the time in my line of work. Forecasts become targets, and you become responsible for meeting them. So whenever I was asked to provide a forecast, I would either i) ask as many questions as I needed to learn the exact purpose of the request, and produce a forecast that meets exactly that intent, or ii) pick a forecast and provide it, but first list all the assumptions and caveats behind the forecast that I could possibly think of. With time, I also got a sense of who I needed to be extra careful with when providing any forecasts, because of all the ways they might backfire.
Agreed. I’m also pleasantly surprised that your take isn’t heavily downvoted.
There’ll be discussions about how these systems will eventually become dangerous, and safety-concerned groups might even set up testing protocols (“safety evals”).
My impression is that safety evals were deemed irrelevant because a powerful enough AGI, being deceptively aligned, would pass all of them anyway. We didn’t expect the first general-ish AIs to be so dumb, like how GPT-4 was so blatant and explicit about lying to the TaskRabbit worker.
Scott Alexander talked about explicit honesty (unfortunately paywalled) in contrast with radical honesty. In short, explicit honesty is being completely honest when asked, and radical honesty is being completely honest even without being asked. From what I understand from your post, it feels like deep honesty is about being completely honest about information you perceive to be relevant to the receiver, regardless of whether the information is explicitly being requested.
Scott also links to some cases where radical honesty did not work out well, like this, this, and this. I suspect deep honesty may lead to similar risks, as you have already pointed out.
And with regards to:
“what is kind, true, and useful?”
I think they would form a three-circle Venn diagram. Things within the intersection of all three circles would be a no-brainer. But the tricky bits are the things that are either true but not kind/useful, or kind/useful but not true. And I understood this post as a suggestion to venture more into the former.
Can’t people decide simply not to build AGI/ASI?
Yeah, many people, like the majority of users on this forum, have decided to not build AGI. On the other hand, other people have decided to build AGI and are working hard towards it.
Side note: LessWrong has a feature to post posts as Questions, you might want to use it for questions in the future.
Definitely. Also, my incorrect and exaggerated model of the community is likely based on the minority who tend to express those comments publicly, directed at people who might even genuinely deserve those comments.
I agree with RL agents being misaligned by default, even more so for the non-imitation-learned ones. I mean, even LLMs trained on human-generated data are misaligned by default, regardless of what definition of ‘alignment’ is being used. But even with misalignment by default, I’m just less convinced that their capabilities would grow fast enough to be able to cause an existential catastrophe in the near-term, if we use LLM capability improvement trends as a reference.
Thanks for this post. This is generally how I feel as well, but my (exaggerated) model of the AI alignment community would immediately attack me by saying “if you don’t find AI scary, you either don’t understand the arguments on AI safety or you don’t know how advanced AI has gotten”. In my opinion, a few years ago we were concerned about recursively self-improving AIs, and that seemed genuinely plausible and scary. But somehow, that didn’t really happen (or hasn’t happened yet) despite people trying all sorts of ways to make it happen. And instead of an intelligence explosion, what we got was an extremely predictable improvement trend that was a function of only two things, i.e. data and compute. This made me qualitatively update my p(doom) downwards, and I was genuinely surprised that many people went the other way instead, updating upwards as LLMs got better.
I’ve gotten push-back from almost everyone I’ve spoken with about this
I had also expected this reaction, and I had always thought I was the only one who thinks we have basically achieved AGI since ~GPT-3. But looking at the upvotes on this post, I wonder if this is a much more common view.
My first impression was also that axis lines are a matter of aesthetics. But then I browsed The Economist’s visual styleguide and realized they also do something similar, i.e. omit the y-axis line (in fact, they omit the y-axis line on basically all their line / scatter plots, but almost always maintain the gridlines).
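For concreteness, here’s a minimal matplotlib sketch of that convention (illustrative data, and just my own reconstruction of the style, not The Economist’s actual toolchain): hide the y-axis spine but keep horizontal gridlines.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for rendering without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([2020, 2021, 2022, 2023], [1.0, 1.4, 1.9, 2.6])  # illustrative data

# Hide the y-axis line (plus the top/right box), Economist-style
for side in ("left", "top", "right"):
    ax.spines[side].set_visible(False)

# ...but keep horizontal gridlines so values are still readable
ax.yaxis.grid(True, linewidth=0.5)
ax.tick_params(axis="y", length=0)  # drop y tick marks, keep tick labels

fig.savefig("line_plot.png")
```

The x-axis (bottom) spine stays visible, which matches the pattern in their line and scatter plots.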
Here’s also an article they ran about their errors in data visualization, albeit probably fairly introductory for the median LW reader.
I’m pretty sure you have come across this already, but just in case you haven’t:
Strong upvoted. I was a participant in AISC8, in the team that went on to launch AI Standards Lab, which I think counterfactually would not have been launched if not for AISC.
More broadly speaking, my take is that models will continue to smash benchmarks faster than even the most optimistic expectations, but we won’t see an intelligence explosion that is genuinely existentially threatening in a non-misuse way in the next ten years. Benchmarks will become increasingly disconnected from reality.