So I’m kind of not very satisfied with this defence.
Not-very-charitably put, my impression now is that all the technical details in the forecast were free parameters fine-tuned to support the authors’ intuitions[1], when they weren’t outright ignored. Now, I also gather that those intuitions were themselves supported by playing around with said technical models, and there’s something to be said for doing the math, then burning the math and going with your gut. I’m not saying the forecast should be completely dismissed because of that.
… But “the authors, who are smart people with a good track record of making AI-related predictions, intuitively feel that this is sort of right, and they were able to come up with functions whose graphs fit those intuitions” is a completely different kind of evidence compared to “here’s a bunch of straightforward extrapolations of existing trends, with non-epsilon empirical support, that the competent authors intuitively think are going to continue”.
Like… I, personally, didn’t put much stock in the technical-analysis part to begin with[2]; I only updated on the “these authors have these intuitions” part (to which I don’t give trivial weight!). But if I did interpret the forecast as being based on intuitively chosen but non-tampered straightforward extrapolations of existing trends, I think I would be pretty disappointed right now. You should’ve maybe put a “these graphs are for illustrative purposes only” footnote somewhere, like this one did.
I don’t feel that “this is the least-bad forecast that exists” is a good defence. Whether an analysis is technical or vibes-based is a spectrum, but it isn’t graded on a curve.
I’m kind of split about this critique, since the forecast did end up as good propaganda if nothing else. But I do now feel that the marketing around it was kind of misleading, and we probably care about maintaining good epistemics here or something.
[1] If you’ve picked which function to fit, and it’s very sensitive to small parameter changes, and you pick the parameters that intuitively feel right, I think you might as well draw the graph by hand.
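As a toy illustration (this is emphatically not AI 2027’s actual model, and every number below is invented): take a time horizon whose doubling time itself shrinks by some hand-set factor each doubling, and watch how far the “crosses a work-year” date moves when you nudge that one intuited knob.

```python
# Toy illustration only: NOT the AI 2027 model; all numbers are invented.
# A time horizon that keeps doubling, with the doubling time shrinking by
# a hand-set factor each doubling ("superexponential" growth).

def years_until_crossing(start_hours=8.0, doubling_years=0.5,
                         shrink=0.9, target_hours=2000.0):
    """Years until the horizon first reaches target_hours (~a work-year)."""
    years, horizon, d = 0.0, start_hours, doubling_years
    while horizon < target_hours:
        years += d
        horizon *= 2
        d *= shrink  # the intuited knob
    return years

for shrink in (1.0, 0.9, 0.8):
    print(f"shrink={shrink:.1f} -> crossing in ~{years_until_crossing(shrink=shrink):.1f} years")
```

With everything else held fixed, a ~10% nudge to that single intuited parameter moves the crossing date by around a year, which is exactly the regime where “I picked the value that felt right” is doing most of the work.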
[2] Because I don’t think AGI/researcher-level AIs have been reduced to an engineering problem: I think theoretical insights are missing, which means no straight-line extrapolation is possible and we can’t do better than a memoryless exponential distribution. And whether this premise is true is itself an intuitive judgement call, and even fully rigorous technical analyses premised on an intuitive judgement call are only as rigorous as that judgement call.
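To spell out the “memoryless” part: if the only thing you’re willing to posit is some constant per-year hazard rate $\lambda$ for the missing insights arriving, then having already waited $s$ years without them tells you nothing about the remaining wait:

$$P(T > t) = e^{-\lambda t}, \qquad P(T > s + t \mid T > s) = \frac{e^{-\lambda (s+t)}}{e^{-\lambda s}} = e^{-\lambda t} = P(T > t).$$

So the best you can do is argue about the value of $\lambda$, which is again an intuitive judgement call.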
I think the actual epistemic process that happened here is something like:
The AI 2027 authors had some high-level arguments that AI might be a very big deal soon
They wrote down a bunch of concrete scenarios that seemed like they would follow from those arguments and checked if they sounded coherent and plausible and consistent with lots of other things they thought about the world
As part of that checking, one thing they checked was whether these scenarios would be some kind of huge break from existing trends, which I do think is a hard thing to do, but is an important thing to pay attention to
The right way to interpret the “timeline forecast” sections is not as “here is a simple extrapolation methodology that generated our whole worldview” but instead as “here is some methodology that sanity-checked that our worldview is not in obvious contradiction to reasonable assumptions about economic growth”
But like, at least to me, it’s clear that the beliefs about takeoff and the exact timelines could not be, and obviously should not be, considered the result of a straightforward and simple extrapolation exercise. I think such an exercise would be pretty doomed, and a claim to objectivity in that space seems misguided. I think it’s plausible that some parts of the Timelines Forecast supplement ended up communicating too much objectivity here, but IDK, I think AI 2027 as a whole communicated this process pretty well.
But like, at least to me, it’s clear that the beliefs about takeoff and the exact timelines could not be, and obviously should not be, considered the result of a straightforward and simple extrapolation exercise
Counterpoint: the METR agency-horizon doubling trend. It has its issues, but I think “the point at which an AI could complete a year-long software-engineering/DL research project” is a reasonable cutoff point for “AI R&D is automated”, and it seems to be the kind of non-overly-fine-tuned model with non-epsilon empirical backing that I’m talking about, in a way AI 2027 graphs are not.
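For concreteness, this is the kind of two-knob extrapolation I have in mind (the horizon and doubling time below are placeholder round numbers, not METR’s published estimates):

```python
# Back-of-the-envelope METR-style extrapolation. Both inputs are
# placeholder round numbers, not METR's published estimates; the point is
# that there are only two empirically measured knobs here.
import math

current_horizon_hours = 4.0   # assumed ~50%-success time horizon today
doubling_time_months = 7.0    # assumed (constant) doubling time
target_hours = 2000.0         # roughly one work-year of focused effort

doublings = math.log2(target_hours / current_horizon_hours)
years_out = doublings * doubling_time_months / 12.0
print(f"~{doublings:.1f} doublings -> ~{years_out:.1f} years to a year-long horizon")
```

The output obviously moves if you plug in different measured values, but both inputs are things METR actually measures, which is the sense in which I’m calling this “non-overly-fine-tuned”.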
Or maybe the distinction isn’t as stark in others’ minds as in mine, I dunno.
As part of that checking, one thing they checked was whether these scenarios would be some kind of huge break from existing trends, which I do think is a hard thing to do
Is it? See titotal’s six-stories section. If you’re choosing which function to fit, with a bunch of free parameters you set manually, it seems pretty trivial to come up with a “trend” that would fit any model you have.
Counterpoint: the METR agency-horizon doubling trend. It has its issues, but I think “the point at which an AI could complete a year-long software-engineering/DL research project” is a reasonable cutoff point for “AI R&D is automated”, and it seems to be the kind of non-overly-fine-tuned model with non-epsilon empirical backing that I’m talking about, in a way AI 2027 graphs are not.
I think the METR horizon doubling trend stuff doesn’t stand on its own, and it’s really not many datapoints.
I also really don’t think, without a huge number of assumptions, that “the point at which an AI could complete a year-long software-engineering/DL research project” is a good proxy for “AI R&D automation”, and indeed I want to avoid exactly that kind of sleight of hand. It only makes sense to someone who has a much more complicated worldview about how general AI is likely to be, how much the tasks METR measured are likely to generalize, and many other components. What it does make sense for is as a sanity-check on that broader worldview.
I think the METR horizon doubling trend stuff doesn’t stand on its own, and it’s really not many datapoints.
It’s less about the datapoints and more about the methodology.
I also really don’t think, without a huge number of assumptions, that “the point at which an AI could complete a year-long software-engineering/DL research project” is a good proxy for “AI R&D automation”
Fair, I very much agree. But my point here is that the METR benchmark works as some additional technical/empirical evidence towards some hypotheses over others, evidence that’s derived independently of one’s intuitions, in a way that more fine-tuned graphs don’t.
Those two things sound extremely similar to me; I would appreciate some explanation of, or pointer to, why they seem quite different.
Current guess: Is the idea that automation also includes a lot of (a) management, and (b) research taste in choosing projects, such that being able to complete a year-long project is only a lower bound, not a central target?
Yeah, I mean, the task distribution is just hugely different. When METR measures software-development tasks, they mean things in the reference class of well-specified tasks with tests basically already written.
As a concrete example, if you use a random other distribution of tasks as your base for horizon length, like forecasting performance per unit of time, or writing per unit of time, or graphic design per unit of time, you get drastically different time horizon curves.
This doesn’t make METR’s curves unreasonable as a basis, but you really need a lot of assumptions to get from “these curves cross the one-year mark here” to “the same year we will get ~fully automated AI R&D” (and indeed I would not currently believe the latter).
Preliminary work showing that the METR trend is approximately average:
I don’t know the details of all of these task distributions, but clearly these are not remotely sampled uniformly from the set of all tasks necessary to automate AI R&D?
Yes, in particular the concern about benchmark tasks being well-specified remains. We’ll need both more data (probably collected from AI R&D tasks in the wild) and more modeling to get a forecast for overall speedup.
However, I do think that if we have a wide enough distribution of tasks, AIs outperform humans on all of them at task lengths that should imply humans spend 1/10th the labor, and yet AI R&D has not been automated, then something strange must be happening. So looking at different benchmarks is partial progress towards understanding the gap between long time horizons on METR’s task set and actual AI R&D uplift.
(agree, didn’t intend to imply that they were)
since the forecast did end up as good propaganda if nothing else
Just responding to this local comment you made: I think it’s wrong to make “propaganda” to reach end Y, even if you think end Y is important. If you have real reasons for believing something will happen, you shouldn’t have to lie, exaggerate, or otherwise mislead your audience to make them believe it, too.
So I’m arguing that you shouldn’t have mixed feelings because ~”it was valuable propaganda at least.” Again, not trying to claim that AI 2027 “lied”—just replying to the quoted bit of reasoning.
I phrased that badly/compressed too much. The background feeling there was that my critique may be of an overly nitpicky type that no normal person would care about, but the act-of-critiquing was still an attack on the report if viewed through the lens of a social-status game, which may (on the margins) unfairly bias someone against the report.
Like, by analogy, imagine a math paper with a valid but hard-to-follow proof of some conjecture, which gets tons of negative attention due to bad formatting. This may unfairly taint the core result by association, even though the proof is completely valid.
I’m kind of split about this critique, since the forecast did end up as good propaganda if nothing else. But I do now feel that the marketing around it was kind of misleading, and we probably care about maintaining good epistemics here or something.
I’m interested in you expanding on which parts of the marketing were misleading. Here are some quick, more specific thoughts:
Overall AI 2027 comms
On our website frontpage, I think we were pretty careful not to overclaim. We say that the forecast is our “best guess”, “informed by trend extrapolations, wargames, …” Then in the “How did we write it?” box we basically just say it was written iteratively and informed by wargames and feedback. In “Why is it valuable?” we say “We have set ourselves an impossible task. Trying to predict how superhuman AI in 2027 would go is like trying to predict how World War 3 in 2027 would go, except that it’s an even larger departure from past case studies. Yet it is still valuable to attempt, just as it is valuable for the US military to game out Taiwan scenarios.” I don’t think we said anywhere that it was backed up by straightforward, strongly empirically validated extrapolations.
In our initial tweet, Daniel said it was a “deeply researched” scenario forecast. This still seems accurate to me; we spent quite a lot of time on it (both the scenario and supplements) and I still think our supplementary research is mostly state of the art, though I can see how people could take it too strongly.
In various follow-up discussions, I think Scott and others sometimes pointed to the length of all of the supplementary research as justification for taking the scenario seriously. I still think this mostly holds up but again I think it could be interpreted in the wrong way.
Probably there has been similar discussion in various podcast appearances etc., but I haven’t listened to most of those and don’t remember how this sort of thing was presented in the ones I did listen to.
Timelines-forecast-specific comms
We do not prominently and explicitly say in the timelines forecast that it relies on a bunch of non-obvious parameter choices rather than just empirical trend extrapolation, so I agree that people could come away with the wrong impression.
Plausibly we should have had / I should add a disclaimer saying something like this.
I have been frustrated with previous forecasts for not communicating this well, so plausibly I’m being hypocritical.
One reason I’m hesitant to add this is that I think it might update non-rationalists too much toward thinking it’s useless, when in fact I think it’s pretty informative. But this might be motivated reasoning toward the choice I made before. I might add a disclaimer.
I didn’t explicitly consider adding a prominent disclaimer previously, perhaps because I was typical-minding and thinking it was obvious that any AGI timelines forecast will rely on intuitively estimated parameters.
However, I think that including 3 different people/groups’ forecasts very prominently does implicitly get across the idea that different parameter estimations can lead to very different results. This is especially true for including the FutureSearch aggregate, which has a within-model median of 2032 rather than 2027 or 2028.
There’s a graph at the top of the timelines forecast with all 3 of our distributions, and in my tweet thread about the timelines forecast this was in my top tweet.
As I’ve said, I agree that we messed up to some extent re: the time horizon prediction graph. I might write more about this in response to TurnTrout.
Not-very-charitably put, my impression now is that all the technical details in the forecast were free parameters fine-tuned to support the authors’ intuitions, when they weren’t outright ignored. Now, I also gather that those intuitions were themselves supported by playing around with said technical models, and there’s something to be said for doing the math, then burning the math and going with your gut. I’m not saying the forecast should be completely dismissed because of that.
I tried not to just fine-tune the parameters to support my existing beliefs, though of course I probably implicitly did to some extent. I agree that the number of free parameters is a reason to distrust our forecasts.
FWIW, my and Daniel’s timelines beliefs have both shifted some as a result of our modeling. Mine initially got shorter, then got a bit longer due to the most recent update; Daniel moved his timelines out to 2028 in significant part because of our timelines model.
… But “the authors, who are smart people with a good track record of making AI-related predictions, intuitively feel that this is sort of right, and they were able to come up with functions whose graphs fit those intuitions” is a completely different kind of evidence compared to “here’s a bunch of straightforward extrapolations of existing trends, with non-epsilon empirical support, that the competent authors intuitively think are going to continue”.
Mostly agree. I would say we have more than non-epsilon empirical support, though, because of METR’s time horizons work and RE-Bench. But I agree that a bunch of the estimated parameters don’t have much empirical support to rely on.
But if I did interpret the forecast as being based on intuitively chosen but non-tampered straightforward extrapolations of existing trends, I think I would be pretty disappointed right now.
I don’t agree with the connotation of “non-tampered,” but otherwise agree re: relying on straightforward extrapolations. I don’t think it’s feasible to only rely on straightforward extrapolations when predicting AGI timelines.
You should’ve maybe put a “these graphs are for illustrative purposes only” footnote somewhere, like this one did.
I think “illustrative purposes only” would be too strong. The graphs are the result of an actual model that I think is reasonable to give substantial weight to in one’s timelines estimates (if you’re only referring to the specific graph that I’ve apologized for, then I agree we should have moved more in that direction re: more clear labeling).
I don’t feel that “this is the least-bad forecast that exists” is a good defence. Whether an analysis is technical or vibes-based is a spectrum, but it isn’t graded on a curve.
I’m not sure exactly how to respond to this. I agree that the absolute level of usefulness of the timelines forecast also matters, and I probably think that our timelines model is more useful than you do. But I also think that the relative usefulness matters quite a bit for the decision of whether to release and publicize the model. I think maybe this critique is primarily coupled with your points about communication issues.
[Unlike the top-level comment, Daniel hasn’t endorsed this; this is just Eli.]
I’m interested in you expanding on which parts of the marketing were misleading
Mostly this part, I think:
In various follow-up discussions, I think Scott and others sometimes pointed to the length of all of the supplementary research as justification for taking the scenario seriously. I still think this mostly holds up but again I think it could be interpreted in the wrong way.
Like, yes, the supplementary materials definitely represent a huge amount of legitimate research that went into this. But the forecasts are “informed by” this research, rather than being directly derived from it, and the pointing-at kind of conveys the latter vibe.
I have been frustrated with previous forecasts for not communicating this well
Glad you get where I’m coming from; I wasn’t wholly sure how legitimate my complaints were.
One reason I’m hesitant to add [a disclaimer about non-obvious parameter choices] is that I think it might update non-rationalists too much toward thinking it’s useless, when in fact I think it’s pretty informative
I agree that this part is tricky, hence my being hesitant about fielding this critique at all. Persuasiveness isn’t something we should outright ignore, especially with something as high-profile as this. But also, the lack of such a disclaimer opens you up to takedowns such as titotal’s, and if one of those becomes high-profile (which it already might have?), that’d potentially hurt the persuasiveness more than a clear statement would have.
There’s presumably some way to have your cake and eat it too here: to correctly communicate how the forecast was generated, but in terms that wouldn’t lead to it being dismissed by people at large.
I think “illustrative purposes only” would be too strong.
Yeah, sorry, I was being unnecessarily hyperbolic there.