I’d be interested in anyone’s thoughts on when to use this vs., e.g., METR’s time horizon. The latter is of course more coding-focused than this general-purpose compilation, but that might be a feature rather than a bug for our purposes (predicting takeoff).
Here’s one framing: getting a higher ECI score requires making progress on (multiple) benchmarks that other models find difficult. Making progress on METR instead involves being more consistently successful at longer coding tasks.
So ECI tracks general capabilities on a “difficulty-weighted” scale, and seems better suited to understanding the pace of progress in general, but it’s also an abstract number. There’s currently no mapping like “ECI of X == AGI”, or a human ECI baseline. On the other hand, METR’s benchmark has a nice concrete interpretation, but is more narrow.
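To make the “difficulty-weighted” framing concrete, here’s a toy scoring rule (purely illustrative; this is not the actual ECI methodology, which is more sophisticated): weight each benchmark by the fraction of other models that fail it, so solving something other models find hard counts for more.

```python
# Toy difficulty-weighted capability score (illustrative only; NOT the
# actual ECI methodology). A benchmark's weight is the fraction of the
# other models that fail it, so progress on benchmarks that other models
# find difficult contributes more to the score.

def difficulty_weighted_score(model, results):
    """results: dict of model -> dict of benchmark -> bool (solved?)"""
    others = [m for m in results if m != model]
    score = 0.0
    for bench, solved in results[model].items():
        # Difficulty = fraction of other models that fail this benchmark.
        fail_rate = sum(not results[m].get(bench, False) for m in others) / len(others)
        if solved:
            score += fail_rate
    return score

results = {
    "A": {"easy": True, "hard": False},
    "B": {"easy": True, "hard": False},
    "C": {"easy": True, "hard": True},
}

# "C" gets full credit for "hard" (which A and B both fail) and no extra
# credit for "easy" (which everyone solves).
print(difficulty_weighted_score("C", results))  # 1.0
print(difficulty_weighted_score("A", results))  # 0.0
```

The abstractness mentioned above falls out of this directly: the resulting number is only defined relative to the model pool and benchmark set, so it has no standalone interpretation the way a time horizon does.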
We’re working on mapping ECI to more interpretable metrics (in fact, METR Time Horizons is one candidate), as well as allowing users to choose a subset of underlying benchmarks if they would prefer to weight ECI more heavily towards particular skills like coding.
Also note that we don’t currently include METR’s benchmarks as inputs to ECI, but we may add them in future iterations.
What is uniquely interesting/valuable about METR time horizons is that the score is meaningful and interpretable. “Can do software tasks that would take an expert 2h with 50% success probability” is very specific. “Has score y on benchmark x” is only useful for comparisons; it doesn’t tell you what’s going to happen when models reach score z.
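As I understand METR’s approach, the horizon comes from fitting success probability against task length and reading off the 50% crossing point. As a crude, deterministic stand-in for that fit (illustrative only, not their actual method), you can take the geometric midpoint between the longest task a model succeeds at and the shortest one it fails:

```python
# Crude stand-in for a 50% time horizon (illustrative; METR's actual
# method fits a curve to success probability vs. task length).
import math

def crude_horizon(durations_min, successes):
    """Geometric midpoint between the longest task duration the model
    succeeds at and the shortest duration it fails at, in minutes."""
    longest_ok = max(t for t, s in zip(durations_min, successes) if s)
    shortest_fail = min(t for t, s in zip(durations_min, successes) if not s)
    return math.sqrt(longest_ok * shortest_fail)

# Hypothetical results: success on tasks up to 30 min, failure beyond.
durations = [1, 2, 5, 15, 30, 60, 120, 240]
outcomes  = [1, 1, 1,  1,  1,  0,   0,   0]
print(round(crude_horizon(durations, outcomes), 1))  # 42.4
```

The point of the toy version is just that the output lands in units (minutes of expert task time) that mean something on their own, which is exactly what a raw benchmark score lacks.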