I’ll toss in some predictions of my own. I predict that all of the following things will not happen without a breakthrough substantially more significant than the invention of transformers:
AI inventing new things in science and technology, not via narrow training/design for a specific subtask (e.g., AlphaFold) but roughly the way humans do it. (Confidence: 80%)
AI being routinely used by corporate executives to make strategic decisions, not as a glorified search engine but as a full-fledged advisor. (Confidence: 75%)
As above, but politicians instead of corporate executives. (Confidence: 72%)
AI learning how to drive using a human driving teacher, within a number of lessons similar to what humans take, without causing accidents (that the teacher fails to prevent) and without any additional driving training data or domain-specific design. (Confidence: 67%)
AI winning gold at the IMO, using a math training corpus comparable in size to the number of math problems human contestants see in their lifetimes. (Confidence: 65%)
AI playing superhuman Diplomacy, using a training corpus (including self-play) comparable in size to the number of games played by human players, while facing reputation incentives similar to those of human players. (Confidence: 60%)
As above, but Go instead of Diplomacy. (Confidence: 55%)
Nice!
Do you have any predictions about the first year when AI assistance will give a 2x/10x/100x factor “productivity boost” to AI research?