Except that Yudkowsky had actually made the predictions in public. However, he didn’t know in advance that the AIs would be trained as neural networks that are OOMs less efficient at keeping context[1] in mind. Other potential mispredictions are Yudkowsky’s arguments that capabilities could be greatly increased starting from a simulation of a human brain[2], or that a human brain could be simulated running ~6 OOMs faster:
Yudkowsky’s case for a superfast human brain
The fastest observed neurons fire 1000 times per second; the fastest axon fibers conduct signals at 150 meters/second, a half-millionth the speed of light; each synaptic operation dissipates around 15,000 attojoules, which is more than a million times the thermodynamic minimum for irreversible computations at room temperature (kT₃₀₀ ln(2) = 0.003 attojoules per bit). It would be physically possible to build a brain that computed a million times as fast as a human brain, without shrinking the size, or running at lower temperatures, or invoking reversible computing or quantum computing. If a human mind were thus accelerated, a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight and a half hours. Vinge (1993) referred to such sped-up minds as “weak superhumanity”: a mind that thinks like a human but much faster.
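As a quick sanity check on the quoted numbers (a minimal sketch; the 10⁶ speedup factor is taken directly from the quote):

```python
# Sanity-check the arithmetic in the quoted passage: at a 1e6x speedup,
# how much outside (physical) time corresponds to a subjective year,
# and how long does a subjective millennium take?
SPEEDUP = 1e6
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 s

physical_seconds_per_subjective_year = SECONDS_PER_YEAR / SPEEDUP
physical_hours_per_subjective_millennium = (
    1000 * physical_seconds_per_subjective_year / 3600
)

print(physical_seconds_per_subjective_year)      # ~31.6 seconds
print(physical_hours_per_subjective_millennium)  # ~8.8 hours
```

This matches the quote’s “31 physical seconds” per subjective year and “eight and a half hours” per subjective millennium.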
However, as Turchin points out in his book[3] written in Russian, simulating a human brain requires[4] just 1e15 FLOP/second, or less than 1e22 FLOP/month.
Turchin’s argument in Russian
Для создания ИИ необходимо, как минимум, наличие достаточно мощного компьютера. Сейчас самые мощные компьютеры имеют мощность порядка 1 петафлопа (10¹⁵ операций с плавающей запятой в секунду). По некоторым оценкам, этого достаточно для эмуляции человеческого мозга, а значит, ИИ тоже мог бы работать на такой платформе. Сейчас такие компьютеры доступны только очень крупным организациям на ограниченное время. Однако закон Мура предполагает, что мощность компьютеров возрастёт за 10 лет примерно в 100 раз, т. е., мощность настольного компьютера возрастёт до уровня терафлопа, и понадобится только 1000 настольных компьютеров, объединённых в кластер, чтобы набрать нужный 1 петафлоп. Цена такого агрегата составит около миллиона долларов в нынешних ценах – сумма, доступная даже небольшой организации. Для этого достаточно реализовать уже почти готовые наработки в области многоядерности (некоторые фирмы уже сейчас предлагают чипы с 1024 процессорами) и уменьшения размеров кремниевых элементов.
ChatGPT’s translation into English
To create AI, at the very least, a sufficiently powerful computer is required. Currently, the most powerful computers have a performance of about 1 petaflop (10¹⁵ floating-point operations per second). According to some estimates, this is enough to emulate the human brain, which means that AI could also run on such a platform. At present, such computers are available only to very large organizations for limited periods of time. However, Moore’s Law suggests that computer performance will increase roughly 100-fold over the next 10 years. That is, the performance of a desktop computer will reach the level of a teraflop, and only 1,000 desktop computers connected in a cluster would be needed to achieve the required 1 petaflop. The cost of such a system would be about one million dollars at today’s prices—a sum affordable even for a small organization. To achieve this, it is enough to implement the nearly completed developments in multicore technology (some companies are already offering chips with 1,024 processors) and in reducing the size of silicon elements.
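Turchin’s 1e15 FLOP/s figure cashes out per month as follows (a quick check of the “less than 1e22 FLOP/month” claim above):

```python
# Convert Turchin's estimate of 1e15 FLOP/s for brain emulation
# into a monthly compute budget.
FLOP_PER_SECOND = 1e15               # Turchin's brain-emulation estimate
SECONDS_PER_MONTH = 30 * 24 * 3600   # ~2.59e6 s in a 30-day month

flop_per_month = FLOP_PER_SECOND * SECONDS_PER_MONTH
print(f"{flop_per_month:.2e}")       # ~2.59e+21, i.e. less than 1e22
```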
My take on these issues can be found in collapsible sections here and here.
A case against the existence of an architecture more efficient than the human brain can be found in Jacob Cannell’s post. But it doesn’t rule out a human brain trained for millions of years.
Unfortunately, the book’s official English translation is of too low quality.
Fortunately, the simulation requires OOMs more dynamic memory.
IMO, there’s another major misprediction, and I’d argue that we don’t even need LLMs to establish it as one: the prediction that within a few days/weeks/months we would go from AI almost totally incapable of intellectual work to AI that can overpower humanity.
This comment also describes what I’m talking about:
How takeoff used to be viewed as occurring in days, weeks or months, from being a cow to being able to place ringworlds around stars:
(Yes, the Village Idiot to Einstein post also emphasized the vastness of the space above us, which is what Adam Scholl claimed, and I basically agree with that claim; the issue is that there’s another claim that’s also being made.)
The basic reason for this misprediction is that, as it turns out, human variability is pretty wide, and the fact that human brains are very similar is basically no evidence against that (I was being stupid about this in 2022):
The range of human intelligence is wide, actually.
And also, no domain has actually had a takeoff as fast as Eliezer Yudkowsky predicted, in either the Village Idiot to Einstein picture or his own forecasts; Ryan Greenblatt and David Matolcsi have already made these arguments, so I merely need to link them (1, 2, 3).
Also, as a side note, I disagree with Jacob Cannell’s post, because it’s not actually valid to compare brain FLOPs to computer FLOPs in the way Jacob Cannell does:
Why it’s not valid to compare brain FLOPs to computer FLOPs in the way Jacob Cannell does, part 1
Why it’s not valid to compare brain FLOPs to computer FLOPs in the way Jacob Cannell does, part 2
I generally expect it to be at least 4 OOMs better, which cashes out to at least 3e19 FLOPs per Joule:
The limits of chip progress/physical compute in a small area assuming we are limited to irreversible computation
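For context on the 3e19 FLOPs-per-Joule figure, here is a rough sketch comparing it to the Landauer limit at room temperature (the bits-erased-per-FLOP interpretation is my own illustrative assumption, not a claim from the linked posts):

```python
import math

# Compare the 3e19 FLOP/J efficiency figure against the Landauer limit
# for irreversible computation at room temperature.
K_BOLTZMANN = 1.380649e-23  # J/K
T = 300                     # Kelvin (room temperature)

landauer_j_per_bit = K_BOLTZMANN * T * math.log(2)  # ~2.87e-21 J per bit erased

target_flop_per_joule = 3e19
j_per_flop = 1 / target_flop_per_joule              # ~3.3e-20 J per FLOP

# How many Landauer-limit bit erasures fit into the energy of one such FLOP:
bits_per_flop = j_per_flop / landauer_j_per_bit
print(f"{landauer_j_per_bit:.2e} J/bit, {bits_per_flop:.1f} bit-erasures per FLOP")
```

So 3e19 FLOP/J still sits roughly an order of magnitude above the irreversible-computing floor, consistent with the assumption that we are limited to irreversible computation.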
(Yes, I’m doing a lot of linking because other people have already done the work; I just want to share it rather than redo it all over again.)
@StanislavKrym I’m tagging you since I significantly edited the comment.