Reinterpreting “AI and Compute”

Link post

Some arguments for why the recent evidence about how quickly compute has been increasing, and how much of the rapid progress in machine learning it has been responsible for, might mean that we should be less worried about short timelines, not more.

[...] Overall, it seems pretty common to interpret the OpenAI data as evidence that we should expect extremely capable systems sooner than we otherwise would.
However, I think it’s important to note that the data can also easily be interpreted in the opposite direction. The opposite interpretation goes like this:
1. If we were previously underestimating the rate at which computing power was increasing, this means we were overestimating the returns on it: the progress we have actually observed took more compute than we thought, so each increase in compute bought less progress than we assumed.
2. In addition, if we were previously underestimating the rate at which computing power was increasing, this means that we were overestimating how sustainable its growth is, since the faster spending on compute grows, the sooner it runs into economic and physical limits.
3. Let’s suppose, as the original post does, that increasing computing power is currently one of the main drivers of progress in creating more capable systems. Then, barring any major changes to the status quo, it seems like we should expect progress to slow down fairly soon, and we should expect to be underwhelmed by how far along we are when the slowdown hits (the sketch after this list gives a rough sense of the timescale involved).
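
To make the sustainability point concrete, here is a minimal back-of-the-envelope sketch in Python. The ~3.4-month doubling time is the figure reported in OpenAI's "AI and Compute" post; everything else (the hardware price-performance trend, the current cost of a frontier training run, and the spending ceiling) is a hypothetical number chosen only to illustrate how quickly an exponential compute trend outruns any fixed budget, not a claim from the original post.

```python
# Toy sustainability check for the argument above.
# Assumptions (only the first is from OpenAI's post; the rest are illustrative):
#   - training compute for the largest runs doubles every 3.4 months,
#   - compute per dollar doubles every 24 months (rough Moore's-law-style guess),
#   - a frontier training run costs $10M today (hypothetical),
#   - spending beyond $10B per run is treated as implausible (hypothetical).

import math

COMPUTE_DOUBLING_MONTHS = 3.4      # growth of compute used in the largest runs
PRICE_PERF_DOUBLING_MONTHS = 24.0  # growth of compute per dollar (assumed)
COST_TODAY_USD = 10e6              # assumed cost of a frontier run now
COST_CEILING_USD = 10e9            # assumed maximum anyone will pay per run

def cost_after(months: float) -> float:
    """Dollar cost of a frontier run after `months`, under the assumptions above."""
    compute_growth = 2 ** (months / COMPUTE_DOUBLING_MONTHS)
    price_perf_growth = 2 ** (months / PRICE_PERF_DOUBLING_MONTHS)
    return COST_TODAY_USD * compute_growth / price_perf_growth

# Both factors are exponentials, so the time to hit the ceiling has a closed form:
# cost_after(t) = COST_TODAY_USD * 2^(t * g), with g as below.
g = 1 / COMPUTE_DOUBLING_MONTHS - 1 / PRICE_PERF_DOUBLING_MONTHS
months_until_ceiling = math.log2(COST_CEILING_USD / COST_TODAY_USD) / g

print(f"Per-run cost multiplies by roughly {cost_after(12) / COST_TODAY_USD:.1f}x per year")
print(f"Spending ceiling reached after roughly {months_until_ceiling / 12:.1f} years")
```

Under these particular assumptions the cost of a frontier run multiplies by roughly 8x per year and crosses the ceiling within about three to four years; different hypothetical numbers shift the date but not the basic shape, which is the sense in which the trend looks hard to sustain.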