Or should we expect that a new architecture that’s sufficiently far away from the DL paradigm would actually need some new type of hardware?
My expectation is that it’d be possible to translate any such architecture into a format that runs efficiently on GPUs/TPUs with some additional work, even if its initial formulation is, e.g., neurosymbolic.
Though I do think it’s an additional step the researchers would need to think of and execute, which might delay the doom by years (if the architecture is too inefficient in its initial representation). A toy illustration of what such a translation could look like is sketched below.
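To make the "translate it anyway" intuition concrete, here's a minimal sketch (entirely my own toy example, not any real neurosymbolic system): a discrete rule base, naturally written as a symbolic lookup table, re-expressed as a dense tensor operation so the same computation becomes batched matrix multiplies that GPUs/TPUs handle well.

```python
# Toy sketch (my assumptions, not a real architecture): a "symbolic" rule base
# -- a lookup table mapping each discrete symbol to a successor symbol --
# recompiled into a dense tensor op, so it runs as accelerator-friendly matmuls
# instead of pointer-chasing on a CPU.

import jax
import jax.numpy as jnp

NUM_SYMBOLS = 8

# Symbolic form: rule i -> successor[i] (a tiny rewrite system / state machine).
successor = jnp.array([1, 2, 3, 0, 5, 6, 7, 4])

# Tensorized form: the same rules as a permutation matrix acting on one-hot vectors.
rule_matrix = jax.nn.one_hot(successor, NUM_SYMBOLS)  # shape (8, 8)

@jax.jit
def apply_rules(state_one_hot, steps=3):
    # Each matmul applies the whole rule table to the whole batch in parallel --
    # exactly the kind of dense kernel GPUs/TPUs are built for.
    for _ in range(steps):
        state_one_hot = state_one_hot @ rule_matrix
    return state_one_hot

batch = jax.nn.one_hot(jnp.array([0, 3, 6]), NUM_SYMBOLS)  # three symbols at once
print(jnp.argmax(apply_rules(batch), axis=-1))             # -> [3 2 5]
```

This is obviously a trivial case; the point is only that a computation defined over discrete symbols doesn't have to stay in a serial, hardware-unfriendly representation, and that the rewriting work is an extra engineering step rather than a reason to build new hardware.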