Re neuralese, online/continual learning, or long-term memory that isn't solely a context-window breakthrough: I'm much more skeptical that such breakthroughs would be easy to integrate on short timelines, because they would likely require architectural changes that can't be made quickly.
The potential for breakthroughs, combined with Moore's law continuing and making lots of compute cheap for researchers, is why my median timelines aren't in the latter half of the century. But getting it working very soon seems much more implausible to me, so I'm much closer to a 0.3% chance per year from 2025-2027.
@Mo Putera @the gears to ascension take the Moore's-law-will-continue point as a prediction that new paradigms like memristors will launch new S-curves of efficiency until we reach the Landauer limit, which is 6.5 OOMs away, and that the current paradigm has 200x more efficiency savings to go:
https://www.forethought.org/research/how-far-can-ai-progress-before-hitting-effective-physical-limits#chip-technology-progress
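To make the headroom claim concrete, here's a quick back-of-the-envelope in Python, treating the 6.5 OOMs-to-Landauer and 200x-within-current-paradigm figures from the linked piece as given: converting 200x to orders of magnitude shows how much of the remaining efficiency gain would have to come from new paradigms like memristors.

```python
import math

# Figures as quoted from the linked Forethought piece (assumptions, not my own estimates):
total_ooms_to_landauer = 6.5       # OOMs of efficiency between today and the Landauer limit
current_paradigm_factor = 200      # remaining efficiency gain within the current chip paradigm

# Convert the 200x factor into orders of magnitude.
current_paradigm_ooms = math.log10(current_paradigm_factor)

# Whatever is left must come from new S-curves (memristors, etc.).
new_paradigm_ooms = total_ooms_to_landauer - current_paradigm_ooms

print(f"Current paradigm: ~{current_paradigm_ooms:.1f} OOMs")
print(f"Left for new paradigms: ~{new_paradigm_ooms:.1f} OOMs")
# i.e. roughly 2.3 OOMs from the current paradigm, ~4.2 OOMs from new ones
```

So on these numbers, most of the remaining efficiency runway (~4 of the 6.5 OOMs) depends on new paradigms actually materializing, not on refinements of current chips.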