I agree, I definitely underestimated video. Before publishing, I had a friend review my predictions and they called out video as being too low, and I adjusted upward in response and still underestimated it.
I’d now agree with 2026 or 2027 for coherent feature film length video, though I’m not sure if it would be at feature film artistic quality (including plot). I also agree with Her-like products in the next year or two!
Personally, I would still expect cloud compute to be used for robotics, but only in ways where latency doesn’t matter (like a planning and reasoning system on top of a smaller local model, doing deeper analysis like “There’s a bag on the floor by the door. Ordinarily it should be put away, but given that it wasn’t there 5 minutes ago, it might be actively in use right now, so I should leave it...”). I’m not sure privacy concerns will trump convenience here, just as they haven’t with phones.
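To make that split concrete, here’s a toy sketch of what I have in mind (all the names here are made up for illustration, not any real robotics stack): a small local policy runs every control tick, while a slow cloud reasoner only revises the high-level goal occasionally, so its latency never blocks the robot.

```python
import threading
import time

class Robot:
    def __init__(self):
        self.goal = "tidy the room"
        self.lock = threading.Lock()

    def local_policy(self, observation, goal):
        # Small on-device model: must respond every tick (tens of ms).
        return f"step toward '{goal}' given {observation}"

    def cloud_reason(self, scene_summary):
        # Large remote model: slow, called rarely, used for contextual
        # judgment calls ("the bag appeared 5 minutes ago, leave it").
        time.sleep(2.0)  # stand-in for network + inference latency
        if "bag recently placed" in scene_summary:
            return "tidy the room, but leave the bag by the door"
        return "tidy the room"

    def reasoning_loop(self):
        # Background thread: re-plan occasionally, not every tick.
        while True:
            new_goal = self.cloud_reason("bag recently placed by the door")
            with self.lock:
                self.goal = new_goal
            time.sleep(10.0)

    def control_loop(self):
        # Foreground loop: stays real-time regardless of the cloud.
        for tick in range(5):
            with self.lock:
                goal = self.goal
            print(self.local_policy(f"camera frame {tick}", goal))
            time.sleep(0.05)

robot = Robot()
threading.Thread(target=robot.reasoning_loop, daemon=True).start()
robot.control_loop()
```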
I also now think virtual agents will start to become a big thing in 2025 and 2026, doing some kinds of remote work, or sizable chunks of existing jobs, autonomously (while still not being able to automate most jobs end to end)!
Two days later, and I might already have to adjust the timelines.
Nvidia’s new Digits costs $3K and is the size of a Mac mini. Two of them can supposedly run a 400B-parameter language model, which is crazy. So maybe the hardware issues for robotics aren’t as persistent as I expected.
Also, Hailuo now has a single-image reference mode that works like a LoRA. It’s super consistent for faces, even if the rest is a bit quirky.