Great calls for 2024, I’d say most are at least partially accurate.
However, looking at 2026, you definitely underestimated the pace of text-to-video development, as did I. Given that Veo 2 can already generate sequences with cuts and keep the same subject consistent across both clips, 60-second consistency will probably be reached in 2025. And as for quality above DALL-E 3’s level, that has already been surpassed.
I’d say in late 2026 at the earliest, or more realistically late 2027 because of compute constraints, we’ll see a product that can generate a coherent feature-film-length video, optionally photorealistic.
As for humanoid robots, I’d say they’ll be market-ready at a reasonable price in 2027 or 2028. It doesn’t make sense to look at cloud SOTA here, because these robots will likely have to use edge compute due to privacy concerns once they go mainstream in households, and that in turn draws a lot of energy when running in real time across modalities. So there are multiple hardware issues to solve.
I’m also betting that there will be a Her-like product in the next 12 to 16 months that is indistinguishable from the movie version.
I agree, I definitely underestimated video. Before publishing, I had a friend review my predictions and they called out video as being too low, and I adjusted upward in response and still underestimated it.
I’d now agree with 2026 or 2027 for coherent feature-film-length video, though I’m not sure whether it will be at feature-film artistic quality (including plot). I also agree with Her-like products in the next year or two!
Personally, I would still expect cloud compute to be used for robotics, but only in ways where latency doesn’t matter (like a planning and reasoning system on top of a smaller local model, doing deeper analysis like “There’s a bag on the floor by the door. Ordinarily it should be put away, but given that it wasn’t there 5 minutes ago, it might be actively used right now, so I should leave it...”). I’m not sure the privacy concerns will trump convenience, just as they haven’t with phones.
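To make that split concrete, here’s a minimal sketch of the architecture I have in mind: a fast local policy keeps the control loop responsive while a slower cloud reasoner is consulted asynchronously for latency-tolerant judgment calls. All the names here (`LocalPolicy`, `cloud_reason`, the timing constants) are hypothetical placeholders, not any real robot API.

```python
# Hypothetical sketch: fast on-device policy + slow cloud reasoner.
import asyncio


class LocalPolicy:
    """Small on-device model: must respond within the control-loop budget."""

    def act(self, observation: str) -> str:
        # e.g. navigate around the bag without deciding what to do with it
        return f"avoid({observation})"


async def cloud_reason(observation: str) -> str:
    """Big cloud model: seconds of latency are fine for planning decisions."""
    await asyncio.sleep(2.0)  # stand-in for network round trip + inference
    return f"leave in place: '{observation}' appeared recently, may be in use"


async def control_loop(policy: LocalPolicy, observation: str) -> None:
    # Kick off the slow deliberation without blocking the fast loop.
    plan = asyncio.create_task(cloud_reason(observation))
    for _ in range(20):  # fast local ticks continue while the cloud thinks
        policy.act(observation)
        await asyncio.sleep(0.1)
    print("cloud verdict:", await plan)  # apply the plan once it arrives


asyncio.run(control_loop(LocalPolicy(), "bag on the floor by the door"))
```

The point of the design is that the robot never stalls waiting on the network; the cloud verdict just upgrades its behavior when it arrives.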
I also now think virtual agents will start to become a big thing in 2025 and 2026, doing some kinds of remote work, or sizable chunks of existing jobs, autonomously (while still not being able to automate most jobs end to end)!
That was two days ago, and I might already have to adjust the timelines.
Nvidia’s new Digits costs $3K and is the size of a Mac Mini. Two of them can supposedly run a 400B-parameter language model, which is crazy. So maybe the hardware issues aren’t as persistent for robotics as I thought.
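A quick back-of-envelope check on why that claim is plausible, assuming 128 GB of unified memory per unit and 4-bit quantized weights (both assumptions on my part, not confirmed specs):

```python
# Rough memory math for a 400B-parameter model on two linked units.
# Assumptions (mine, not confirmed specs): 128 GB unified memory per unit,
# weights quantized to 4 bits (0.5 bytes per parameter).
params = 400e9
bytes_per_param = 0.5                      # 4-bit quantization
weights_gb = params * bytes_per_param / 1e9
available_gb = 2 * 128                     # two linked units
print(f"weights: ~{weights_gb:.0f} GB of {available_gb} GB available")
# -> weights: ~200 GB of 256 GB available, leaving room for KV cache etc.
```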
Also, Hailuo now has a single-image reference mode that works like a LoRA. It’s super consistent for faces, even if the rest is a bit quirky.