Thank you for taking the time to write/post this and run the related Workshop!
IMHO, we need more people to think deeply about how these things could plausibly play out over the next few years, and to actually spend the time sharing at least their mainline expectations!
So, this is me taking my own advice and spending the time to lay out my "mainline expectations".
An Alternate History of the Future, 2025-2040
This article was informed by my intuitions/expectations with regard to these recent quotes:
“2026… it remains true that existing code can now be much more easily attacked since all you need is an o6 or Claude subscription.” – @L Rudolf L, “A History of the Future, 2025-2040”
“Our internal benchmark is around 50th (best [competitive] programmer in the world) and we’ll hit #1 by the end of the year [2026].” – Sam Altman, 2026
It’s not clear to me how millions of “PhD+ reasoning/coding AI agents” can sustainably coexist on the same internet as the world’s existing software stacks, which are (currently) very vulnerable to being attacked and exploited by such advanced AI agents. Patching all the software on the internet does seem possible, but can it realistically happen before these PhD+ AI agents are released for public use?
This post is quite short and non-specific, but I think it contains some useful predictions/“load-bearing claims”, so it should be very critique-able.
(I admittedly haven’t read through many other “vignette” examples yet, somewhat purposefully, so as not to bias my own intuitions too much.)
I encourage anyone to please do read, comment, critique, challenge, etc. Thank you!