You caught me… I tend to make overly generalized statements. I am working on being more concise with my language, but my enthusiasm still gets the best of me too often.
You make a good point, but I don’t necessarily see the requirement of massive infrastructure and political will as the primary barrier to achieving such goals. As I see it, any idea, no matter how grand or costly, is achievable so long as a kernel exists at the core of that idea that promises something “priceless”, whether spiritually, intellectually, or materially. For example, a “planet-cracking nuke” can have only one outcome: the absolute end of our world. There is no scenario imaginable in which cracking the planet apart would benefit any group or individual. (Potentially, in the future, there could be benefits to cracking apart a planet we did not actually live on, but in the context of the here and now, a planet-cracking nuke holds no kernel, no promise of something priceless.)
AI fascinates us because, no matter how many horrific outcomes the human mind can conceive of, there is an unshakable sense that AI also holds the key to unlocking answers to questions humanity has sought from the beginning of thought itself. That is a rather large kernel, and it is never going to go dim, despite the very real or the absurdly unlikely risks involved.
So it is this kernel of priceless return at the core of “agent AI” that, for me, makes its eventual creation a certainty on a long enough timeline, not a mere probability.