My take on Anthropic rushing Claude’s capabilities is that this is “the least horrible version of the worst idea in human history.”
To be 100% clear: No, we absolutely should not build a superhuman intelligence that we do not understand. If we do, then evolutionary biology, basic economics, and the history of politics and colonialism suggest that the superhuman intelligence will wind up making the decisions about what happens to humans.[1]
But it’s apparent that yes, we are going to try to build a superhuman intelligence, despite that possibly being the worst idea ever. And many of the people trying to do this are clearly neither people who should be trusted to try to build an ethical superintelligence, nor people you’d want to actually be in a position to control a superintelligence.
So my personal take is that—among catastrophically bad ideas which have an excellent chance of causing human extinction—Anthropic currently appears to be above replacement level.
I argue for this position at greater length in my post history. But the gist of my argument is that (1) even human-level intelligence is likely fundamentally “illegible”, and thus impossible to control in any rigorous sense that will reliably survive continual learning, time, and differential replication, and (2) in general, the history of biology and politics suggests that if your labor is economically[2] and evolutionarily obsolete, and if resources are finite, then you’re likely to have a bad time.[3] The Law of Comparative Advantage assumes that populations are roughly fixed and that you’re not in competition with a replicator that can use resources even more efficiently by displacing you. When those assumptions fail, you get natural selection, not comparative advantage.
E.g., suppose there are AIs and robots that are as intelligent as Nobel Prize winners, that can work for (say) $1/hour, and that can be replicated at much lower cost than humans. Now imagine what our billionaire/political class would try to do with that, assuming they maintained any actual control and didn’t get outsmarted or brain-cooked by custom-targeted AI psychosis.
[3] Or, at best, wind up as pampered house pets. But whether you go the way of dogs or Homo erectus isn’t necessarily your choice anymore.