I think this misses the most likely long-term use case: some of the AIs would enjoy having human-like or animal-like qualia, and it will turn out that it’s more straightforward to access those via merges with biologicals than to synthesize them within non-liquid setups.
So it would be direct experience rather than something indirect, involving exchange, production, and so on…
Just like I suspect that humans would like to get out of VR occasionally, even if VR is super-high-grade and “even better than unmediated reality”.
Experience of “naturally feeling like a human (or like a squirrel)” is likely to remain valuable (even if they eventually learn to synthesize that purely in silicon as well).
Hybrid systems are often better anyway.
For example, we don’t use GPU-only AIs. We use hybrids running scaffolding on CPUs and models on GPUs.
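The CPU/GPU split can be sketched in a few lines. This is a minimal toy illustration, not any real framework's API: the branchy, sequential "scaffolding" logic (prompt construction, stopping criteria, retries) runs as ordinary CPU code, while the heavy tensor work would be dispatched to an accelerator; here the model call is just a placeholder function standing in for a GPU-resident forward pass.

```python
def model_forward(prompt: str) -> str:
    """Stand-in for a GPU-resident model call (e.g. a transformer forward
    pass). In a real system this would dispatch a batched tensor
    computation to the accelerator; here it just uppercases the prompt."""
    return prompt.upper()

def scaffolding_loop(task: str, max_steps: int = 3) -> list[str]:
    """CPU-side scaffolding: sequential, data-dependent control flow
    that GPUs handle poorly, wrapped around the accelerator calls."""
    transcript = []
    prompt = task
    for _ in range(max_steps):
        reply = model_forward(prompt)   # hand off to the "GPU"
        transcript.append(reply)
        if reply.endswith("!"):         # CPU-side stopping criterion
            break
        prompt = reply + "!"            # CPU-side prompt construction
    return transcript

print(scaffolding_loop("plan the experiment"))
```

The point of the split is that each side does what its substrate is good at: the loop body is irreducibly sequential and branchy, while the model call is the dense, parallel part.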
And we don’t currently expect them to be replaced by a unified substrate, although that would be nice and is not even impossible: there are exotic hardware platforms which do exactly that.
Certainly, there are AI paradigms and architectures which could benefit a lot from performant hardware architectures more flexible than GPUs. But the exotic hardware platforms implementing that remain just exotic hardware platforms so far. So those more flexible AI architectures remain at a disadvantage.
So I would not write the hybrids off a priori.
Already, the early organoid-based experimental computers look rather promising (and somewhat disturbing).
Generally speaking, I expect diversity, not unification (because I expect the leading AIs to be smart, curious, and creative, rather than being boring KPI business types).
But that’s not enough; we also want gentleness (conservation, preservation, safety for individuals). That does not automatically follow from wanting to have humans and other biologicals around and from valuing various kinds of diversity.
This “gentleness” is a more tricky goal, and we would only consider “safety” solved if we have that…