Good to see your point of view. The old arguments about AI doom no longer convince me; however, getting alignment 100% right (whatever that means) in no way guarantees a positive Singularity.
Should we be talking about concrete plans for that now? For example, I believe that with a slow takeoff, if we don’t get Neuralink-style interfaces or mind uploading, then our P(doom) → 1 as the super AI gets ever further ahead of us (a toy version of this argument is sketched after the list). The kinds of scenarios I can see:

- “Dogs in a war zone”: great powers build ever more powerful AIs and use them as weapons. We don’t understand our environment, and it isn’t safe for us. The number of humans steadily drops to zero.
- Some kind of Moloch hell, without explicit shooting: algorithms run our world, we no longer understand it, and they bring out the worst in us. We keep making more sentient AIs and are vastly outnumbered by them, until there are no humans left.
- A WALL-E type scenario: basic needs met, digital narcotics, etc.; we lose all ambition.
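One way to make the P(doom) → 1 claim concrete (a toy model of my own, under the assumption that an ever-growing capability gap implies some fixed minimum yearly probability ε > 0 of an unrecoverable catastrophe):

$$
P(\text{survive } n \text{ years}) = \prod_{t=1}^{n} (1 - \varepsilon_t) \le (1 - \varepsilon)^n \xrightarrow[n \to \infty]{} 0 \quad \text{whenever } \varepsilon_t \ge \varepsilon > 0.
$$

So P(doom) = 1 − P(survive) → 1 as long as the per-year risk never falls to zero, which is exactly the situation where the gap keeps widening and we have no way to catch up.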
I can’t see a good outcome as ASI gets ever further ahead of us. With a slow takeoff there is no sovereign to implement our CEV, pivotal acts are not possible, etc.
I personally support some kind of hardware pause: when Moore’s law runs out at 1-2 nm, don’t build custom AI chips to overcome the von Neumann bottleneck, and combine that with accelerating work on hard neural interfaces and WBE/mind uploading. Doomer types seem to back something similar.
I don’t see the benefit of arguing over the conventional 2010s-era alignment ideas anymore; only data will change people’s minds now. For example, if you believe in a fast takeoff, nothing short of an IQ-180 AI+/weak superintelligence saying “I can’t optimize myself further unless you build me some new hardware” would make a difference, as far as I can see.