My comments on this topic have been poorly received. I think most people are pretty much immune to the emotional impact of AI hell as long as it isn’t affecting someone in their ‘monkeysphere’ (community of relationships capped by Dunbar’s number).
The popular LW answer seems to be the top comment from Robin Hanson to my post here: https://www.lesswrong.com/posts/BSo7PLHQhLWbobvet/unethical-human-behavior-incentivised-by-existence-of-agi
My other more recent comment: https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/?commentId=rWePAitP2syueDf25
Arguably, if you’re concerned about s-risk, you should be theorizing about ways of controlling access to em data. That means taking an interest in better digital rights management (DRM) technology, which is seen as ‘the enemy’ in a lot of tech/open-source-adjacent communities, and in developing technology for guaranteed secure deletion of the data from which a human consciousness could be reconstructed.
If it were possible to emulate a human and place them into AI hell, I am absolutely certain that the US government would find a way to use it for both interrogation and incarceration.
That sounds promising, actually… It has become acceptable over the past decade to suggest that some things ought not to be open-sourced; maybe it can become acceptable to argue for DRM for certain things too. Since we don’t yet have brain-scanning technology, I’d also be interested in an inverse-cryonics organization with all the expertise needed to make absolutely sure that your brain, and perhaps much of your social media activity and other digital traces, really is destroyed after your death. (Perhaps even some sort of mechanism by which suicide and complete data scrambling are triggered automatically the second humanity loses control, but that seems infeasibly risky and hard to construct.)
To clarify, I don’t believe in identity, so this does not actually do much useful work directly. But it could find demand, and it could push the Overton window open a bit, allowing more discussion of how we really want to protect em-relevant data at scale. It’s probably all too slow, though.
For a suicide switch: a purpose-built shaped charge mounted to the back of your skull (a properly engineered detonation wave would reliably pulp the brain, perhaps even without much danger to people nearby), a Raspberry Pi on your belt with a preinstalled ‘delete it all and detonate’ script, and a secondary dead man’s script that executes automatically if it loses contact with you for a set period of time.
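The dead man’s side of that is just a heartbeat timeout. Here is a minimal Python sketch of that one component, with the deletion left as a stub and everything to do with the charge omitted; the parameter values and the wipe_all_data() routine are illustrative assumptions, and the check-ins might come from, say, a phone pinging the Pi:

```python
import time

# Illustrative assumptions, not anything specified above.
HEARTBEAT_TIMEOUT_S = 6 * 60 * 60   # fire after six hours of silence
POLL_INTERVAL_S = 60                # how often to re-check

last_heartbeat = time.monotonic()

def record_heartbeat() -> None:
    """Called whenever a check-in arrives (e.g. from the owner's phone)."""
    global last_heartbeat
    last_heartbeat = time.monotonic()

def wipe_all_data() -> None:
    """Placeholder for the 'delete it all' step: overwrite keys, local
    storage, cached credentials, etc. Deliberately left as a stub."""
    print("heartbeat lost -- secure wipe would run here")

def watch() -> None:
    """Dead man's switch loop: trigger the wipe if no heartbeat arrives
    within the timeout window."""
    while True:
        silence = time.monotonic() - last_heartbeat
        if silence > HEARTBEAT_TIMEOUT_S:
            wipe_all_data()
            break
        time.sleep(POLL_INTERVAL_S)

if __name__ == "__main__":
    watch()
```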
That’s probably overengineered, though. Just request cremation with no scan, and keep as much of your social life as possible in encrypted chat. When you die, the passwords are gone.
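The property that makes this work: if the chat history is encrypted under a key derived from a passphrase that exists only in your head, destroying the passphrase is computationally equivalent to destroying the plaintext. A sketch using the Python cryptography library’s documented passphrase-to-Fernet-key recipe; the passphrase and message are placeholders:

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    """Derive a 32-byte Fernet key from a passphrase via PBKDF2-HMAC-SHA256."""
    kdf = PBKDF2HMAC(
        algorithm=hashes.SHA256(),
        length=32,
        salt=salt,
        iterations=1_200_000,  # deliberately slow to resist brute force
    )
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

salt = os.urandom(16)                      # the salt can be stored in the clear
key = key_from_passphrase("placeholder passphrase", salt)
token = Fernet(key).encrypt(b"chat history goes here")

# The ciphertext and salt alone are useless: recovering the plaintext
# requires re-deriving the key, which requires the passphrase. If the
# passphrase dies with its owner, so does the message.
print(Fernet(key).decrypt(token))
```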
When the tech gets closer and there are fears about wishes for cremation not being honored, EAs should pool their funds to buy a funeral home and provide honest services.