Still better than where we seem to be headed.
andrew sauer
If anything, it seems like higher-dimensional cubes are spiky, not spheres. At the vertex of a square, the figure takes up 1⁄4 of the local area around the vertex; for a cube vertex it’s only 1⁄8 of the space, for a tesseract only 1⁄16, in 10 dimensions only 1/1024, and in general 1/2ⁿ in n dimensions.
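The counting above can be sketched in a few lines, assuming the fraction at a vertex is one orthant out of the 2ⁿ orthants meeting there (the quadrant/octant pattern generalized):

```python
from fractions import Fraction

# Fraction of the local neighborhood around an n-cube vertex
# that lies inside the cube: one orthant out of 2**n.
def vertex_fraction(n: int) -> Fraction:
    return Fraction(1, 2**n)

for n in (2, 3, 4, 10):
    print(n, vertex_fraction(n))  # 1/4, 1/8, 1/16, 1/1024
```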
Personally, I did my share of torturing minecraft villagers in creative mode, so maybe I’m projecting to an extent, except that there absolutely are lots more people like me who can’t be trusted with anywhere near that level of power over actual people.
Morality is scary......
Damn straight. People need to understand the implications of this shit. “Oh let’s hope the separate caste which controls the entire universe and which we can’t hope to contest in any possible way is nice to us!!!”
Open. A. History book.
Your scenario is relatively low on the awfulness scale, even.
We look down on peasants for burning cats today, but the tragic irony is that their society was far better overall on animal welfare than ours in the modern day, though for practical reasons rather than moral ones.
I also like math and computer science and got a degree but unfortunately couldn’t find any careers there 🙁
This is an embarrassingly large part of the reason I was considered good at school. I did generally learn the material, though I forgot much of it afterward :(
More like Lovecraftian space opera a la 40k. The individual eukaryote has no real control over their fate, millions can live or die for purposes far beyond their understanding yet trivial in the grand scheme of things. For the individual cell it isn’t meaningfully better now than it ever has been. Also I kind of sadly LOL @ “never feeling the threat of drought or lack of ATP”.
You’re also tying it to your very specific ideas of what is virtuous. You point out yourself that most people do not share your attitude to the suffering of lesser creatures. If they did, it would not be necessary to persuade them to. Personally, I’m quite lackadaisical about animal suffering, but then who decides? Someone whose idea of supreme virtue was the creation of great art might suppose that we must build ASI to be appreciative of great art, that it may spare us.
You’re acting as though the attitude towards the suffering of lesser creatures is a completely arbitrary and random selection which can be replaced by any other consideration with my argument unchanged, and therefore I prove too much.
But if AI takes over, then WE are the lesser creatures, so we should perhaps be expected to be treated however the AI thinks lesser creatures should be treated. There is no similar reason to worry quite that much about if the AI values art or enlightenment or whatever.
The fundamental problem is to make something whose good graces we are not dependent on at all.
If it has godlike power, then that is just impossible. Then we are utterly dependent on what it wants for us.
In your final paragraph you pray for the AI God to exterminate us all for being unworthy of it.
I think that’s a false characterization. I’m saying “because if it doesn’t do that, I expect it to do much, MUCH worse.” It’s not about justice or revenge for any sins. I don’t believe in retributive justice at all.
If you insist on putting it in religious terms, it’s more like I hope God doesn’t care about us at all and just destroys us out of apathy rather than any sort of moral judgement, because if a few of us unworthy people create God to fit their desires, I expect the outcome to be worse than that.
Do we actually disagree? You’re saying being virtuous isn’t enough, you also need to solve an extremely difficult implementation problem, which I agree with.
I’m saying the extremely difficult implementation problem isn’t enough, we also need to be virtuous.
By the symmetry of logical AND, isn’t that equivalent?
The other thing I’m saying is that, if we are to fail by solving one of these problems and not the other, I’d far rather it’s not just technical alignment we manage: the results are worse than paperclips.
I don’t believe that “this high standard”, or any other, is even relevant to what the God we might create would be like. If anyone builds it, everyone dies, to coin a phrase.
If we don’t solve alignment, I agree. You just get some random optimizer.
“Building it right” is what I’m concerned about in this post. By whose lights is it built right? Which is why I honestly sort of hope you’re right about the mathematical impossibility.
Perhaps the whole idea of “don’t create more just to torture them” isn’t enough to keep us around; that would mean generalized veganism in my sense is not sufficient, but it would still be necessary.
I’m not talking about an acausal deal or something where the AI judges our moral system and treats us accordingly. I mean that the AI is aligned to the moral system of its powerful masters, which I think will see too little problem with tormenting us for much the same reason most people see too little problem with tormenting animals: no respect for sentients not powerful enough to enter the social contract.
Also in the limit of extreme computational power and simulation capacity, I would also start worrying about the proverbial “video game characters” of the future, too. Which is why the veganism needs to be generalized, it’s not just about animals, or even just about powerless humans or even only beings that exist right now: post-singularity you’ll be able to just tailor-make more beings for only God knows what purpose.
How about calling it a “low minimum viable criterion” then? Maybe it’s just me, but I approach risk from the worst case up: so I’m focused first on making sure we aren’t endlessly tortured by the whims of some psychopathic power, and then after that on making sure we don’t go extinct.
Such an ASI would be aware that Homo sapiens are omnivores, not innately vegan, and would not expect us to act with as much beneficence towards all other humans as it does, let alone act that way to all other animals.
Sure. My point isn’t that the AI will read this post and “judge” us based on how “good” we are. It’s that, at the end of the day it’s us creating the AI, and if we solve technical alignment we infuse our values, or more accurately, the values of whoever has the power to influence the process.
So it’s not that an AI couldn’t decide “I’m just going to do the best for humans, and all human-level entities, regardless of how they treat each other or lower life-forms”. Anything is possible. The question is: what is likely to actually get built if the powerful get their hands on such a technology?
It’s not that “us being super good magically makes God good”. It’s that that’s what it would take for it to seem feasible to me that us summoning God would have a good outcome, even if we solve all the technical problems of alignment everyone here is trying to solve.
I don’t trust humanity to create God, and this high standard is roughly what it would take for me to consider giving that trust. Do you disagree with that?
Veganism is Necessary
Better that way. I wouldn’t want to give every person the ability to create a real person from spec, would you?
...Programming?