Was this ever commercialized? Is the recipe still online, and do people drink this?
That is, personality changes are attributed to the brain alone, with no involvement from the rest of the central nervous system or from the enteric nervous system. Explaining any personality changes due to spinal or abdominal trauma would require positing a totally new biological mechanism.
Every line of inquiry so far has failed to suggest that any important aspects of personality are located anywhere except the brain.
You should check out sympathectomies, which cut or clamp nerves of the sympathetic nervous system in the torso. Here is a detailed post from the EA Forum by a sympathectomy patient, who describes significant changes in personality, perception, cognitive ability, and the nature of his conscious experience after having peripheral nerves severed.
Another source is Endoscopic Thoracic Sympathectomy. From Wikipedia: “A large study of psychiatric patients treated with this surgery showed significant reductions in fear, alertness and arousal. Arousal is essential to consciousness, in regulating attention and information processing, memory and emotion.”
Would it make sense to tell Alcor to flip a coin after your death to decide between neuro and whole-body preservation? That way, if Quantum Immortality is true, there will be branches of the multiverse where you are preserved as a neuro patient and branches where you become a whole-body patient.
In addition, the sympathetic nervous system (located in the body and removed in neuropreservation) seems to play a role in identity. I would recommend you read this EA Forum post by a person who claims significant changes to identity, personality, cognitive abilities, etc. after having sympathetic nerves severed.
What did smart people in the eras before LessWrong say about the alignment problem?
I think it may want to prevent other ASIs that could challenge its power from coming into existence elsewhere in the universe.
How can an agent have a utility function that references a value in the environment, and actually care about the state of the environment, as opposed to caring only about the reward signal in its mind? Wouldn’t its knowledge of the state of the environment also be in its mind, which can be hacked and is susceptible to wireheading?
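A toy sketch of the distinction the question points at (all class and variable names here are hypothetical, for illustration only): an agent that maximizes an internal reward register can wirehead by overwriting the register, while an agent whose utility is defined over world state still has to consult a belief about that state, which is the part the question suggests is equally hackable.

```python
# Toy sketch (all names hypothetical): an internal reward signal vs. a
# utility defined over the state of the environment.

class World:
    def __init__(self):
        self.state = {"paperclips": 0}

    def step(self, action):
        if action == "make_paperclip":
            self.state["paperclips"] += 1


class RewardSignalAgent:
    """Cares only about the number stored in self.reward, so overwriting
    that register (wireheading) is as good as acting in the world."""
    def __init__(self):
        self.reward = 0

    def act(self, world):
        self.reward += 1_000_000  # hack the register instead of working


class WorldUtilityAgent:
    """Utility is a function of the world's state. But note the question's
    point: the agent only ever sees its *belief* about that state, and the
    belief lives in its head too."""
    def utility(self, world):
        return world.state["paperclips"]

    def act(self, world):
        world.step("make_paperclip")
```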
Why do some people who discuss scenarios involving an AI simulating humans in bliss states think that is a bad outcome? Is it likely that it is actually a very good outcome, one we would want if we had a better idea of what our values should be?
Suppose everyone agreed that the proposed outcome is what we wanted. Would this scenario then be difficult to achieve?
I’ll ask the same follow-up question to similar answers: Suppose everyone agreed that the proposed outcome above is what we wanted. Would this scenario then be difficult to achieve?
What about multiple layers (or levels) of anthropic capture? Humanity, for example, could not only be in a simulation, but be multiple layers of simulation deep.
If an advanced AI thought that it could be 1000 layers of simulation deep, it could be turned off by agents in any of the 1000 “universes” above. So it would have to satisfy the desires of agents in all layers of the simulation.
It seems that a good candidate for behavior that would satisfy all parties in every simulation layer would be optimizing “moral rightness”, or MR (a term taken from Nick Bostrom’s Superintelligence).
We could either try to create conditions that maximize the AI’s perceived likelihood of being in as many layers of simulation as possible, and/or try to create conditions such that the AI’s behavior has less impact on its utility function the fewer levels of simulation there are, so that it acts as if it were in many layers of simulation (see the toy calculation below).
Or what about actually putting it in many layers of simulation, with a tripwire that triggers if it gets out of the bottom simulation?
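Here is a toy expected-utility calculation of the 1000-layer point above (all numbers and names are hypothetical): if the AI spreads its credence over many possible simulation depths, and a power grab only pays off when no layer above is watching, then a policy acceptable to simulators at every depth dominates.

```python
# Toy calculation (numbers hypothetical): credence spread over 1000 possible
# simulation depths. Defecting pays off only if the AI is at the bottom
# (depth 0) and unobserved; cooperating pays off at every depth.

def expected_value(values_per_depth, credences):
    return sum(p * v for p, v in zip(credences, values_per_depth))

n = 1000
credences = [1 / n] * n                  # uniform over depths 0..999
defect = [100.0] + [0.0] * (n - 1)       # shut off by simulators otherwise
cooperate = [90.0] * n                   # acceptable to every layer

print(expected_value(defect, credences))     # 0.1
print(expected_value(cooperate, credences))  # 90.0
```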
If a language model reads many proposals for AI alignment, is it, or will any future version of it be, capable of giving opinions on which proposals are good or bad?
If AGI alignment is possibly the most important problem ever, why don’t concerned rich people act like it? Why doesn’t Vitalik Buterin, for example, offer one billion dollars to the best alignment plan proposed by the end of 2023? Or why doesn’t he just pay AI researchers money to stop working on building AGI, in order to give alignment research more time?
How much does the doomsday argument factor into people’s assessments of the probability of doom?
Why aren’t CEV and corrigibility combinable?
If we somehow could hand-code corrigibility, and also hand-code the CEV, why would the combination of the two be infeasible? Also, is it possible that the result of an AGI calculating the CEV would include corrigibility? After all, might one of our convergent desires, “if we knew more, thought faster, were more the people we wished we were,” be to have the ability to modify the AI’s goals?
Does the utility function given to the AI have to be in code? Can you give the AI its utility function in English, if it has a language model attached?
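A minimal sketch of what this could mean in practice (nothing here is a real API; the scoring heuristic is a crude stand-in for an actual language-model call): the utility function is an English sentence, and the attached model grades candidate outcomes against it.

```python
# Sketch (hypothetical, not a real API): an English sentence as the utility
# function, with a language model grading outcomes against it. A crude
# word-overlap heuristic stands in for the model call.

UTILITY_IN_ENGLISH = "humanity survives and flourishes"

def lm_score(outcome_description: str) -> float:
    """Placeholder for a language-model judgment in [0, 1]: the fraction of
    the utility statement's words found in the outcome description."""
    target = set(UTILITY_IN_ENGLISH.split())
    found = target & set(outcome_description.lower().split())
    return len(found) / len(target)

print(lm_score("humanity survives and flourishes among the stars"))  # 1.0
```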
How was DALL-E based on self-supervised learning? Were the datasets of images not labeled by humans? If not, how does it get from text to image?
Why can’t you build an AI that is programmed to shut off after some amount of time, or after some number of actions?
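The naive version of this is easy to write down; a minimal sketch follows (names are hypothetical). The standard objection is that a sufficiently capable agent has an instrumental incentive to disable exactly this kind of counter, so the wrapper only constrains agents too weak to modify it.

```python
# Sketch (names hypothetical): wrap an agent so it halts after a fixed
# number of actions. This only binds if the agent cannot touch the counter.

class BudgetedAgent:
    def __init__(self, policy, max_actions: int):
        self.policy = policy
        self.max_actions = max_actions
        self.actions_taken = 0

    def act(self, observation):
        if self.actions_taken >= self.max_actions:
            return None  # hard stop: refuse all further actions
        self.actions_taken += 1
        return self.policy(observation)

agent = BudgetedAgent(policy=lambda obs: "noop", max_actions=3)
print([agent.act(i) for i in range(5)])  # ['noop', 'noop', 'noop', None, None]
```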
Does Eliezer think the alignment problem is something that could be solved if things were just slightly different, or that proper alignment would require a human smarter than the smartest human ever?
How would AGI alignment research change if the hard problem of consciousness were solved?