You are probably counting more of the properties things can vary under as “ontological”. I’m mostly going by software vs. hardware, needs to be puppeteered vs. automatic, and able to interact with the environment vs. stuck in a simulation, here.
I’m basing the moral status largely on “well realized”, “complex” and “technically sentient” here. You’ll notice all my examples ALSO have the actual utility function multiplier at “unknown”.
Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host’s, and thus counts towards its power over reality.
Ah. I see what you mean. That makes sense.