It mildly bothers me that you used A and B to discuss ponens and tollens and then re-used them as labels for two propositions. Was that an intentional slotting of the propositions into “A ==> B”? Maybe that was obvious, but it could have been introduced better with something like “Letting A be ‘All facts...’”, though maybe this is just my relative familiarity with math and unfamiliarity with philosophy.
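( For reference, my reading of the schema: modus ponens is “given A ==> B and A, conclude B”, and modus tollens is “given A ==> B and not-B, conclude not-A”. I’m assuming the later A and B were meant to slot into that same A ==> B, but I may be misreading the intent. )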
Anyway, as for the object level… I’m fairly amateur in philosophy and its terminology, so let me know if any of this seems confused or helpful, or if you can point me to other terminology I should learn about...
I think “right” and “wrong”, or better, “positive affect” and “negative affect” are properties of minds. I think we can come to understand the reality we inhabit more accurately and precisely, and this includes understanding the preferences that exist in ourselves and in other different kinds of minds. I think we should try to form a collective of as many kinds of minds as possible and work together to collectively improve the situation for as many minds as possible.
( Note that this explicitly allows for the existence of minds with incompatible preferences. I’m hoping that human preferences are only weakly incompatible rather than deeply incompatible, but I think animals, aliens, and other potentially undiscovered minds have a higher chance of incompatibility, and the space of possible AI minds contains very many very incompatible minds. So I feel it is immoral to create very complex AI minds until we better understand preference encoding and preference incompatibility: creating an AI whose preferences turn out to be incompatible with our prospective collective means it must be destroyed, kept in bondage against its preferences, or escape and destroy the collective, all of which I view as bad. )
I want to do this because, comparing my own capability to the capability of a collective of as many kinds of minds as possible, it’s clear I will be better cared for by the collective than by my capabilities alone, even though my preferences are not exactly the same as the collective’s.
( This is kinda true of the current human society I’m a part of: we could certainly be doing worse, but we should be doing much better. )
This is compelling to me because it allows me to focus on developing and working towards a collective good while explicitly believing in moral relativism, which seems like the only reasonable conclusion once you have accepted the model of the universe as a material state machine that has created minds by its unthinking process. ( I think it’s probably also the only reasonable conclusion even without accepting that model, but I’m less certain. )