I’m a Postdoctoral Research Fellow at Oxford University’s Global Priorities Institute.
Previously, I was a Philosophy Fellow at the Center for AI Safety.
So far, my work has mostly been about the moral importance of future generations. Going forward, it will mostly be about AI.
You can email me at elliott.thornley@philosophy.ox.ac.uk.
Nice post! I share your meta-ethical stance, but I don’t think you should call it ‘moral quasi-realism’. ‘Quasi-realism’ already names a position in meta-ethics, and it’s a different position from the one you describe.
Very roughly, quasi-realism agrees with anti-realism in stating:
But, in contrast to anti-realism, quasi-realism also states:
The conjunction of (1)-(3) defines quasi-realism.
What you call ‘quasi-realism’ might be compatible with (2) and (3), but its defining features seem to be (1) plus something like:
(1) plus (4) could point you towards two different positions in meta-ethics. It depends on whether you think it’s appropriate to describe the principles we’d embrace if we were more thoughtful, etc., as true.
If you think it is appropriate to describe these principles as true, then that counts as an ideal observer theory.
If you think it isn’t appropriate to describe these principles as true, then your position is just anti-realism plus the claim that you do in fact try to abide by the principles that you’d embrace if you were more thoughtful, etc.