Suppose someone works for Anthropic, shares the value its Core Views on AI Safety (March 2023) places on empiricism, and gives any weight to the idea that we are in the pessimistic scenario from that document.
I think such a person can reasonably sign the statement while declining to assign themselves exclusively to either camp.
I pitched my tent as a Pause AI member, and I guess camp B has formed nearby. But I also have empathy for the alternate version of me who weighs the trade-offs differently and has ended up as described above, with a camp A zip code.
The A/B framing has value, but I strongly want to cooperate with that person rather than sit in separate camps.