Moreover, this Semiotic–Simulation Theory has increased my credence in the absurd science-fiction tropes that the AI Alignment community has tended to reject, and thereby increased my credence in s-risks.
The potential consequences of this are harrowing. If there is a conceivable path to s-risk here, it feels strange how little serious attention it receives. Is there a reason the alignment community seems almost indifferent?