yes. to be honest, although i would love to have the OH recognised as untenable, or at least unlikely, within the LW ontology (or, alternatively, have someone convince me of the contrary), the realistic goal of this, of the parable i published on my newsletter, and of my tweetstorms on the matter is to show brilliant, high-systematising, starry-eyed autists with an interest in AI that the doomer orthodoxy isn't the only system befitting their aesthetics and taste for clockwork-like models, and that it might actually leave something to be desired in that respect.
the main reason is that i do not think such a system truthful, and the recent lapses in epistemic virtue (even from an ingroup-aligned viewpoint) were cause for concern about the quality of discourse in the coming months.
mostly, i think intelligence always ultimately wins, and i would rather mankind become aligned to this simple fact instead of forcing the hand of fate by filing for incorporation as Cyberdyne or TriOptimum.