Feels like more shades of error vs. conflict theory. It doesn't matter whether the CEOs of AI companies are making mistakes if they're selected for being the sort of person who refuses to evaluate certain shapes of argument.
Nod, though I expect there's value in various flavors of politicians / public figures understanding these concepts, who can then put pressure on the CEOs.
(I dunno that that’s the top theory-of-change I’d be spending my effort on, but, if my assumption is right that this is more like “getting marginally more utils out of low-effort-side-project-time”, that’s not exactly a crux)