Philosophy graduate interested in metaphysics, meta-ethics, AI safety, and a whole bunch of other things. Meta-ethical and moral theories of choice: neo-Aristotelian naturalist realism + virtue ethics.
Unvarnished critical (but constructive) feedback is welcome.
[Out-of-date-but-still-sorta-representative-of-my-thoughts hot takes below]
Thinks longtermism rests on a false premise – some sort of total impartiality.
Thinks we should spend a lot more resources trying to delay HLMI – make AGI development uncool. Questions what we really need AGI for anyway. Accepts the epithet “luddite” so long as this is understood to describe someone who:
suspects that, on net, technological progress yields diminishing returns in human flourishing.
OR believes workers have a right to organize to defend their interests (you know – what the original Luddites were doing). Fighting to uphold higher working standards is to be on the front lines of the fight against Moloch (see e.g. Fleming’s vanishing economy dilemma and how decreased working hours offer a simple solution).
OR suspects that, with regard to AI, the Luddite fallacy may not be a fallacy: AI really could lead to widespread, permanent technological unemployment, and that might not be a good thing.
OR considering the common-sense thought that societies have a maximum rate of adaptation, suspects excessive rates of technological change can lead to harms, independent of how the technology is used. (This thought is more speculative/less researched – I’d love to hear evidence for or against.)
I’m not downvoting or upvoting, but I will say: I hope you’re not taking this exercise too seriously...
Are we really going to analyze one person’s fiction (even if rationalist, it’s still fiction) in an attempt to gain insight into that one person’s attempt to model an entire society and its market predictions – and all of this in order to better judge the probability of certain futures under a number of counterfactual assumptions? Could be fun, but I wouldn’t give the results much credence.
Don’t forget Yudkowsky’s own advice about not generalizing from fictional evidence and being wary of anchoring. If I had to guess, some of his use of fiction is just an attempt to provide alternative framings and anchors to those thrust on us by popular media (mainstream TV shows, movies, etc.). That doesn’t mean we should hang on his every word, though.