LessWrong should add a “confirms my biases” reaction: to put under stuff where idk if it’s especially objectively true or good, but I love hearing that shit
Warty
- I liked the dashed-words in planecrash, but many felt jarring here: “work-site”, “processor-ticks”. Why do we have to know these are single words in robot language, but not e.g. “space station”? 
- sounds miserable, have they looked into not being violinists? 
- she “disagrees with catastrophic framings of AI risk.” - that’s not very consistent with my understanding of the words “endorsed IABIED” from OP 
- Feels true to me, but what’s the distinction between theoretical and non-theoretical arguments? 
 Consider the mythological case of the calculation of whether the atomic bomb would ignite the atmosphere. I guess the concern guided the “policy” to perform the calculation. And if it had come out as a 50% chance of omnicide, atomic bombs would have been prevented, despite the lack of a spectacular warning shot.
 Policy has also sometimes been guided by arguments with little related math, for example the MAKING FEDERAL ARCHITECTURE BEAUTIFUL AGAIN executive order.
 Maybe the problem with AI existential risk arguments is that they’re not very convincing.
- Common rationalist take is that people used to really believe their religions (and now it’s fake). - Somehow I can’t help doubting that 1st-century people unironically believed the Bethlehem census story. They would have been familiar with state logistics of the time! - They must’ve been like yeah we made it up for Messiah lore lmao 
- This seems pretty useless. - Step 1 won’t work for distinguishing good from bad; current tech is not capable of this. (I guessed that it would be biased to be negative since it’s kinda suggested in the prompt, but from other comments it seems it still glazes.) - As pointed out by other comments, step 2 would reject much/most real knowledge progress. 
- “Support” is kind of weak. To make it like the CAIS statement, maybe “I largely agree with IABIED”. Or “I ~agree with IABIED”. Or “I agree* with IABIED [bring your own footnotes]” 🙂 
 “Statement of ~agreement with IABIED”
- I never got that one: is deciding to smoke much of an update after you’ve already detected an urge to smoke? EDT looks simpler so it should be correct 
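 A toy Bayes check of that intuition (all numbers made up): if the lesion causes the urge, and the decision to smoke depends only on the urge, then the decision is screened off and adds no update beyond the urge itself.

```python
# Smoking lesion, toy numbers (made up). The lesion causes both the urge
# and cancer; smoking itself is causally harmless.
p_lesion = 0.1
p_urge_given_lesion = 0.9
p_urge_given_no_lesion = 0.2

# P(lesion | urge) by Bayes.
p_urge = (p_urge_given_lesion * p_lesion
          + p_urge_given_no_lesion * (1 - p_lesion))
p_lesion_given_urge = p_urge_given_lesion * p_lesion / p_urge

# Suppose the decision depends only on the urge, not on the lesion directly:
# P(smoke | urge, lesion) = P(smoke | urge, no lesion) = p_smoke_given_urge.
p_smoke_given_urge = 0.7
p_lesion_given_urge_and_smoke = (p_smoke_given_urge * p_lesion_given_urge
                                 / p_smoke_given_urge)  # the terms cancel

print(f"P(lesion | urge)        = {p_lesion_given_urge:.3f}")  # 0.333
print(f"P(lesion | urge, smoke) = {p_lesion_given_urge_and_smoke:.3f}")  # 0.333
```

 So on these assumptions the update from the decision itself is empty, which is the usual “tickle defence” reading.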
- real, thanks. both look pretty garbage I guess no choice but to drink it 
- I started using Forfeit from this post, and it worked amazingly for me, but now it’s turned into subscription AI slop. Is there an alternative? 
- I used that once and it didn’t work, aligned-by-default universe 
- that’s a trick to make me be like them! 
 (I listened to some of that Michael Huemer talk and it seemed pretty dumb)
- I would say it’s perhaps indicative of a problem with academic philosophy. Unless that 62% is mostly moral corporalists, in which case it’s fine by me if they insist that “some moral propositions are objectively true or false”, I guess. 
- I don’t recall saying that recently, though it’s true. I don’t know what you’re getting at. 
- I am making guesses about what you might be saying, because you are being unclear. - I was responding to your correction of my definition of moral realism. I somewhat jokingly expressed shame for defining it idiosyncratically. - Well, it doesn’t, and research will tell you that. - It can still be true of my impressions of it, like every time I saw someone arguing for moral realism. - Which debate? - I think it was this one; regretfully, I’m being forced to embed it in my reply. 
- Hmm yeah, gameability might not be as interesting a property of metrics as I’ve made it out to be. 
 (though I still feel there is something in there. Fixing your calibration chart after the fact by predicting one-sided coins/dice is maybe a lot like taking a foot off the bathroom scale. But, for example, predicting every event as a constant p%: is that even cheating in the calibration game? A quick simulation of that case below. Though neither of these directly applies to the case of prediction market platforms)
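 A simulated sketch of the constant-p case (toy data, nothing from any real platform): always predicting the base rate comes out perfectly calibrated while saying nothing about individual events, so calibration alone can’t catch it; it only loses on resolution.

```python
import random

random.seed(0)

# Toy world: every event independently happens with probability 0.6,
# and the forecaster always predicts exactly 0.6.
base_rate = 0.6
n = 100_000
outcomes = [random.random() < base_rate for _ in range(n)]

# Calibration check for the single 0.6 bucket: of the events predicted
# at 60%, what fraction actually happened?
observed = sum(outcomes) / n
print(f"predicted 0.600, observed {observed:.3f}")  # ~0.600: perfectly calibrated

# These forecasts carry zero information about WHICH events occur, so a
# proper scoring rule (e.g. Brier) would still prefer a skilled forecaster;
# calibration alone just can't tell them apart.
```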
- Most shameful of me to use someone’s term and define it as my beef with them. In my impression, moral realism has always also involved moral non-corporalism, if you will. As long as morality is safely stored in animal bodies, I’m fine with that. 
 The one in the YouTube debate identified as a moral non-realist. But you see, his approach to the subject was different from mine, and that is a problem.
 I think there more or less is a rationalist-lesswrongist view of what morality is, shared by most though not all rationalists (I wanted to say it’s explained in the Sequences, but suspiciously I can’t find it in there).
- has anyone looked into the “philosophers believe in moral realism” problem? (in the sense that morality is not physically contained in animal bodies and human-created artifacts) - I saw a debate on YouTube with the Michael Huemer guy, but it was with another academic philosopher. Was there ever an exchange recorded between a moral realist philosopher and a rationalist-lesswrongist? 
as a gaygp victim thank you for your service