I think I need more practice talking with people in real time (about intellectual topics). (I’ve gotten much more used to text chat/comments, which I like because it puts less time pressure on me to think and respond quickly, but I feel like I now incur a large cost due to excessively shying away from talking to people, hence the desire for practice.) If anyone wants to have a voice chat with me about a topic that I’m interested in (see my recent post/comment history to get a sense), please contact me via PM.
Wei Dai
So like, do you distrust writers using Substack? Because Substack writers can just ban people from commenting. Or more concretely, do you distrust Scott to garden his own space on ACX?
It’s normally out of my mind, but whenever I’m reminded of it, I’m like “damn, I wonder how many mistaken articles I read and didn’t realize it, because the author banned or discouraged their best (would-be) critics.” (Substack has other problems, though, like the lack of karma, which makes it hard to find good comments anyway; I’d want to fix that first.)
Giving authors the ability to ban people they don’t want commenting is so common that it feels like a Chesterton’s Fence to me.
It could also just be a race to the bottom to appeal to unhealthy motivations, kind of like YouTube creating Shorts to compete with TikTok.
Comment threads are conversations! If you have one person in a conversation who can’t see other participants, everything gets confusing and weird.
The additional confusion seems pretty minimal, if the muted comments are clearly marked so others are aware that the author can’t see them. (Compare to the baseline confusion where I’m already pretty unsure who has read which other comments.)
I just don’t get how this is worse than making it so that certain perspectives are completely missing from the comments.
I really don’t get the psychology of people who won’t use a site without being able to unilaterally ban people (or rather I can only think of uncharitable hypotheses). Why can’t they just ignore those they don’t want to engage with, maybe with the help of a mute or ignore feature (which can also mark the ignored comments/threads in some way to notify others)?
Gemini Pro’s verdict on my feature idea (after asking it to be less fawning): The refined “Mute-and-Flag” system is a functional alternative, as it solves the author’s personal need to be shielded from unwanted interactions and notifications.
The continued preference for a unilateral block, then, is not driven by a personal requirement that the Mute-and-Flag system fails to meet. Instead, it stems from a differing philosophy about an author’s role and responsibilities for the space they create. The conflict centers on whether an author is simply a participant who can disengage personally, or if they are the primary curator of the conversational environment they initiate.
An author who prefers a block is often motivated by this latter role. They may want to actively “garden” the discussion to maintain quality for all readers, prevent reputational damage by proxy, or because they lack confidence in the community’s ability to effectively moderate a disruptive user, even with flags.
Ultimately, the choice between these systems reflects a platform’s core design trade-off. A Mute-and-Flag system prioritizes public transparency and community-led moderation. A Unilateral Block system prioritizes authorial control and the ability to directly shape a discussion environment, while accepting the inherent risks of censorship and abuse.
My response to this is that I don’t trust people to garden their own space, along with other reasons to dislike the ban system. I’m not going to leave LW over it, though; I’ll just be annoyed and disappointed at humanity whenever I’m reminded of it.
It’s a localized silencing, which discourages criticism (beyond just the banned critic) and makes remaining criticism harder to find, and yes makes it harder to tell that the author is ignoring critics. If it’s not effective at discouraging or hiding criticism, then how can it have any perceived benefits for the author? It’s gotta have some kind of substantive effect, right? See also this.
I think giving people the right and responsibility to unilaterally ban commenters on their posts is demanding too much of their rationality, forcing them to make evaluations when they’re among the most likely to be biased, and tempting them with the power to silence their harshest or most effective critics. I personally don’t trust myself to do this, have basically committed to not banning anyone or even deleting any comments that aren’t obvious spam, and kind of don’t trust others who would trust themselves to do this.
My argument is roughly that religions uniquely provide a source of meaning, community, and life guidance not available elsewhere
Why is it good to obtain a source of meaning, if it is not based on sound epistemic foundations? Is obtaining an arbitrary “meaning” better than living without one or going with an “interim meaning of life” like “maximize option value while looking for a philosophically sound source of normativity”?
Thanks for letting me know. Is there anything on my list that you don’t think is a good idea or probably won’t implement, in which case I might start working on it myself, e.g. as a userscript? Especially #5, which is also useful for other reasons, like archiving and searching.
What do people think about having more AI features on LW? (Any existing plans for this?) For example:
AI summary of a poster’s profile that answers “what should I know about this person before I reply to them?”, including things like their background, positions on major LW-relevant issues, distinctive ideas, etc., extracted from their post/comment history and/or bio links.
“Explain this passage/comment” based on context and related posts, similar to X’s “explain this tweet” feature, which I’ve often found useful.
“Critique this draft post/comment.” Am I making any obvious mistakes or clearly misunderstanding something? (I’ve been doing a lot of this manually, using AI chatbots.)
“What might X think about this?”
Have a way to quickly copy all of someone’s posts/comments into the clipboard, or download them as a file (to paste into an external AI); a rough sketch of one way to do this is below.
I’ve been thinking about doing some of this myself (e.g., update my old script for loading all of someone’s post/comment history into one page), but of course would like to see official implementations, if that seems like a good idea.
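For #5, here is a userscript-style sketch. It assumes LessWrong’s public GraphQL endpoint at /graphql; the exact query shape and field names (the “userComments” view, contents.markdown, etc.) are my guesses at the schema rather than a documented contract, so treat this as a starting point, not a working client.

```typescript
// Hypothetical sketch, not an official client: fetch a user's recent comments
// from LessWrong's public GraphQL endpoint and join them into one text blob
// (e.g., to copy into an external AI). The query shape and field names below
// are assumptions about the schema, not a documented contract.

const GRAPHQL_URL = "https://www.lesswrong.com/graphql";

interface CommentResult {
  postedAt: string;
  contents: { markdown: string } | null;
}

async function fetchUserComments(userId: string, limit = 500): Promise<string> {
  const query = `
    query UserComments($userId: String, $limit: Int) {
      comments(input: { terms: { view: "userComments", userId: $userId, limit: $limit } }) {
        results {
          postedAt
          contents { markdown }
        }
      }
    }`;

  const resp = await fetch(GRAPHQL_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { userId, limit } }),
  });
  const json = await resp.json();
  const results: CommentResult[] = json.data.comments.results;

  // Concatenate into a single document, separated by horizontal rules.
  return results
    .map((c) => `[${c.postedAt}]\n${c.contents?.markdown ?? ""}`)
    .join("\n\n---\n\n");
}

// In a userscript, the result could go straight to the clipboard:
//   navigator.clipboard.writeText(await fetchUserComments("<someUserId>"));
// or be offered as a file download via a Blob and a temporary <a download> link.
```

Posts could presumably be pulled the same way with a posts query, and pagination would be needed for prolific users.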
This contradicts my position in Some Thoughts on Metaphilosophy. What about that post do you find unconvincing, or what is your own argument for “philosophy being insoluble”?
I’m not saying that my assessment of it is inarguably correct (indeed, given that mainstream philosophy isn’t seriously discredited yet, reasonable people clearly can disagree), but if your conclusions are different, I’d like to know why.
It’s mainly because when I’m (seemingly) making philosophical progress myself, e.g., this and this, or when I see other people making apparent philosophical progress, it looks more like “doing what most philosophers do” than “getting feedback from reality”.
Perhaps more seriously, the philosophers who got a temporary manpower and influence boost from the invention of math and science should have worked much harder to solve metaphilosophy, while they had the advantage.
It seems to me that values have been a main focus of philosophy for a long time, with moral philosophy (or perhaps meta-ethics if the topic is “what values are”) devoted to it and discussed frequently both in academia and out, whereas metaphilosophy has received much less attention. This implies that we know progress on understanding values is probably pretty hard on the current margins, whereas there’s a lot more uncertainty about the difficulty of metaphilosophy. Solving the latter would also be of greater utility, since it makes solving all other philosophical problems easier, not just values. I’m curious about the rationale behind your suggestion.
An example of a long-standing philosophical problem that could eventually be solved in this way is the problem of consciousness: if we’re eventually able to build artificial brains and “upload” ourselves, by testing different designs we’d be able to figure out which material features give rise to qualia experiences, and by what mechanisms.
I think this will help, but it won’t solve the whole problem by itself, and we’ll still need to decide between competing answers without direct feedback from reality to help us choose. Even today there are people who deny the existence of qualia altogether and think it’s an illusion or some such, so I imagine there will also be people in the future who claim that the material features you say give rise to qualia experiences merely give rise to reports of qualia experiences.
We do receive feedback on this from reality, albeit slowly — through cultural evolution/natural selection. To the extent that this filter isn’t particularly strict, within the range it allows, variation will probably remain arbitrary.
So within this range, I still have to figure out what my values should be, right? Is your position that it’s entirely arbitrary, and any answer is as good as another (within the range)? How do I know this is true? What feedback from reality can I use to decide between “questions without feedback from reality can only be answered arbitrarily” and “there’s another way to (very slowly) answer such questions, by doing what most philosophers do”, or is this meta question also arbitrary (in which case your position seems to be self-undermining, in a way similar to logical positivism)?
I have no idea whether marginal progress on this would be good or bad
Is it because of one of the reasons on this list, or something else?
Math and science as original sins.
From Some Thoughts on Metaphilosophy:
Philosophy as meta problem solving: Given that philosophy is extremely slow, it makes sense to use it to solve meta problems (i.e., finding faster ways to handle some class of problems) instead of object level problems. This is exactly what happened historically. Instead of using philosophy to solve individual scientific problems (natural philosophy) we use it to solve science as a methodological problem (philosophy of science). Instead of using philosophy to solve individual math problems, we use it to solve logic and philosophy of math. [...] Instead of using philosophy to solve individual philosophical problems, we can try to use it to solve metaphilosophy.
It occurred to me that from the perspective of longtermist differential intellectual progress, it was a bad idea to invent things like logic, mathematical proofs, and scientific methodologies, because it permanently accelerated the wrong things (scientific and technological progress) while giving philosophy only a temporary boost (by empowering the groups that invented those things, which had better than average philosophical competence, to spread their culture/influence). Now we face the rise of China and/or AIs, both of which seem likely (or at least plausible) to be technologically and scientifically (but not philosophically) competent, perhaps in part as a result of technological/scientific (but not philosophical) competence having been made legible/copyable by earlier philosophers.
If only they’d solved metaphilosophy first, or kept their philosophy of math/science advances secret! (This is of course not entirely serious, in case that’s not clear.)
I am essentially a preference utilitarian
Want to try answering my questions/problems about preference utilitarianism?
Maybe I would state my first question above a little differently today: Certain decision theories (such as the UDT/FDT/LDT family) already incorporate some preference-utilitarian-like intuitions, by suggesting that taking certain other agents’ preferences into account when making certain decisions is a good idea, if e.g. this is logically correlated with them taking your preferences into account. Does preference utilitarianism go beyond this, and say that you should take their preferences into account even if there is no decision theoretic reason to do so, as a matter of pure axiology (values / utility function)? Do you then take their preferences into account again as part of decision theory, or do you adopt a decision theory which denies or ignores such correlations/linkages/reciprocities (e.g., by judging them to be illusions or mistakes or some such)? Or does your preference utilitarianism do something else, like deny the division between decision theory and axiology? Also does your utility function contain non-preference-utilitarian elements, i.e., idiosyncratic preferences that aren’t about satisfying other agents’ preferences, and if so how do you choose the weights between your own preferences and other agents’?
(I guess this question/objection also applies to hedonic utilitarianism, to a somewhat lesser degree, because if a hedonic utilitarian comes across a hedonic egoist, he would also “double count” the latter’s hedons, once in his own utility function, and once again if his decision theory recommends taking the latter’s preferences into account. Another alternative that avoids this “double counting” is axiological egoism + some sort of advanced/cooperative decision theory, but then selfish values have their own problems. So my own position on this topic is one of high confusion and uncertainty.)
Sorry about the delayed reply; I’ve been thinking about how to respond. One of my worries is that human philosophy is path-dependent: in other words, we’re prone to accepting wrong philosophical ideas/arguments, and it’s then hard to talk us out of them. The split of Western philosophy into analytic and continental traditions seems to be an instance of this, and even within analytic philosophy, academic philosophers strongly disagree with each other, are each confident in their own positions, and rarely get talked out of them. I think/hope that humans collectively can still make philosophical progress over time (in some mysterious way that I wish I understood) if we’re left to our own devices, but the process seems pretty fragile and probably can’t withstand much external optimization pressure.
On formalizations, I agree they’ve stood the test of time in your sense, but is that enough to build them into AI? We can see that they’re wrong on some questions, but we can’t formally characterize the domain in which they are right. And even if we could, I don’t know why we’d muddle through… What if we built AI based on Debate, but used Newtonian physics to answer physics queries instead of human judgment, or the humans were pretty bad at answering physics-related questions (including meta questions like how to do science)? That would be pretty disastrous, especially if there are any adversaries in the environment, right?
MacAskill is probably the most prominent, with his “value lock-in” and “long reflection”, but in general the notion of philosophical confusion/inadequacy seems a common component of various AI risk cases. I’ve been particularly impressed by John Wentworth.
That’s true, but neither of them has talked about the more general problem, “maybe humans/AIs won’t be philosophically competent enough, so we need to figure out how to improve human/AI philosophical competence”, or at least neither has said this publicly or framed their position this way.
The point is that it’s impossible to do useful philosophy without close and constant contact with reality.
I see, but what if there are certain problems which by their nature just don’t have clear and quick feedback from reality? One of my ideas about metaphilosophy is that this is a defining feature of philosophical problems or what makes a problem more “philosophical”. Like for example, what should my intrinsic (as opposed to instrumental) values be? How would I get feedback from reality about this? I think we can probably still make progress on these types of questions, just very slowly. If your position is that we can’t make any progress at all, then 1) how do you know we’re not just making progress slowly and 2) what should we do? Just ignore them? Try to live our lives and not think about them?
Interesting. Who are they and what approaches are they taking? Have they said anything publicly about working on this, and if not, why?
Each muted comment/thread is marked/flagged with an icon, color, or text to indicate to readers that the OP author can’t see it, and that if you reply to it, your reply will also be hidden from the author.
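To make this concrete, here’s a minimal sketch of the visibility and flagging rule I have in mind. All names (CommentNode, mutedByPostAuthor, etc.) are illustrative, not LessWrong’s actual data model.

```typescript
// Illustrative sketch of the mute-and-flag rule; names are made up, not
// LessWrong's actual schema.

interface CommentNode {
  id: string;
  parentId: string | null;
  mutedByPostAuthor: boolean; // true if the post author has muted this commenter
}

// A comment is hidden from the post author if it, or any ancestor in the
// thread, was written by a muted commenter.
function hiddenFromPostAuthor(
  comment: CommentNode,
  byId: Map<string, CommentNode>
): boolean {
  for (
    let c: CommentNode | undefined = comment;
    c !== undefined;
    c = c.parentId ? byId.get(c.parentId) : undefined
  ) {
    if (c.mutedByPostAuthor) return true;
  }
  return false;
}

// The post author's view simply omits these comments.
function visibleTo(
  comment: CommentNode,
  byId: Map<string, CommentNode>,
  viewerIsPostAuthor: boolean
): boolean {
  return !(viewerIsPostAuthor && hiddenFromPostAuthor(comment, byId));
}

// Everyone else still sees them, with a visible flag explaining the rule.
function muteFlagText(
  comment: CommentNode,
  byId: Map<string, CommentNode>
): string | null {
  return hiddenFromPostAuthor(comment, byId)
    ? "The post author cannot see this thread; replies to it will also be hidden from them."
    : null;
}
```

The point of the flag text is that readers never mistake the author’s silence for tacit agreement or endorsement, which addresses the “confusing and weird” objection above.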