I’m pretty sure “qualia do not exist” is an extreme fringe position. You seem to be under the impression that materialists deny qualia, which is not the case.
That said, this is a decent argument against the position that qualia do not exist.
I think having a Schelling day for trying weird stuff is good, and April Fool’s Day seems fine. I don’t feel nearly as strongly as you seem to that April Fool’s jokes are never partially serious.
All the smart trans girls I know were also smart prior to HRT.
People seem to be blurring the difference between “The human race will probably survive the creation of a superintelligent AI” and “This isn’t even something worth being concerned about.” Based on a quick Google search, Zuckerberg denies that there’s even a chance of existential risk here, whereas I’m fairly certain Hanson thinks there’s at least some.
I think it’s fairly clear that most skeptics who have engaged with the arguments to any extent at all are closer to the “probably survive” part of the spectrum than the “not worth being concerned about” part.
I mean, surely Eliezer is going to have somewhat dath-ilan-typical preferences, having grown up there.
But personally, I think having such a standard is both unreasonable and inconsistent with the implicit standard set by essays from Yudkowsky and other MIRI people.
I think this is largely coming from an attempt to use approachable examples? I could believe that there were times when MIRI thought that even getting something as good as ChatGPT might be hard, in which case they should update, but I don’t think they ever believed that something as good as ChatGPT would clearly be sufficient. I certainly never believed that, at least.
I think a lot of travel expenses?
I think a comment “just asking for people to withhold judgement” would not be especially downvoted. I think the comments in which you’ve asked people to withhold judgement include other incredibly toxic behavior.
I think both emotions are helpful at motivating me.
I feel like Project Lawful, as well as many of Lintamande’s other glowfic since then, has given me a much deeper understanding of… a collection of virtues including honor, honesty, trustworthiness, etc., which I now mostly think of collectively as “Law”.
I think this has been pretty valuable for me on an intellectual level—I think, if you show me some sort of deontological rule, I’m going to give a better account of why/whether it’s a good idea to follow it than I would have before I read any glowfic.
It’s difficult for me to separate out how much of that is due to Project Lawful in particular, because ultimately I’ve just read a large body of work, all of which served as training data for a particular sort of thought pattern that I’ve since learned. But I think this particular fragment of the rationalist community has given me some valuable new ideas, and it’d be great to figure out a good way of acknowledging that.
While the idea that it’s important to look at the truth even when it hurts isn’t revolutionary in the community, I think this post gave me a much more concrete model of the benefits. Sure, I knew the abstract arguments that facing the truth is valuable, but I don’t know if I’d have identified it as an essential skill for starting a company, or identified its absence as a critical component of staying in a bad relationship. (I think my model of bad relationships was that people knew leaving was a good idea but were unable to act on that information—but in retrospect, inability to even consider leaving might totally be what’s going on some of the time.)
I don’t think security mindset means “look for flaws.” That’s ordinary paranoia. Security mindset is something closer to “you better have a really good reason to believe that there aren’t any flaws whatsoever.” My model is something like “A hard part of developing an alignment plan is figuring out how to ensure there aren’t any flaws, and coming up with flawed clever schemes isn’t very useful for that. Once we know how to make robust systems, it’ll be more clear to us whether we should go for melting GPUs or simulating researchers or whatnot.”
That said, I have a lot of respect for the idea that coming up with clever schemes is potentially more dignified than shooting everything down, even if clever schemes are unlikely to help much. I respect carado a lot for doing the brainstorming.
How did you decide on the image?
I would very much like to read your attempt at conveying the core thing—if nothing else, it’ll give another angle from which to try to grasp it.
Aside from the fact that I just find this idea extremely hilarious, it seems very worthwhile to me to try to convince people who might be able to make progress on the problem to actually try. It’s dubious whether literally sending Terry Tao 10 million dollars is the best way to go about that, but the general strategy seems important.
I’d argue the sequences / HPMOR / whatever were versions of that strategy to some extent and seem to have had notable impact.
So if a UFO lands in your backyard and aliens ask you if you want to go on a magical (but not particularly instrumental) space adventure with them, I think it’s reasonable to very politely decline and get back to work solving alignment.
I think I’d probably go for that, actually, if there isn’t some specific reason to very strongly doubt it could possibly help? It seems somewhat more likely that I’ll end up being decisive via space adventure than by mundane means, even if there’s no obvious way the space adventure will contribute.
This is different if you’re already in a position where you’re making substantial progress, though.
Can this be partially fixed by using uBlock Origin or the like to hide certain elements of the page? I’d expect it to help at least somewhat, if imperfectly; I’m not sure if you’ve tried it.
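For concreteness, here’s a rough sketch of the kind of cosmetic filters I have in mind (the site and class names below are made-up placeholders, not the page’s real selectors; uBlock Origin’s element picker will generate the real ones for you, and you can paste them into “My filters”):

! Hypothetical example filters; replace with selectors from the element picker
example.com##.karma-display
example.com##.comment-vote-buttons

Each line hides every element matching that CSS selector on the given site, and it persists across reloads, unlike hiding things by hand in the browser’s developer tools.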
I don’t think the point of the detailed stories is that they strongly expect that particular thing to happen? It’s just useful to have a concrete possibility in mind.
I think it’s pretty reasonable to choose to do something a little out-there / funny on April Fool’s, even if there are additional more serious reasons to do it.
I think temporarily front-paging urgent, actionable information makes sense.