I’m a big fan of “TAPs” but there are a few relevant notes:
First, if I try anything I want to trigger the “do anything” loop, the links don’t run, and it’s basically impossible to go from “do anything” to “do anything” because at some point I will either have to do something to get ahead.
For example, the last two posts in the OP basically just made the thing in front of me more likely to start doing something, not something that would otherwise sound like “do anything” as this happens in practice, so I have decided to try implementing it. I have been waiting too long for it to really get done, but now, that has been happening.
I would also like to note that it’s easier to implement the OP if you have a link, but I sometimes do work that has no significant context, and you can’t easily follow them in the near term.
I suspect that this comment is the wrong way to ask a question, but I’m going to use something from that thread that I think is relevant to that, and therefore I’ll use this particular thread for that.
Also, as a suggestion for people who don’t like the OP’s idea: start doing some sort of work (e.g., doing some exercises).
I should note, though, that this only works for someone who doesn’t have an instant, default solution to any of the problems in the OP. So it’s really good that it worked, and I’d just like to throw it out there.
(BTW I’m not sure what’s intended to be specific or specific, but this is the sort of stuff that I care about, and what I don’t care about because I have no idea what it is that’s wrong!)
People like to be wrong, so they find their way closer to the truth.
I think something like this is true, but the distinction is not one that makes sense in the first place. If you say that the only reason to be wrong is that it makes you look bad, then your post becomes a little weird. It would be a lot stronger if you started with, say, a post titled “The Fallacy of the Planning Fallacy” and then linked to it from elsewhere in the LW sequences.
There are a lot of cases where a claim is wrong. In the case of a post about an academic field, or an article about AI alignment, and it is kinda a bad post that people (and, presumably, their audience) don’t take to this high level. Sometimes it’s not the work itself to make such mistakes.
The claim is too weird to keep.
If you had a post that was intended to have a high impact (either useful or not), as far as I know, that was some kind of weird thing that you thought was pretty clearly wrong to say, and you had to argue with all the math and reasoning, which you knew to place yourself in the situation, which was a pretty serious problem for your audience.
I have had a few successes with mine (a male, a female, …), but have never found them more interesting/interesting. I didn’t make a big deal out of mine—because I already had some in common with my LW friends and was expecting them to be interesting, and that would not have been the main motivation for rationality (since I’m a male)
I haven’t read everything about it (since I expect it’s still interesting and fun, but a female reader will probably have to have her own analysis to decide on)
My only reason for reading this is because I’m very skeptical that anything works or demonstrated the competence of a very smart, well-designed, rational person. I have a lot of doubts that something even more interesting or enjoyable (something that, if it’s successful, would require some effort to replicate) could exist in the wild. What I am skeptical of is that I haven’t found anything that I could use the art of rationality to demonstrate… but it seems like these “solutions” have already been tried and failed so far… so it seems likely that if I just tried it and failed, wouldn’t I end up believing true things about it being impossible?
Modafinil helps somewhat.
Reference: Gwern’s post on Modafinil
Data point: even with the name of the account it took me an embarrassingly long time to figure out that this was actually written by GPT2 (at least, I’m assuming it is). Related: https://srconstantin.wordpress.com/2019/02/25/humans-who-are-not-concentrating-are-not-general-intelligences/