I guess these are the few sentences which do this, e.g. “I thought it sounded stupid/tired/misleading/obvious”; and presumably the smarter the reader, the better such sentences work.
I’ll have to go back and reread the first paragraph. The second paragraph (“Hey guys, I just looked at this; I’m curious what LW’s takeaways are, and why”) is the only part that feels familiar, apart from the last paragraph. Do you have a good explanation for the “the other posts are terrible, I’ll just go and read the second one” paragraph? Perhaps not, but given that my model of you is one I trust, the second paragraph alone isn’t enough.
Please try to read your post in full, and provide concrete examples and solutions. Thanks for your time, and I’m glad you wrote each one.
(Also, I just realized that there are more than four of us. I don’t have the space to do much else there, but I could use a few people if you’re interested in doing it.)
I’ve done this a number of times, even though I have several posts on many topics.
To clarify, the first reason I write most of my posts is to see what others think of the topic as a rationality-related one. The second reason is to see what the discussion is already covering in detail, and to learn more about the topic in depth.
I think you meant “applied postrationality.”
Yes, I am, and I am sure that there are, by and large, obvious failure modes in thinking about rationality. However, it’s not obvious that a post like this is useful, i.e., an epistemically useful post that readers could actually make use of.
This is a very good post.
Another important example:
But it’s possible to find hidden problems within the problem, and that is itself quite a challenging problem.
What if your intuitions come from computer science, machine learning, or game theory; can you exploit them? If you’re working on something like the brain of a general intelligence, or on the problem-solving problem itself, what do you do to get started?
When I see problems in a search algorithm, my intuition has to send the message that something is wrong. All of my feelings are about how the work gets done and how it usually goes wrong.
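As a loose illustration of that “something is wrong” signal (my own sketch, not anything from the discussion; the graph, the function name, and the dangling-edge check are all my assumptions), a search routine can turn the intuition into an explicit invariant:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search returning a shortest path, with a sanity
    check that plays the role of the 'something is wrong' intuition."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            # Intuition-as-invariant: a neighbour missing from the
            # graph's key set usually means the input is malformed.
            if nbr not in graph:
                raise ValueError(f"dangling edge {node!r} -> {nbr!r}")
            if nbr not in seen:
                seen.add(nbr)
                frontier.append(path + [nbr])
    return None  # no path exists

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs_path(g, "a", "d"))  # → ['a', 'b', 'd']
```

The point of the explicit `ValueError` is that a vague feeling of wrongness becomes a concrete, checkable condition instead of a silent bad answer.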
I am a huge fan of the SSC comments and, I believe, of the other style, or at least of a significant portion of LW, but I have a hard time keeping up with them, and I am worried that I am not following them closely enough.
The whole point of the therapy thing is that you don’t know how to describe the real world.
But there’s a lot of evidence that it is a useful model, and I have a big, strong intuition that it is a useful thing, so it isn’t really an example of something that “gives you away.” (You still have to interpret the evidence to see what it implies, each time.)
[EDIT: Some commenters pointed to “The Secret of Pica,” which I should have read as an appropriate description of the field; see here.]
I’m interested in people’s independent opinions, especially their opinions expressed here before I’ve received any feedback.
Please reply to my comment below saying that I am aware of no such thing as psychotherapy.
Consider the following research while learning about psychotherapy. It is interesting because I do not have access to the full scientific data on the topic being studied. It also appears highly addictive, with fairly high attrition rates.
Most people would not rate psychotherapy as good “for the long run.” Some would say that it is dangerous, especially until they are disabled or in a negatively altered state. Most people would agree that it is not. But as you read, there is a qualitative difference between a treatment that worked and one that did not.
I know that I’m biased against the former, but this sentence is so political that I can only hope you will pardon my blurting it out.
I think that your “solution” is the right one; I just don’t see any reason to believe it was intended as one.
“It’s going to be a disaster,” you say. “And it’s always a disaster.”
My personal take on the math of game theory is that most games are really, really simple to play. It’s easy to imagine that a player has a huge advantage and thus needs more knowledge than a team of AI team leads to play.
But as you write, that’s not something you’d expect to happen if even the really simple games were unplayable. Precisely because playing and solving them is a big challenge, we should expect a substantial number of games to have proven good enough to actually play (you can find out how good you are by trying to figure it out, or by trusting what the AI researchers write).
In fact, given that you can play any game you choose to play, you may even get the chance to make your own game. I imagine that’s not so helpful if you’re mindlessly trying to think in words; but if you want a game, it’s yours to prove out.
But I also offer the chance to write a computer game based on prediction markets. I can write the game. I can write an email to the game designer, proposing solutions, or promising any solution the rules allow.
I’m sure it wasn’t the most important game, but it’s the first example I took a lot of experience away from. I wasn’t going to write this comment, so instead I’m going to write a simpler game.
I will publish the full logs for anyone who wants it.
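For concreteness, a toy version of such a prediction-market game (my own sketch, not the commenter’s actual game; the class name and the choice of Hanson’s logarithmic market scoring rule are my assumptions) could look like:

```python
import math

class LMSRMarket:
    """Toy binary prediction market using the logarithmic market
    scoring rule (LMSR). `b` controls liquidity: larger b means
    trades move the price less."""

    def __init__(self, b=100.0):
        self.b = b
        self.q = [0.0, 0.0]  # outstanding shares for YES / NO

    def _cost(self, q):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        """Current implied probability of outcome (0 = YES, 1 = NO)."""
        denom = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[outcome] / self.b) / denom

    def buy(self, outcome, shares):
        """Buy `shares` of `outcome`; returns the trader's cost."""
        old = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - old

m = LMSRMarket(b=100.0)
print(round(m.price(0), 2))  # fresh market starts at 0.5
cost = m.buy(0, 50.0)        # buying YES pushes its price above 0.5
print(m.price(0) > 0.5, cost > 0)
```

Logging every `buy` call would give exactly the kind of full game logs mentioned above.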
That doesn’t mean your view can’t be correct; it’s as true as you claim it to be. The claim is that it’s difficult to determine whether there is actually a law of physics about how to deal with quantum mechanics.
If there weren’t, then you would be far wrong. If there were, then you and I would simply have different opinions. What I would propose is a frame for our disagreement about what “true” means: we should be neither too confident nor too skeptical about other people’s positions on the theory, since either can push us into overly harsh criticism, or make us look like the kind of fool who hasn’t accepted them yet.
I think the correct approach to this problem is to ask: how confident are we that the point being made is the correct one? It seems obvious to me that we have no idea about the nature of the dispute. If I disagree, then I’ll go first.
If a question is really important and it comes down to people saying “I think X,” then it ought to come down to the following:
“I think X is true, and therefore Y is true. If we disagree, then you think X is false, and therefore you think Y is false.”
In this case, if we said the same thing but had a different conversation (as with Mr. Lee’s comment at the end of the chapter), our disagreement could be resolved by someone else directly debating the point (we could debate the details of this argument with whoever disagrees).
In other words, we all agree that we should be confident we have considered the point, but it’s better to accept that we’re making a concession. The real point is that we shouldn’t be confident in an argument we wouldn’t otherwise be confident would work.
In all cases, this is the point it often seems to come down to.
This may seem like a pretty simple and obvious argument, and it is. The point, it seems, is that there are many situations where you and some of your friends agree that the question should be resolved, and that the answer ought to be fairly obvious, and yet the disagreement turns out to be a bit more complicated.
I read somewhere that there’s a norm in academia that it should never be controversial for a student to
For example, you don’t mention that your own score is 3⁄4 of your own total. Since you don’t get extra points for a similar point (which this is), you would have to be a single person, or even a group of like-minded people; your share of the resources is 10%, while your ratio is 9⁄6 of your own.
I wonder if it might be better to ignore your own metrics (and thus treat your measure as something more complicated but still much higher):
You don’t need to report a score of only 10%.
You don’t need to estimate the total resources you’ve sent (help, money, etc.).
You don’t need to estimate the total amount of money you’re spending against your metrics.
You don’t need to reuse answers like the ones you’re giving.
It’s kind of like the tiniest part of my definition of futility.
I like the idea of using a “high level” section in a post, but it’s hard to do much better than writing a bunch of summaries. That part is just confusing to me.
There’s a lot to explain here, but I hope some of it can be discussed together. For example, I didn’t like the term “high level” when I tried to argue with the post about how I understand the concept. I think “high level” reads as a stronger phrase than “high-level”: it’s easier to describe once you can define the higher-level concepts more clearly. For my purposes here, “high level” is simply the term I’m using for “high-level.”
And now I’ve tried to make “high-level” refer to things within the high-level concept: “high-level” has to mean something to you, and “low-level” is whatever you contrast it with. That way you can understand it better, provided you can define a term as a synonym for “high level.”
(I’m starting to think I’m going to call it the “yitalistic level.” But then why do I call it the “high level”? I find that definition hard to pin down.)
(I don’t care whether they’ve been used by people like me, but anyone probably ought to have noticed by now.)
Do you know if anyone has done this? I’m pretty sure your comment was accepted, or so it seems to me. By contrast, gjm’s post and mine (related to Eliezer’s post on the question of how much to trust; that post is, in fact, interesting) seem to be basically the same.
If I want to say something about my own subjective experience, I could write that paragraph from a story I’ve been told, say “Hey, I don’t have to believe it any more,” and then leave it at that.
I’m not a fan of the first one. That is, my subjective experience (as opposed to the story I was told) has no relevance to my real experience of that scene, so I can’t say for certain which one in particular is the right one.
I also have a very important factual issue with having a similar scene (to an outsider) in which a different person can’t help but help, which I do find confusing; and in that case, if my real feelings about the scene are even somewhat similar to the feelings within it, the scene will come across as very awkward.
So if someone can help me with this stuff: I can’t exactly ask to be arrested for letting anyone out on the street, or for providing any evidence that they’re “trying to pretend.”
(I’m also assuming that the scene has to be produced by some kind of random generator, or by some technique that doesn’t draw on anything in the original text.)
I want to see the X and Y that you are describing, but I don’t feel confident I can make sense of them. So the question to ask, the one to which my brain replies, “You’re just as likely to get this wrong as right,” seems to me a really important one.
For me, the fact that my post is still here means something: there are people who are working on it. I want to encourage them to keep working on it, so I need to get a leg up on it myself.
My own, LessWrong-ish reaction is the one I’d have a problem with. My first reaction is “of course it helps, but...”, which isn’t enough to carry this post. Since the post didn’t fit my goals and my motivation is insufficient, I need to change that.
(Note: I’m not saying you have to take these posts seriously or otherwise deal with them, nor am I saying you shouldn’t. I’m saying “you may not like my post, but I would prefer that you take it seriously,” because the only reason I’d ask is so that I don’t need to.)
I’ve noticed a very interesting paper similar to this, one I’ve been working on but have only posted to the blog.
It shows up in the sidebar and at the top; it displays the first draft of a draft; it’s very well written, though the rest is non-obvious to newcomers.
I am having trouble understanding why one would think I would want to be happy having an arbitrary number of people live with me.
First of all, there’s one specific failure mode this might be relevant to, and it’s that it’s easy to assume you know how happy those people are. I’m not going to try as hard as I can to be happy about being a good person, nor can I ever really justify that to myself.
Suppose I am sitting around with my friends, who have no emotional response to certain stimuli or desires. I am also waiting for the sound teacher’s phone number, at a restaurant with an unfamiliar family, along with the class as a whole. We are waiting for the bus to get somewhere, and the sound teacher decides to put the “real” cost behind it by giving us a dollar amount and a fraction of it. I have the feeling that there is some $10 in that money, but later that $10 turns out to be an outright trick to get me back.
But I don’t even know what it is that I am feeling. It’s something I’ve been doing for quite a while, and I do feel bad about it, but I don’t know why. I don’t even know why I am feeling it, and I don’t know how to describe it to my friends, let alone to others, so I can’t really offer any particular answer. It’s hard enough for me to use the label “happy” in that sentence, but it’s harder still to describe the feelings that make the word make sense as “sad” rather than “happy.” I do know that these words are loaded with negative connotations, but what makes the word “happy” trigger all those connotations is that they seem inherently negative.
So if you’re going to try to learn to speak Spanish (or French) and so on, you really need to know a lot of the basic language (to speak it properly) and to have been practicing for years or so.
I would bet that you could come up with reasonably clear language for some topics that this language doesn’t give you words for.
(And if a language gives you bad-sounding phrasing, don’t spend a lot of effort trying to be clear; you’ll just be frustrated, unless you’re getting a good deal of practice that lets you use it correctly...)
Also, I have my own thoughts about the way LessWrong is supposed to work; in general, I don’t know whether Kaj Sotala would share these thoughts about writing a rational wiki and having to write things up.
I might add a note at the end of the piece; that seems appropriate in this conversation.