Yvain’s stuff is highly linkable elsewhere. His article is the go-to link for typical mind fallacy, for example.
It starts off great, with the imagination material, but then it goes downhill addressing the local PUA crap. Some of the comments are very insightful on the imagination topic, but the top ones are about the PUA crap. I actually recall wanting to link it a few years back, before I started posting much, but I searched for some other link instead because I did not want the PUA crap.
Honestly, it would have been a lot better if Yvain had started his own blog and built up a reader base over time. But few people have, I dunno, the arrogance to do that (I recall he wrote that he underestimates himself; that may be why), and so we are stuck with primarily the people who overestimate themselves doing the blogging, starting the communities, etc.
He has one! http://squid314.livejournal.com/
Well then he should write for his blog and sometimes have it cross-posted here, rather than writing for the LW audience. Do you seriously need to add extra misogynistic nonsense, and discuss as evidence what the borderline-sociopathic PUA community thinks about women, in an otherwise good post referencing a highly interesting study by Galton? Do you really need to go this far to please online white nerdy sexually frustrated male trash?
I’ve been avoiding this thread so far because I’m kind of uncomfortable with compliments, but luckily it’s descended into insults and I’m pretty okay with those.
Yes, I have a blog. I write blog posts much more often than I write Less Wrong posts (although they’re much lower quality and scatterbrained in all senses of the word). Sometimes I want to say something about rationality, and since I happen to know of this site that’s totally all about rationality with a readership hundreds of times greater than my blog, I post it here instead of (or in addition to) my blog. I promise you I didn’t just add the PUA reference for “a Less Wrong audience”; in fact, knowing what I know all these months later, I would have specifically avoided even mentioning it for exactly the reason that’s happening right now.
I have written about 150 posts for Less Wrong, and about 1200 in my blog. Of those, I can think of three that tangentially reference pick-up artistry as an example of something, and zero that are entirely about PUA or which express explicit support for it. According to my blog tagging system, three posts is slightly more than “cartography” or “fishmen” (two posts each), but still well below “demons” (fourteen posts). I don’t think it’s unreasonable to mention a movement with some really interesting psychology behind it about 50% more than I mention hypothetical half-man half-fish entities, or a quarter as often as I mention malevolent disembodied spirits.
More importantly, now that I’m talking to you...why is your username “private_messaging”?
Originally made this account to message some people privately.
Can you explain why the first thing to update, after Galton's amazing study into imagination, was your opinion of women in general, as determined by the PUAs' opinion of women versus women's opinion of women (the balance of conflicting opinions)? Also, by the way, it is in itself a great example of biased cognition: you run across some fact and update selectively; the fact should lower your weight on anyone's evaluation of anyone, but instead it just lowers your weight on women's evaluation of women.

Also, while I am sure that you did not consciously add it just for the LW audience, if you were writing for a more general audience it does seem reasonable to assume (given that you are generally a good writer) that you would not have included this sort of 'example' of applying Galton's findings.
Replied in accordance with your username to prevent this from becoming an Endless Back-and-Forth Internet Argument Thread.
Quote from the article in question:
And lest I sound chauvinistic, the same is certainly true of men. I hear a lot of bad things said about men (especially with reference to what they want romantically) that I wouldn't dream of applying to myself, my close friends, or to any man I know. But they're so common and so well-supported that I have excellent reason to believe they're true.
Does that really sound like someone who is doing a biased, partial update?
The PUAs' opinion of women was, nonetheless, not discounted for the typical mind fallacy. (Maybe the idea is that the typical mind fallacy doesn't work across genders or something, which would be a rather interesting hypothesis, but, alas, an unsupported one.)
Yes, I have a blog. I write blog posts much more often than I write Less Wrong posts (although they're much lower quality and scatterbrained in all senses of the word).
Write at higher quality, or make two sections, one good and one random. Write for a general audience, i.e. no awful LW jargon and no misuse of LW terminology ('rational' actually means something, and so does 'Bayesian'). Cross-post here. Come on, you said before, in 'calibrate your self-assessments', that you have a relatively low opinion of yourself.
Sometimes I want to say something about rationality, and since I happen to know of this site that's totally all about rationality with a readership hundreds of times greater than my blog,
It's Eliezer's former blog, hurr, durr; it's the people who didn't cringe too hard at stuff like http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/ . How did he get those readers? He split off from Hanson. I do have a high opinion of you overall. Much higher than the one I have of EY.
Downvoted.
I hope I would downvote any comment containing the judgement-words “nonsense”, “sociopathic” and “trash” (referring to a subset of the LW readership) regardless of the position being advocated. The book Non-violent Communication advises making observations and expressing feelings, but avoiding rendering judgments. A “judgement” can be defined as a phrase or statement that can be expected to diminish the status or moral standing of a person or group.
Parenthetically, it has been proposed that one of the ways online forums unravel over time is that a few people who like making strong judgments show up and get into long conversations with each other, which tends to discourage participants for whom the strong judgments distract from their reasons for participating.
Judgements are very often instrumentally useful. Also, I do happen to see this subset of internet users every bit as negatively as I said, and so do many other people who usually don't say anything about it; and insofar as I do not think that seeing them more positively would be instrumentally useful, it would not be quite honest to see them that way and never say so.

edit: also, some background. I am a game developer. We see various misogynistic crap every damn day (just look into any online communication system). Also, as for the PUAs: they see people (women) as objects, seek sexual promiscuity and shallow relationships, are deliberately manipulative, etc., etc., scoring more than enough points on a sociopathy traits checklist for a diagnosis (really only missing the positive points, like charisma). Taking what they think about women as likely true is clearly a very poor example of evidence for a post about the typical mind fallacy.
In the interest of the discussion, here is the article in question
It actually is a perfect example of how LW is interested in science:
There is the fact that some people have no mental imagery but live totally normal lives. That's amazing! They're more different from us than you usually imagine sci-fi aliens to be! And yet there is no obvious difference. It is awesome. How does that even work? Do they have mental imagery somewhere inside but no reflection on it? Etc., etc.

And the first thing that was done with this awesome fact here, by the author, was to 'update' in the direction of trusting the PUA community's opinion of women more than women themselves. That's not even a sufficiently complete update, because the PUA community (especially the manipulative misogynists with zero morals and the ideal of becoming a clinical sociopath per the checklist, along with their bragging that has selection bias and an unscientific approach to data collection written all over it) is itself prone to the typical mind fallacy (as well as a bunch of other fallacies) when it sees women as beings just as morally reprehensible as its members themselves are.
This, cousin_it, is a case example of why you shouldn't be writing good work for LW. Some time back you were on the verge of something cool; perhaps even proving that defining real-world 'utility' is incredibly computationally expensive for UDT. Instead, well, yeah, there's the local 'consensus' on AI behaviour, and you explore for potential confirmations of it.
the manipulative misogynists with zero morals and the ideal of becoming a clinical sociopath per the checklist, along with… [an] unscientific approach to data collection
A classic Arson, Murder, and Jaywalking right there.
I don’t know, given the harm bad data collection can do, I’m not sure being a clinical sociopath is much worse.
Whatever data on physiology the Nazis collected correctly, we rely on today. Even when very bad people collect data properly, the data is usable. When it's online bragging by people fascinated with 'negs'… not so much. The required condition is that the data is badly collected; the guys merely trying to be sociopaths does not suffice.
Some time back you were on the verge of something cool; perhaps even proving that defining real-world 'utility' is incredibly computationally expensive for UDT. Instead, well, yeah, there's the local 'consensus' on AI behaviour, and you explore for potential confirmations of it.
You seem to be saying: “you were close to realizing this problem was unsolvable, but instead you decided to spend your time exploring possible solutions.”
Generally, you seem to be continually frustrated about something to do with wireheading, but you’ve never really made your position clear, and I can’t tell where it is coming from. Yes, it is easy to build systems which tear themselves to pieces, literally or internally. Do you have any more substantive observation? We see a path to building systems which have values over the real world. It is full of difficulties, but the wireheading problems seem understood and approachable / resolved. Can you clarify what you are talking about, in the context of UDT?
We see a path to building systems which have values over the real world.
The path he sees has values over an internal model, but the internal model is perfect AND faster than the real world, which stretches it a fair lot if you ask me. It's not really a path; he's simply relying on "a sufficiently advanced model is indistinguishable from the real thing". And we still can't define what paperclips are if we don't know the exact model that will be used, as the definition is only meaningful in the context of a model.

The objection I have is that it is: (a) unnecessary to define values over the real world (the alternatives work fine, e.g. for finding imaginary cures for imaginary diseases which we then match to real diseases); (b) very difficult or impossible to define values over the real world; and (c) values over the real world are necessary for the doomsday scenario. If this can be narrowed down, then that's precisely the bit of AI architecture that has to be avoided.

We humans are messy creatures. It is very plausible (in light of the potential irreducibility of 'values over the real world') that we value internal states of the model, and that we also receive negative reinforcement for model-world inconsistencies (when the model's prediction of the senses does not match the senses), resulting in a learned preference not to lose correspondence between model and world. That is in place of the straightforward "I value real paperclips, therefore I value having a good model of the world", which looks suspiciously simple and matches the observations poorly (no matter how much you tell yourself you value real paperclips, you may still procrastinate).

edit: and if I don't make my position clear, that is because I am opposed to fuzzy, ill-defined woo where the distinction between models and worlds is poorly drawn and the intelligence is a monolithic blob. It's hard to state an objection to an ill-defined idea which always veers off into some anthropomorphic idea (e.g. wireheading gets replaced with the real-world goal of having a physical wire in a physical head that is to be kept alive with the wire).
It is very plausible [...] that we value internal states of the model, and that we also receive negative reinforcement for model-world inconsistencies [...], resulting in a learned preference not to lose correspondence between model and world
Generally correct; we learn to value good models, because they are more useful than bad models. We want rewards, therefore we want to have good models, therefore we are interested in the world out there. (For a reductionist, there must be a mechanism explaining why and how we care about the world.)
Technically, sometimes the most correct model is not the most rewarded model. For example, it may be better to believe a lie and be socially rewarded by members of my tribe who share the belief than to have a true belief that gets me killed by them. There may be other situations, not necessarily social, where perfect knowledge is out of reach and a better approximation may lie in the "valley of bad rationality".
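The mechanism sketched in this exchange can be illustrated with a toy simulation (everything here, from the environment to the learning rule, is an invented example, not anyone's actual proposal): an agent that simply reinforces whichever internal model earns it more reward ends up trusting the more accurate model, without ever valuing "the world out there" directly.

```python
import random

random.seed(0)

def environment():
    # The world: outcome "1" occurs 80% of the time.
    return 1 if random.random() < 0.8 else 0

# Two competing internal models (hypothetical): one predicts the
# frequent outcome, the other the rare one.
models = {"accurate": 1, "inaccurate": 0}
trust = {"accurate": 0.5, "inaccurate": 0.5}  # initial indifference

for _ in range(1000):
    for name, prediction in models.items():
        # Acting on a model is rewarded iff its prediction matches the world.
        reward = 1.0 if prediction == environment() else 0.0
        # Simple reinforcement: trust drifts toward the observed reward rate.
        trust[name] += 0.01 * (reward - trust[name])

# The agent ends up trusting the model that tracks the world better,
# purely as a side effect of chasing reward.
print(trust["accurate"] > trust["inaccurate"])  # True
```

Replacing the reward with a socially supplied signal that favors the wrong model would flip the outcome, which is exactly the caveat about rewarded lies made above.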
it is unnecessary to define values over the real world (the alternatives work fine, e.g. for finding imaginary cures for imaginary diseases which we then match to real diseases) [...] that's precisely the bit of AI architecture that has to be avoided.

In other words, make an AI that only cares about what is inside the box, and it will not try to get out of the box.
That assumes that you will feed the AI all the necessary data and verify that the data is correct and complete, because the AI will be just as happy with any kind of data. If you give incorrect information to the AI, the AI will not care, because it has no definition of "incorrect"; this holds even in situations where the AI is smarter than you and could have noticed an error that you didn't notice. In other words, you are responsible for giving the AI a correct model, and the AI will not help you with this, because it does not care about the correctness of the model.
You have it backwards… making an AI that cares about truly real stuff as its prime drive is likely impossible, and we certainly don't know how to do it, nor do we need to. edit: i.e. you don't have to sit and work and work and work to find out how to make some positronic mind not care about the real world. You get that simply by omitting some mission-impossible work. Specifying what you want, in some form, is unavoidable.

Regarding verification, you can have the AI search for the code that best predicts the input data; then, if you are falsifying the data, that code will include a model of your falsifications.
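A minimal sketch of that verification point (the candidate set and all names are invented for illustration): if the AI scores candidate predictors purely by fit to the observed stream, and the stream is being systematically falsified, the best-scoring predictor is the one that has absorbed the falsification into its model.

```python
# True process: the world emits n at step n.
true_world = list(range(20))
# The operator falsifies the feed by doubling every reading.
observed = [2 * n for n in true_world]

# Hypothetical candidate predictors the AI searches over.
candidates = {
    "identity": lambda n: n,
    "double": lambda n: 2 * n,   # the world composed with the falsification
    "offset": lambda n: n + 1,
}

def score(predict):
    # Sum of squared prediction errors over the observed stream (lower is better).
    return sum((predict(n) - obs) ** 2 for n, obs in enumerate(observed))

best = min(candidates, key=lambda name: score(candidates[name]))
print(best)  # "double": the winning predictor models the falsification
```

The point carries over to richer hypothesis classes: whatever transformation sits between the world and the sensor feed gets learned as part of the best predictor.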
And the first thing that was done with this awesome fact here, by the author, was to 'update' in the direction of trusting the PUA community's opinion of women more than women themselves. […]

This is a really good point …
This, cousin_it, is a case example of why you shouldn't be writing good work for LW.
… which utterly fails to establish the claim that you attempt to use it for.
Context, man, context. cousin_it's misgivings are about the low local standards. This article is precisely a good example of such low local standards. And note that I was not picking a strawman here; it was chosen as an example of the best. The article would have been torn to shreds in most other intelligent places (consider the Ars Technica Observatory forum) for the bit that I am talking about.

edit: also, on the 'good point': this is how a lot of the rationality here goes: partial updates handled incorrectly. You have a fact that affects literally every opinion one person holds of another person, and you proceed to update in the direction of confirming your existing opinions and your existing choices of what to trust. LW has an awfully low standard for anything that agrees with local opinions.

This pops up in utility discussions too. E.g. certain facts (the possibility of a huge world) scale down all utilities in the system, leaving all actions unchanged. But the actual update that happens in an agent that does not handle meta-reasoning correctly for a real-time system updates some A before some B, and then suddenly there are enormous differences between utilities. It's just a broken model. Theoretically speaking, A being updated while B is not is in some sense more accurate than neither being updated, but everything that depends on the relation between A and B is messed up by the partial update. The algorithms for real-time belief updating are incredibly non-trivial (as are the algorithms for Bayesian probability calculation on graphs in general, given cycles and loops). The theoretical understanding behind the rationalism here is just really, really, really poor.
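The utility-scaling complaint above can be made concrete with a tiny numeric sketch (the utilities and the scale factor are invented): multiplying every utility by the same factor leaves the chosen action unchanged, but an agent that acts while only A has been rescaled acts on a broken, preference-reversed model.

```python
utilities = {"A": 10.0, "B": 8.0}

def choose(u):
    # Pick the action with the highest utility.
    return max(u, key=u.get)

assert choose(utilities) == "A"

# Consistent update: discovering a "huge world" scales ALL utilities down.
scaled = {k: v * 1e-3 for k, v in utilities.items()}
assert choose(scaled) == "A"  # relative ordering preserved, action unchanged

# Partial update: A has been rescaled, B not yet.
partial = {"A": utilities["A"] * 1e-3, "B": utilities["B"]}
print(choose(partial))  # "B": the preference flips on the half-updated state
```

This is the sense in which a half-applied update can be worse than no update at all: anything that depends on the *relation* between A and B is wrong until the update completes.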