Excellent insight. Downvoted.
Oligopsony
So, let’s say some bros of mine and I have some hand-signals for, you know, bro stuff. And one of the signals means, “Oh, shit. Here comes that girl! You know. That girl. She’s coming.” That signal has a particular context. Eventually, one of my bros gets tired of sloppy use of the signal, and sets about laying out specifically what situations make a girl that girl. If I used the signal in a close-but-not-quite context, he’d handle it and then pull me aside and say, “I know she and I had that thing that one time, but we never… well, it wasn’t quite THAT. You know? So that signal, it freaked me out, because I thought it had to be someone else. Make sure you’re using it properly, okay?” And I’d be like, “Bro. Got it.”
Another friend of mine, he recognizes the sorts of situations we use the signal in have a common thread, so he begins using the hand signal for other situations, any situation that has the potential for both danger and excitement. So if someone invites us to this real sketchy bar, he’ll give me the signal—“This could be bad. But what if it’s not?” And I’d respond, “I see what you did there.”
Maybe you see where this is going. We’re hanging out one day, and some guy suggests we crash some party. Bro #2 signals, and bro #1 freaks out, looking around. And then he’s like, “OH FUCK I HAVE TO CALL HER.” And #2 says, “No, dude, there’s no one coming. I just meant, this is like one of those situations, you know?” And they’re pissed at each other because they’re using the same signal to mean different things. I’m not mad, because I generally know what they each mean, but I have more context than they do.
The same thing probably happens with analytics and Continentals.
Individually very minor, petty reasons, befitting a very minor, petty action:
1) It bored me.
2) Your research skills are very impressive, and I’d rather they be directed towards CEV or the like.
3) Ugh field concerning this site and sex/dating questions.
4) There’s no puzzle to it; you’re not illustrating any broader methodological point or coming to any new conclusions, just acting as a clearinghouse for dating advice.
5) “A Rational Approach to...”
On precision in aesthetics, metaethics:
RS: Butt-Head, I have a question for you. I noticed that you often say, “I like stuff that’s cool.” But isn’t that circular logic? I mean, what is the definition of “cool,” other than an adjective denoting something the speaker likes?
BH: Huh-huh. Uh, did you, like, go to college?
RS: You don’t have to go to college to know the definition of “redundant.” What I’m saying is that essentially what you’re saying is “I like stuff that I like.”
B: Yeah. Huh-huh. Me, too.
BH: Also, I don’t like stuff that sucks, either.
RS: But nobody likes stuff that sucks!
BH: Then why does so much stuff suck?
B: Yeah. College boy! Huh-huh, huh-huh.
-Rolling Stone, Interview with Beavis and Butt-Head
I think you and Alicorn may be talking past each other somewhat.
Throughout my life, it seems that what I morally value has varied more than what rightness feels like—just as it seems that what I consider status-raising has changed more than what rising in status feels like, and what I find physically pleasurable has changed more than what physical pleasures feel like. It’s possible that the things my whole person is optimizing for have not changed at all, that my subjective feelings are a direct reflection of this, and that my evaluation of a change of content is merely a change in my causal model of the production of the desiderata (I thought voting for Smith would lower unemployment, but now I think voting for Jones would, etc.). But it seems more plausible to me that
1) the whole me is optimizing for various things, and these things change over time,
2) and that the conscious me is getting information inputs which it can group together by family resemblance, and which can reinforce or disincentivize its behavior.

Imagine a ship which is governed by an anarchic assembly below decks and captained by an employee of theirs whom they motivate through in-kind bonuses. So the assembly at one moment might be looking for buried treasure, which they think is in such-and-such a place, and so they send her baskets of fresh apples when she’s steering in that direction and baskets of stinky rotten apples when she’s steering in the wrong one. For other goals (refueling, not crashing into reefs) they send her excellent or tedious movies and gorgeous or ugly cabana boys. The captain doesn’t even have direct access to what the apples or whatever are motivating her to do, although she can piece it together. She might even start thinking of apples as irreducibly connected to treasure. But if the assembly decided that they wanted to look for ports of call instead of treasure, I don’t see why in principle they couldn’t start sending her apples in order to do so. And if they did, I think her first response would be, if she was verbally asked, that the treasure—or whatever the doubloons constituting the treasure ultimately represent in terms of the desiderata of the assembly—had moved to the ports of call. This might be a correct inference—perhaps the assembly wants the treasure for money and now they think that comes better from heading to ports of call—but it hardly seems to be a necessarily correct one.
If I met two vampires, and one said his desire to drink blood was mediated through hunger (and that he no longer felt hunger for food, or lust) and another said her desire to drink blood was mediated through lust (and that she no longer felt lust for sex, or hunger) then I do think—presuming they were both once human, experiencing lust and hunger like me—they’ve told me something that allows me to distinguish their experiences from one another, even though they both desire blood and not food or sex.
They may or may not be able to explain what it is like to be a bat.
Unless I’m inserting a further layer of misunderstanding, your position seems to be curiously disjunctivist. I or you or Alicorn or all of us may be making bad inferences in taking “feels like” to mean “reminds one of the sort of experience that brings to mind...” (“I feel like I got mauled by a bear,” says someone who was not just mauled, and maybe never mauled, by a bear) or “constituting an experience of” (“what an algorithm feels like from the inside”) when the other is intended. This seems to be a pretty easy elision to make—consider all the philosophers who say things like “well, it feels like we have libertarian free will...”
1) If we ask whether the entities embedded in strings watched over by the self-consistent universe detector really have experiences, aren’t we violating the anti-zombie principle?
2) If Tegmark possible worlds have measure inversely proportional to their algorithmic complexity, and causal universes are much more easily computable than logical ones, should we then be unsurprised to find ourselves in an (apparently) causal universe even if the UE includes logical ones?
Do they think, “oh, an unfunny show for little girls” or “oh, an unfunny show for nerdy men?” The latter is consistent both with everything I have heard about the pony show and with a cartoon built around Cthulhu jokes.
And since it seems more likely than not that someone will ask: No, I’ve never tried using such simulationist sympathetic magic myself, and since I still question the basic assumptions behind such mass-simulation in the first place, I have no intention of trying it in the future, either.
I see what you did there.
I have frequently stopped responding to people because I failed to respond immediately, and then forgot that the conversation existed. I have no idea how common this is.
1) Sample size.
2) Anti-weird biases are likely to be stronger in person.
3) People who find cryonics attractive are more likely to learn more about it and to seek out discussions of it. In a small group environment, people instead just comment on whatever is being discussed. So most LW types may be uninterested in cryonics, either because they find it facially implausible or because they find immortality unappealing, even while most LW types who actively discuss it online find it plausible and appealing.
I suspect this was written and is being upvoted in very different senses.
Getting offended is a way of discouraging antisocial behavior, perhaps even the primary way. Because this is a public good, it is probably underprovided. (And yet you go on to recommend against it! Frankly, I’m shocked.)
Getting offended for one’s own sake, alternatively, is probably a Pavlovian learned behavior because criticism feels bad. Being able to distinguish between different causes of offense seems like a useful skill, due to the costs of being offended that you point out.
More generally, one can better calibrate one’s offense-giving by training oneself to be offended at antisocial actions iff one’s offense actually has a deterrent effect. There is little utility in being offended by someone who is not right in front of you. There is also little utility in disapproving of people who do not care for your approval. Inasmuch as people care about being disapproved of even by those who are not present, however, you may wish to cultivate offense even then.
As the Theorem treats them, voters are already utility-maximizing agents who have a clear preference set which they act on in rational ways. The question: how to aggregate these?
It turns out that if you want certain superficially reasonable things out of a voting process from such agents (nothing gets chosen at random, it doesn’t matter how you cut up the choices, &c.), you’re in for disappointment. There isn’t actually a way to have a group that is itself rationally agentic in the precise way the Theorem postulates.
One bullet you could bite is having a dictator. Then none of the inconsistencies arise from having all these extra preference sets lying around because there’s only one and it’s perfectly coherent. This is very easily comparable to reducing all of your own preferences into a single coherent utility function.
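The trouble above isn’t in the source comment’s own words, but it can be made concrete with the standard illustration: a minimal sketch (the ballot profile is the classic Condorcet cycle) of how pairwise majority voting over individually rational voters yields an intransitive group preference.

```python
# Condorcet's paradox: three voters, each with a perfectly transitive
# preference ordering over options A, B, C, whose pairwise majority
# preference nonetheless cycles.
ballots = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    votes = sum(1 for b in ballots if b.index(x) < b.index(y))
    return votes > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# Each pairwise contest goes 2-1, so the "group preference" cycles
# A > B > C > A: no single coherent ordering describes the group.
```

This is the simplest case of the pathology Arrow generalizes: every individual is a well-behaved agent, yet the aggregate is not.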
I daresay this is the least terrible discussion of gender we’ve ever had. Good job, LW!
With respect to why some viscerally reject the idea, I think many see charity as a sort of morally repugnant paternalism that demeans its supposed beneficiaries. (I can sympathize with this, although it seems like a rather less pressing consideration than famine and plague.)
You might actually be able to cut ideologies up—or at least the instinctive attitudes that tend to precede them—according to how comfortable they are with charity and what they see it as encompassing: liberals think charity is great; socialists find charity uncomfortable and think it would be best if the poor took rather than passively received; libertarians either also find charity uncomfortable but extend that feeling to any system that socialists might hope to establish, or think charity is great but that the social democratic stuff liberals like isn’t charity.
It might also be possible to view this unease as stemming from formally representing charity as purchasing status. I give you some money, I feel great, you feel crummy (but eat.) It’s a bit like prostitution: one doesn’t have to deny that both parties are on net better off from any given transaction to hold that something exploitative is going on. For socialists and some libertarians, a world sustained by charity (whatever that is) is intolerable and people should instead take what is theirs (whatever that is.) Others think charity is great because—to put it, well, very uncharitably—it lets them be the johns. (One of Aristotle’s arguments against socialism is that if we owned all things in common, he wouldn’t be able to grow in generosity by lending slaves to his friends.)
I would guess that it is much easier for people to recategorize what falls into the “charity” bucket than to flip their valence on the bucket itself.
I’m new to all this singularity stuff—and as an anecdotal data point, I’ll say a lot of it does make my kook bells go off—but with an existential threat like uFAI, what does the awareness of the layperson count for? With global warming, even if most of any real solution involves the redesign of cities and development of more efficient energy sources, individuals can take some responsibility for their personal energy consumption or how they vote. uFAI is a problem to be solved by a clique of computer and cognitive scientists. Who needs to put thought into the possibility of misbuilding an AI other than people who will themselves engage in AI research? (This is not a rhetorical question—again, I’m new to this.)
There is, of course, the question of fundraising. (“This problem is too complicated for you to help with directly, but you can give us money...” sets off further alarm bells.) But from that perspective someone who thinks you’re nuts is no worse than someone who hasn’t heard of you. You can ramp up the variance of people’s opinions and come out better financially.
“”
If the AI was friendly, this is what I would expect it to do, and so (of the things my puny human brain can think of) the message that would most give me pause.
I’ve gotten so used to wacky speculation on this site that I thought this thread would be about, like, what if the quantum vacuum… was actually a robot???
“Tell me, Eben: how is’t, d’you think, that the planets are moved in their courses?”
“Why,” said Ebenezer, “’tis that the cosmos is filled with little particles moving in vortices, each of which centers on a star; and ’tis the subtle push and pull of these particles in our solar vortex that slides the planets along their orbs – is’t not?”
“So saith Descartes,” Burlingame smiled. “And d’you haply recall what is the nature of light?”
“If I have’t right,” replied Ebenezer, “’tis an aspect of the vortices – of the press of inward and outward forces in ’em. The celestial fire is sent through space from the vortices by this pressure, which imparts a transitional motion to little light globules – ”
“Which Renatus kindly hatched for that occasion,” Burlingame interrupted. “And what’s more he allows his globules both a rectilinear and a rotatory motion. If only the first occurs when the globules smite our retinae, we see white light; if both, we see color. And if this were not magical enough – mirabile dictu! – when the rotatory motion surpasseth the rectilinear, we see blue; when the reverse, we see red; and when the twain are equal, we see yellow. What fantastical drivel!”
“You mean ’tis not the truth? I must say, Henry, it sounds reasonable to me. In sooth, there is a seed of poetry in it; it hath an elegance.”
“Aye, it hath every virtue and but one small defect, which is, that the universe doth not operate in that wise.”
-John Barth, The Sot-Weed Factor
Some possibilities on dorky LW topics (as opposed to the topics I assume Vladimir et al. are referring to):
Not only are anti-natalist arguments correct, they are correct in such a way that we should be attempting to maximize x-risks.
Wireheading is necessary and sufficient for the fulfillment of true human CEV; people only claim to care about other values for signalling purposes.
A very strong form of error theory is correct; what people actually care about is qualia, even though there is no such thing. It doesn’t all add up to normality; just as bad metaphysics may lead people to think there’s a relevant difference between praying to God and attempting to summon demons, bad metaphysics makes people think there’s a relevant difference between donating a million dollars to Against Malaria Foundation and kidnapping and torturing a small child.
It would be very fun to have a thread where we attempted to come up with seductive, harmful ideas, and the chance of actually happening upon a very infectious and very harmful one would be very low.