LOL, any leads on which company this is?
These guys are explicitly looking for LWers, and have an implicit endorsement from Eliezer—applying won’t be a waste of time.
I’ll also say that censorship is a “hot button” issue for me, to the point that I’m not sure I want to continue helping SIAI. They went from nerdy-but-fun-to-talk-to/help to scary-cult-like-weirdos as soon as I read the article and thought about what EY’s reaction, and Roko’s removal, meant.
I’m seriously considering brainstorming a list of easy ways to increase existential risk by 0.0001%, and then performing one at random every time I hear such a reduction cited as the reason for silliness like this.
(Deleting this post, or the one I’m replying to, would count.)
Hey Emile,
Please check out my other comments on this thread before replying, as it sounds like my reasoning isn’t fully clear to you.
Re: policing an online community I agree that there are a lot of options to consider about how LW should be run, and that if people don’t like EY deleting their posts they’re free to try and set up their own LW in parallel. I don’t think it would be a good thing, or something we should encourage, but I agree it’s an option.
I also agree that some policing can help prevent a negative community from developing—that’s one reason I was glad to see that LW went with the reddit platform. It’s great at policing. I think it’s a big part of what makes LW so successful.
That said, I also think that users should try other options rather than simply giving up on LW if they don’t like what’s going on. That’s what I’m doing here.
Re: 0.0001% You didn’t misunderstand me about the whole post deletion thing. To my mind 0.0001% isn’t that much compared to what the post deletion means about the future of LW. All this cloak-and-dagger silliness hurts the community. I’m doing my part to avoid further damage.
No one is going to delete it (I think? :p), so it doesn’t really matter either way.
-wfg
Fair enough, go check out this article (and the Wikipedia article on MAD) and see if it doesn’t make a bit more sense.
I think the fact that he takes decision theory seriously is why he won’t delete it.
Ahh, okay good. LW & EY are awesome—as I mentioned in the rest of this thread, I don’t want to change any more than the smallest bit necessary to avoid future censorship.
-wfg
Fair point. Have you read the whole thread, especially the Wei Dai bit?
It could be I’m wrong. It could also be that EY (who is also human) will be wrong if he decides to censor another LW post/comment because my commitment wasn’t in place.
Unless you’re offering a solution, I’d rather put my money on the table than stick to FUDing in the corner :p
These are really interesting points. Just in case you haven’t seen the developments on the thread, check out the whole thing here.
I’m not sure that blackmail is a good name to use when thinking about my commitment, as it has negative connotations and usually implies a non-public, selfish nature.
I’m also pretty sure it’s irrational to ignore such things when making decisions. Perhaps not in a game theory sense, but absolutely in the practical life-theory sense.
As an example, our entire legal system is based on these sorts of credible threats.
If EY feels differently, I’m not sure what to say except that I think he’s being foolish. I see the game theory he’s pretending exempts him from considering others’ reactions to his actions; I just don’t think it’s rational to completely ignore new causal information.
But like I said earlier, I’m not saying he has to do anything, I’m just making sure we all know that an existential risk reduction of 0.0001% via LW censorship won’t actually be a reduction of 0.0001%.
(and though you deleted the relevant part, I’d also be down to discuss what a sane moderation system should be like.)
That one also has negative connotation, but it’s your thinking to bias as you please :p
To put that bit about the legal system more forcefully:
If EY really doesn’t include these sorts of things in his thinking (he disregards US laws for reasons of game theory?), we have much bigger things to worry about right now than 0.0001% censorship.
Fair enough, so you’re saying he only responds to credible threats from people who don’t consider whether he’ll respond to credible threats?
Threats and offers look identical to me after thinking about this some more—try swapping the two words in a couple of sentences.
They’re both simply telling someone that you’ll do something based on what they do.
Am I missing something?
(Please don’t vote unless you’ve read the whole thread found here)
Fair enough, we need to figure out a better way to navigate to the relevant part of “open thread” posts. The “load comments above” link doesn’t load comments below what’s above :-/
Usability, speaking the truth, and avoiding redundant comments are much more important to me than votes; if I could type it again I’d go with: please don’t reply unless you’ve read the whole thread.
Hey Jim,
It sounds like my post rubbed you the wrong way, that wasn’t my intention.
I do understand your math (world pop / a mil), did you understand mine?
Providing a credible threat reduces existential risk and saves lives… significantly more than the 6700 you cite.
Check out this article and the Wikipedia article on MAD, then reread the post you’re replying to and see if it makes more sense. The Wei Dai exchange might also help shed some light. If you ask questions here I’ll do my best to walk you through anything you get stuck on.
I don’t feel comfortable talking in too much detail here about my list. If anyone knows a good way for me to reveal one or two methods safely, I’m willing... but it’s not like they’re rocket science or anything.
-wfg
(edit: fixed awkward wording in last paragraph)
Right. Thanks for this post. People keep responding with knee-jerk reactions to the implementation rather than thought-out ones to the idea :-/
Not that I can blame them, this seems to be an emotional topic for all of us.
Just to be clear, I didn’t learn about this via the Roko link (nor did I say in PM that I did), I used the Roko link after finding out about it on messages higher up in this thread (July 2010 open thread pt 2). Without the link I would have used the LW search bar.
No biggie, I wouldn’t even mention it except that it seems to be your justification for voting weirdness.
Good questions, these were really fun to think about / write up :)
First off let’s kill a background assumption that’s been messing up this discussion: that EY/SIAI/anyone needs a known policy toward credible threats.
It seems to me that stated policies toward credible threats are irrational unless a large number of the people you encounter will change their behavior based on those policies. To put it simply: policies are posturing.
If an AI credibly threatened to destroy the world unless EY became a vegetarian for the rest of the day, and he was already driving to a BBQ, is eating meat the only rational thing for him to do? (It sure would prevent future credible threats!)
If EY planned on parking in what looked like an empty space near the entrance to his local supermarket, only to discover that on closer inspection it was a handicapped-only parking space (with a tow truck only 20 feet away), is getting his car towed the only rational thing to do? (If he didn’t an AI might find out his policy isn’t iron clad!)
This is ridiculous. It’s posturing. It’s clearly not optimal.
In answer to your question: Do the thing that’s actually best. The answer might be to give you 2x the resources. It depends on the situation: what SIAI/EY knows about you, about the likely effect of cooperating with you or not, and about the cost vs benefits of cooperating with you.
Maybe there’s a good chance that knowing you’ll get more resources makes you impatient for SIAI to make a FAI, causing you to donate more. Who knows. Depends on the situation.
(If the above doesn’t work when an AI is involved, how about EY makes a policy that only applies to AIs?)
In answer to your second paragraph I could withdraw my threat, but that would lessen my posturing power for future credible threats.
(har har...)
The real reason is I’m worried about what happens while I’m trying to convince him.
I’d love to discuss what sort of moderation is correct for a community like less wrong—it sounds amazing. Let’s do it.
But no way I’m taking the risk of undoing my fix until I’m sure EY’s (and LW’s) bugs are gone.
“Optimizing interaction techniques for social enjoyment” is too long and abstract—it signals that the group doesn’t understand what it’s setting out to do.
Perhaps “Social Optimizer”? It’s understandable and gets the overly nerdy angle w/o being confusing.