I think you need to make a clearer distinction between a hypothetical perfect Bayesian reasoner, who would know all possible hypotheses from the beginning and only narrow them down through observation, and the things humans do to try to approximate that, which sometimes involve going back and changing the prior when a new hypothesis has been thought of.
clone of saturn
I did not call anyone a murderer in this thread. I did ask about it a month ago but replies convinced me it wasn’t appropriate. Although I see how even using it as an analogy could cause confusion. I’ll edit my comment.
I’m not talking to AI researchers. I don’t think there’s any world where every AI researcher can be talked into quitting; there will always be people lining up around the block to screw over humanity for 8-plus-figure amounts of money. This is why I think they have to be stopped. Talking an individual researcher into quitting may be good for that individual’s conscience, but it only lengthens timelines by the difference between that person’s competence and their replacement’s, which probably isn’t enough to be worth much.
I don’t see why movement-building would be more effective if it’s highly palatable to contributors at labs; it seems more like the opposite is true, but feel free to explain.
This is drawing a false equivalence between AI risk and random crackpot beliefs, which is dishonest. You won’t find accomplished scientists saying that COVID vaccines, GMOs, or climate change have a substantial likelihood of killing everyone. Also, it’s not relevant whether someone who’s going to kill me is evil, and I’m not talking about that; I want them to be stopped from killing me whether they’re evil or not, and whether they can be talked out of it or not.
Why are you asking me to put my own desire not to be killed along with everyone I care about at the bottom of my priorities list, below protecting the feelings of AI researchers? That is an insane request and I’m obviously not willing to do it.
I was vague because I don’t think it’s actually prudent to threaten anyone at this time, but I do think it’s important to defend the possibility of talking about it. Of course I know it’s possible to be counterproductively aggressive, but I guess I’m getting a little edgy because my sense is that almost all people reading LW err on the side of being way too conflict-avoidant. Being ready and willing to fight (including nonviolent resistance) can have advantages even if no fighting actually occurs, but it requires, among other things, being able to identify enemies.
Totally agree.
Pushing toward ASI isn’t actually a common thing, only a tiny fraction of people are doing that. I think it’s unlikely, but if my words cause someone to quit pushing toward ASI but feel morally licensed to do other bad things, I think I’d still consider that a win, since pushing toward ASI is one of the worst things a person can do. I think there are people out there who at-least-somewhat-terminally value murder, but are prevented from committing murder by moral disapprobation and threat of punishment, so it’s important not to push those things outside the Overton window for fear of causing bad vibes.
I used the passive voice because identifying who does the stopping isn’t directly relevant to the topic of whether it’s good or bad to promote enmity.
People who are attempting to cause serious harm need to be stopped. If a mad scientist is performing life-threatening experiments on people without their consent, it’s not reasonable to look for mutually beneficial arrangements with them; they need to be restrained and put in prison. I’m not open to peaceful coexistence with people who insist on building something that will likely destroy the human race. No compromise is possible; they simply can’t be allowed to build it. If they won’t stop voluntarily, they absolutely are enemies.
I’m kind of surprised that you encountered two different people without basic background knowledge of how gas heaters work (i.e., that exhaust goes out the exhaust pipe, and exhaust can be poisonous). Maybe you could all benefit from reading The Way Things Work by David Macaulay.
Sorry, I didn’t intend to fearmonger. I agree with pretty much everything else in this comment. European social democracies seem pretty nice, and communism isn’t necessarily always the worst thing in the world (although I usually avoid saying that on LW because it gets me downvoted). However, communism didn’t end up working out anything like the way early communists envisioned, and countries that ended up communist or social democratic had to go through specific historical events that ended up making them that way. Right now billionaires seem unwilling to make concessions because they think under the current circumstances they will win in a showdown with the public, and I don’t really see why they’re wrong. Why do you think they’re wrong?
The way I see it, society is basically a big ultimatum game: the rich get to steer it in whatever way they choose, and the masses can either accept what they do or smash everything and go back to the “stone age”. So that sets the terms of the ongoing negotiation. It’s hard to have something like a wealth ceiling because it’s hard for millions of people to commit to being okay with someone having $100,000,000 but blowing up the world if someone has $100,000,001.
No, it’s not possible for it to be negative. You’re not allowed to murder people even if you save an equal or greater number. If you invented a machine that had a 49% chance of killing me and a 51% chance of making me immortal, and you pointed it at me without permission, you would be committing a heinous crime and I’d be perfectly justified in self-defense. AI CEOs are doing the same thing at a much larger scale.
Along similar lines, should we consider Sam Altman, Dario Amodei, etc. to be more evil than Hitler, in terms of the expected number of people they will murder?
There’s no objective answer to whether acausal extortion works or not, it’s a choice you make. You can choose to act on thoughts about acausal extortion and thereby create the incentive to do acausal extortion, or not. I would recommend not doing that.
It works for getting the typical LMArena user to click the like button, but it’s not clear that it works for persuasion or anything else. Personally, I find the style very off-putting and usually stop reading when I notice it.
This is an important topic, but this post seems like it was written by AI.
I don’t know how people get the idea that this type of reckless behavior is anything remotely like what Gwern’s essay recommends.
I would go further: I don’t think liberal democracy even makes sense in a world with ASI in it. An ASI would find it easy to manipulate public opinion to achieve whatever political outcomes it wanted, and to manipulate people’s decision-making to confiscate any property it wanted, even within the constraint of not breaking any laws. So the substantive importance of liberal democracy in this scenario is basically nil.