I had watched the whole thing and came away with a very different impression. From where I’m standing, Connor is just correct about everything he said, full stop. Beff made a few interesting points but was mostly incoherent, equivocating, and/or evasive. Connor tried very hard for hours to go for his cruxes rather than get lost in the weeds, but Beff wouldn’t let him. Maybe Connor could have called him on it more skillfully, but I don’t think I could have done any better. Maybe he’ll try a different tack if there’s a next time. The moderator really should have intervened.
At some point they start building their respective cases—what if you had a false vacuum device? Would we be fucked? Should we hide it? What should we do? And on Beff’s side—what if there are dangerous aliens?
For the love of god, please talk about the actual topic.
This is the actual topic. It’s the Black Marble thought experiment by Bostrom, and the crux of the whole disagreement! Later on Connor called it rolling death on the dice. Non-ergodicity. Beff’s whole position seems to be to redefine “the good” to be “acceleration of growth”, but Connor wants to add “not when it kills you!”
About 50 minutes in, Connor goes on the offensive in a way that, to me, is extremely blatant slippery slope reasoning. The main point is that if you care about growth, you cannot care about anything else, because of course everyone’s views are the extremist parodies of themselves. Embarrassing tbh.
Again, Connor is simply correct here. This is not a novel argument. It’s Goodhart’s Law. You get what you optimize, even if it’s only a proxy for what you want. The tails come apart. You can overshoot and get your proxy rather than your target. Remember, Beff’s position: “growth = good”, which is obviously (to me, Connor, and Eliezer) false. Connor tried very hard to lead Beff to see why, but Beff was more interested in muddying the waters than achieving clarity or finding cruxes.
He also points out, many many times, that “is” != “ought”, which felt like virtue signalling? Throwing around shibboleths? Not quite sure. But not once was it a good argument as far as I can tell.
Again, Connor is simply correct. This isn’t about virtue signaling at all; that completely misses the point. Beff is equivocating. Connor is trying to point out the distinct definitions required to separate the concepts so he can move the argument forward to the next step. Beff just wasn’t listening.
“Should the blueprints for F16 be open-sourced? Answer the question. Answer the question! Oh I was just trying to probe your intuition, I wasn’t making a point”
Immediately followed by “If an AI could design an F16, should it be open-sourced?”
Is there something wrong with trying to understand the other position before making a point? No, and Beff should have tried harder to understand the other position. Kudos to Connor for trying. This is the Black Marble again (maybe a gray one in this case). Beff seems to have the naive position that open source is an unmitigated good, which is obviously (to me and Connor) false, because infohazards. I don’t think F16s were a great example, but it could have been any number of other things.
So e/acc should want to collapse the false vacuum?
Holy mother of bad faith. Rationalists/lesswrongers have a problem with saying obviously false things, and this is one of those.
Totally unfair characterization. I think this is Connor simply not understanding Beff’s position, rather than Connor doing anything underhanded. The question was not simply rhetorical, and the answer was important for updating Connor’s understanding (of Beff’s position). From Connor’s point of view, an intelligence explosion eats most of the future light cone anyway, so it’s not that different from a false vacuum collapse: everybody dies, and the future has no value. There are some philosophies that actually bite the bullet to remain consistent in the limit and actually want all humans to die. (Nick Land came up.) Connor thinks Beff’s philosophy might be one of those on reflection, but it’s not for the reason Connor thought here.
It’s in line with what seems like Connor’s debate strategy—make your opponent define their views and their terminal goal in words, and then pick apart that goal by pushing it to the maximum. Embarrassing.
Again, this is what Eliezer, Connor, and I think is the obvious thing that would happen once an unaligned superintelligence exists: it pushes its goals to the limit at the expense of all we value. This is not Connor being unfair; this is literally his position.
Libertarians are like house cats, fully dependent on a system they neither fully understand nor appreciate.
Thanks for that virtue signal, very valuable to the conversation.
OK, maybe that’s a signal (it’s certainly a quip), but the point is valid and stands, and Connor is correct. I am sympathetic to the libertarian philosophy, but the naive application is incomplete and cannot stand on its own.
After about 2 hours and 40 minutes of the “debate”, it seems we finally got to the point!
Finally? Connor has been talking about this the whole time. Black marble!
If I were to respond to this myself, I’d say—at some point, depending how technology progresses, we might very well need to pause, slow down, or stop entirely.
Yep. That was yesterday. Connor would be interested in talking all about why he thinks that and (as evidenced by the next quote) wants to know Beff’s criteria for when that point is, so Connor can move on and either explain why that point has already passed, or point out that Beff doesn’t have any criteria and will just go ahead and draw the black marble without even trying to prepare for it. (Which means everybody dies.)
To which Connor has another one of the worst debate arguments ever:
“So when is the right time? When do we know?”
Connor is correctly making a very legit point here. There are no do-overs. If you draw the black marble before you’re prepared for it, then everybody dies. If you refuse to even think about how to prepare for it and not only keep drawing marbles but try to draw them faster and faster, then by default you die, and sooner and sooner! This is not unfair and this is not a bad argument. This is legitimately Connor’s position (and mine and Bostrom’s).
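(To make the non-ergodicity point concrete, here is a toy model of my own, not a calculation from the debate: suppose each new technology “draw” independently has some small probability p of being a black marble. Then the chance of never drawing one after n draws is (1 − p)^n, which goes to zero as n grows, and accelerating only means you reach a large n sooner.)

```python
# Toy model of the black marble / non-ergodicity argument. This is my own
# illustration under the assumptions above, not anything Connor or Beff computed.
def survival_probability(p: float, n_draws: int) -> float:
    """Chance of never drawing a black marble across n_draws independent draws."""
    return (1.0 - p) ** n_draws

for p in (0.001, 0.01):
    for n in (10, 100, 1000):
        print(f"p={p:g}  draws={n:4d}  P(survive)={survival_probability(p, n):.3g}")
# Even at p = 0.1% per draw, 1000 draws leaves ~37% survival odds;
# at p = 1% per draw it is roughly 0.004%.
```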
I don’t know when is the right time to stop overpopulation on Mars.
That is a very old, very bad argument. If NASA discovered a comet big and fast enough to cause a mass extinction event that they estimated to have a 10% chance of colliding with Earth in 100 years, we shouldn’t start worrying about it until it’s about to hit us. Right? Or from the glass-half-full perspective, we’ve got a 90% chance of surviving anyway, so let’s just forget about the whole thing. Right? Do you understand how absurd that sounds?
But Connor (and Eliezer and I (and Hinton)) don’t think we have 100 years. We think it’s probably decades or less, maybe much less. And Connor (and Eliezer and I) don’t think we have a 90% chance of surviving by default. Quite the reverse, or even worse.
In response, Connor resorts to yelling that “You don’t have a plan!”
No shit. Not only that, but e/acc seems to be trying very hard to make the problem worse, by giving us even less time to prepare and sabotaging efforts to buy more.
This is the point where we should move on to narrowing down why we need to have a plan for overpopulation on Mars right now. Perhaps we do.
Yes. That would have been good. I could tell Connor was really trying to get there. Beff wasn’t listening though.
This was largely a display of tribal posturing via two people talking past each other.
Maybe describes Beff. Connor tried. Could’ve been better, but we have to start somewhere. Maybe they’ll learn from their mistakes and try again.
Poor performance from both of them, but particularly Connor’s behavior is seriously embarrassing to the AI safety movement.
I was embarrassed by Connor’s headshot comment, which I thought was inappropriate. Thought experiments that could be interpreted as veiled death threats against one’s interlocutor are just plain rude. Could have been worded differently. I don’t think Connor actually meant it that way, and perfection is an unreasonable standard in a frustrating three-hour slog of a debate. But still bad form.
Besides that (which you didn’t even mention), I cannot imagine what Connor possibly could have done differently to meet your unstated standards, given his position. Should he have not gone for cruxes? Because that’s how progress gets made. Debaters can easily waste inordinate amounts of time on points that neither cares about (that don’t matter) because they happened to come up. Connor was laser focused on making some actual progress in the arguments, but Beff was being so damn evasive that he managed to waste a couple of hours anyway. It’s a shame, but this is so not on Connor. What do you even want from him?
For what it’s worth, I think you’re approaching this in good faith, which I appreciate. But I also think you’re approaching the whole thing from a very, uh, lesswrong.com-y perspective, quietly making assumptions and using concepts that are common here, but not anywhere else.
I won’t reply to every individual point, because there’s lots of them, so I’m choosing the (subjectively) most important ones.
This is the actual topic. It’s the Black Marble thought experiment by Bostrom,
No it’s not, and obviously so. The actual topic is AI safety. It’s not false vacuum, it’s not a black marble, or a marble of any color for that matter. Connor wasn’t talking about the topic, he was building up to the topic using an analogy, a more abstract model of the situation. Which might be fair enough, except you can’t just assert this model. I’m sure saying that AI is a black marble will be accepted as true around here, but it would obviously get pushback in that debate, so you shouldn’t sneak it past quietly.
Again, Connor is simply correct here. This is not a novel argument. It’s Goodhart’s Law.
As I’m pretty sure I said in the post, you can apply this reasoning to pretty much any expression of values or goals. Let’s say your goal is stopping AI progress. If you’re consistent, that means you’d want humanity to go extinct, because then AI would stop. This is the exact argument that Connor was using, it’s so transparent and I’m disappointed that you don’t see it.
Again, this is what Eliezer, Connor, and I think is the obvious thing that would happen once an unaligned superintelligence exists: it pushes its goals to the limit at the expense of all we value. This is not Connor being unfair; this is literally his position.
Great! So state and defend and argue for this position, in this specific case of an unaligned superintelligence! Because the way he did it in the debate was just by extrapolating whatever views Beff expressed, without care for what they actually are, and showing that when you push them to the extreme, they fall apart. Because obviously they do, because of Goodhart’s Law. But you can’t dismiss a specific philosophy via a rhetorical device that can dismiss any philosophy.
Finally? Connor has been talking about this the whole time. Black marble!
Again, I extremely strongly disagree, but I suspect that’s a mannerism common in rationalist circles, using additional layers of abstraction and pretending they don’t exist. Black marble isn’t the point of the debate. AI safety is. You could put forward the claim that “AI = black marble”. I would lean towards disagreeing, I suspect Beff would strongly disagree, and then there could be a debate about this proposition.
Instead, Connor implicitly assumed the conclusion, and then proceeded to argue the obvious next point that “If we assume that AI black marble will kill us all, then we should not build it”.
Duh. The point of contention isn’t that we should destroy the world. The point of contention is that AI won’t destroy the world.
Connor is correctly making a very legit point here.
He’s not making a point. He’s again assuming the conclusion. You happen to agree with the conclusion, so you don’t have a problem with it.
The conclusion he’s assuming is: “Due to the nature of AI, it will progress so quickly going forward that already at this point we need to slow down or stop, because we won’t have time to do that later.”
My contention with this would be “No, I think AI capabilities will keep growing progressively, and we’ll have plenty of time to stop when that becomes necessary.”
This is the part that would have to be discussed. Not assumed.
That is a very old, very bad argument.
Believe it or not, I actually agree. Sort of. I think it’s not good as an argument, because (for me) it’s not meant to be an argument. It’s meant to be an analogy. I think we shouldn’t worry about overpopulation on Mars because the world we live in will be so vastly different when that becomes an immediate concern. Similarly, I think we shouldn’t (overly) worry about superintelligent AGI killing us, because the state of AI technology will be so vastly different when that becomes an immediate concern.
And of course, whether or not the two situations are comparable would be up for debate. I just used this to state my own position, without going the full length to justify it.
Yes. That would have been good. I could tell Connor was really trying to get there. Beff wasn’t listening though.
I kinda agree here? But the problem is on both sides. Beff was awfully resistant to even innocuous rhetorical devices, which I’d understand if that started late in the debate, but… it took him like idk 10 minutes to even respond to the initial technology ban question.
At the same time Connor was awfully bad at leading the conversation in that direction. Let’s just say he took the scenic route with a debate partner who made it even more scenic.
Besides that (which you didn’t even mention), I cannot imagine what Connor possibly could have done differently to meet your unstated standards, given his position. [...] What do you even want from him?
Great question. Ideally, the debate would go something like this.
B: So my view is that we should accelerate blahblah free energy blah AI blah [note: I’m not actually that familiar with the philosophical context, thermodynamic gods and whatever else; it’s probably mostly bullshit and imo irrelevant]
C: Yea, so my position is if we build AI without blah and before blah, then we will all die.
B: But the risk of dying is low because of X and Y reasons.
C: It’s actually high because of Z, I don’t think X is valid because W.
And keep trying to understand at what point exactly they disagree. Clearly they both want humanity/life/something to proliferate in some capacity, so even establishing that common ground in the beginning would be valuable. They did sorta reach it towards the end, but at that point the whole debate was played out.
Overall, I’m highly disappointed that people seem to agree with you. My problem isn’t even whether Connor is right, it’s how he argued for his positions. Obviously people around here will mostly agree with him. This doesn’t mean that his atrocious performance in the debate will convince anyone else that AI safety is important. It’s just preaching to the choir.
As I’m pretty sure I said in the post, you can apply this reasoning to pretty much any expression of values or goals. Let’s say your goal is stopping AI progress. If you’re consistent, that means you’d want humanity to go extinct, because then AI would stop. This is the exact argument that Connor was using, it’s so transparent and I’m disappointed that you don’t see it.
I see what you’re saying, and yes, fully general counterarguments are suspect, but that is totally not what Connor was doing. OK, sure, instrumental goals are not terminal values. Stopping AI progress is not a terminal value. It’s instrumental, and hopefully temporary. Bostrom himself has said that stopping progress on AI indefinitely would be a tragedy, even if he does see the need for it now. That’s why the argument can’t be turned on Connor.
The difference is, and this is critical, Beff’s stated position (as far as Connor or I can tell) is that acceleration of growth equals the Platonic Good. This is not instrumental for Beff; he’s claiming it’s the terminal value in his philosophy, i.e., the way you tell what “Good” is. See the difference? Connor thinks Beff hasn’t thought this through, and this would be inconsistent with Beff’s moral intuitions if pressed. That’s the Fisher-Price Nick Land comment. Nick bit the bullet and said all humans die is good, actually. Beff wouldn’t even look.
No it’s not, and obviously so. The actual topic is AI safety. It’s not false vacuum, it’s not a black marble, or a marble of any color for that matter.
It is, and Connor said so repeatedly throughout the conversation. AI safety is a subtopic, a special case, of Connor’s main thrust, albeit the most important one. (Machine transcript, emphasis mine.)
Non-ergodicity, not necessarily AI:
The world is not ergodic, actually. It’s actually a very non-ergodic you can die. [...] I’m wondering if you agree with this, forget [A]I for a moment that at some point not saying it’s [A]I just at some point we will develop technology that is so powerful that if you fuck it up, it blows up everybody.
Connor explicitly calls out AGI as not his main point:
The way I see things is, is that never mind. Like, I know AGI is the topic I talk about the most and whatever comes the most pressing one, but [A]I actually AGI is not the main thing I care about. The main thing I care about is technology in general, and of which AGI is just the most salient example in the current future. You know, 50 if I was born 50 years ago, I would care about nukes [...] And the thing I fundamentally care about is the stewardship of technology. [...] of course things can go bad. It’s like we’re[...] mimetically engineering, genetically engineering, super beings. Like, of course this is dangerous. Like, if we were genetically engineering super tigers, people would be like, hey, that seems maybe a bit, but let let’s talk about this
Beff starts talking before he could finish, so skipping ahead a bit:
The way I see things is, is that our civilization is just not able to handle powerful technology. I just don’t trust our institutions. Our leaders are, you know, distributed systems. Anything with, you know, hyper powerful technology at this point in time, this doesn’t mean we couldn’t get to systems that could handle this technology without catastrophic or at least vastly undesirable side effects. But I don’t think we’re there.
This is Connor’s mindset in the whole debate. Backing up a bit:
But I want to make clear again, just the point I’m trying to make here. Is that the point I’m trying to make here is, is that predictably, if you have a civilization that doesn’t even try, that just accelerates fast as possible, predictably guaranteed, you’re not going to make it. You’re definitely not going to make it. At some point, you will develop technology that is too powerful to handle if you just have the hands of random people, and if you do it as unsafe as possible, eventually an accident will happen. We almost nuked ourselves twice during the Cold War, where only a single person was between a nuke firing and it not happening. If the same thing happens with, say, superintelligence or some other extremely powerful technology which will happen in your scenario sooner or later. You know, maybe it goes well for 100 years, maybe it goes well for a thousand years, but eventually your civilization is just not going to make it.
Also the rolling death comment I mentioned previously. And the comment about crazy wackos.
Connor is correctly making a very legit point here. There are no do-overs. If you draw the black marble before you’re prepared for it, then everybody dies. If you refuse to even think about how to prepare for it and not only keep drawing marbles but try to draw them faster and faster, then by default you die, and sooner and sooner! This is not unfair and this is not a bad argument. This is legitimately Connor’s position (and mine and Bostrom’s).
So just to make this clear: a “black marble” is some kind of asymmetric technology. For example, a machine gun isn’t a black marble because for every gun that a person could buy or build themselves, large governments will have 100. A pandemic virus with a high fatality rate after a lengthy delay, which didn’t mutate to become less deadly,* would be a black marble, because current technology makes it cheap and easy to build any string of RNA you want, while the hospital care to save one person is extremely labor and material intensive, and often fails. *(Evolutionary forces want to make the virus shorter, removing its ability to kill after a delay, which is why this likely won’t work.)
You feel confident that the total set of “marbles” between (1) right now and (2) when humans develop off-planet or interstellar colonies contains at least one black marble. And therefore, if humans draw the marbles faster and faster, planning to leave the planet soon, they will pull a black one.
Ok. And then the counterargument would be that you’re probably wrong, because no black marbles have been drawn yet, and you would need to prove they exist before any action is taken about them? (And not to get sucked too far into the weeds, but most claims about a “superintelligence” are kinda like a fictional black marble that may simply not be that effective.)
Beff’s whole love story with capitalism and thermodynamics seems to me like simply an argument that, since the start of the industrial revolution, technology has been net good and no black marbles were drawn, therefore the right choice is to continue. And it’s a good argument without all the baggage, because it’s empirical. (And a fair counter would be how technology has only been ‘net good’ when various actions, mostly by governments, stopped it from only enriching the owners of coal mines while the miners lost their limbs and died from lung disease...)
I want someone who has significant experience in highly adversarial debates, where the point is to communicate to the audience why you think your interlocutor is not a good choice to ally with, and which has nothing to do with epistemics unless you can first establish that social context. Connor failed to establish that social context in the presence of someone with high skill at destroying it. Beff won the debate, even though his arguments sucked. This does not make me agree with him.
But I don’t think Beff would have accepted the debate if he didn’t expect to be able to win. I’m really frustrated with folks here for their blindness to how lopsided the debate was socio-emotionally.
What I’d look forward to is a debate with someone with significant experience establishing the epistemics frame, like, you know, an experienced professor. E.g., Bengio.
OK, that’s a fair enough ask. Do you have an alternative candidate in mind with approximately Connor’s position and said experience? If wishes were horses, beggars would ride. Connor understands the arguments and the epistemics, to the point that (from my perspective) he’s doing an even better job at live debates than Yudkowsky. (You might not consider that a high bar.) The only way he gets more debate skill is more practice, or perhaps much more specific guidance than you have given. Maybe it doesn’t have to be public, but would Beff have agreed otherwise? And who would critique them?
I’m really frustrated with folks here for their blindness to how lopsided the debate was socio-emotionally.
Not obviously true to me, although admittedly bad if so. I accept that my perspective might be biased here, as I went in already somewhat familiar with Connor’s arguments. But I can only call what I’m capable of seeing. What’s your evidence? Anything legible to me? Beff’s fan club in the YouTube comments (or on Twitter/X)? That’s not a good indicator of how a neutral party would see it, although I can see the comments themselves maybe skewing their perspective.
I do not have an alternate candidate in mind besides Bengio, and I don’t know if we should expect to be able to get him to have a debate like this. If Connor were to ruthlessly drill this in debates with people who are capable of acting on Beff’s level of consistent bad faith but are actually friendly, that might do the trick, not sure. But he has to be open to feedback that I currently model him as not being open to: things like “that argument structure will not work”.
(It might be more effective to have Bengio debate Connor in a format like this, actually.)
The marginal fan club member is who I’m concerned about, so yeah, the edge of Beff’s fan club is my threat model. Neutral parties don’t matter significantly in my model; what matters is how many high-skill technical people are following the instructions of the conceptual entity Beff represents an instance of.
That seems like a pretty uphill battle, because they already kind of vibe with Beff, and this would naturally prejudice them. How big/dangerous is e/acc, really? Are they getting worse? Maybe we should be choosing different battles.
Connor also has fans (like me) and Beff utterly failed to move me. Would Beff draw away the marginal rationalist with his performance? I kind of think not. But that’s maybe not the part that matters.