https://web.archive.org/web/20230304203835/http://www.hpmor.com/
You Need More Money
How to Lose a Fair Game
Charting Is Mostly Superstition
Does CFAR “eat its own dogfood”? Do the cognitive tools help in running the organization itself? Can you give concrete examples? Are you actually outperforming comparable organizations on any obvious metric due to your “applied rationality”? (Why ain’tcha rich? Or are you?)
Market Misconceptions
[Question] Should I take glucosamine?
I had watched the whole thing and came away with a very different impression. From where I’m standing, Connor is just correct about everything he said, full stop. Beff made a few interesting points but was mostly incoherent, equivocating, and/or evasive. Connor tried very hard for hours to go for his cruxes rather than get lost in the weeds, but Beff wouldn’t let him. Maybe Connor could have called him on it more skillfully, but I don’t think I could have done any better. Maybe he’ll try a different tack if there’s a next time. The moderator really should have intervened.
At some point they start building their respective cases—what if you had a false vacuum device? Would we be fucked? Should we hide it? What should we do? And on Beff’s side—what if there are dangerous aliens?
For the love of god, please talk about the actual topic.
This is the actual topic. It’s the Black Marble thought experiment by Bostrom, and the crux of the whole disagreement! Later on Connor called it rolling death on the dice. Non-ergodicity. Beff’s whole position seems to be to redefine “the good” to be “acceleration of growth”, but Connor wants to add “not when it kills you!”
About 50 minutes in, Connor goes on the offensive in a way that, to me, is extremely blatant slippery slope reasoning. The main point is that if you care about growth, you cannot care about anything else, because of course everyone’s views are the extremist parodies of themselves. Embarrassing tbh.
Again, Connor is simply correct here. This is not a novel argument. It’s Goodhart’s Law. You get what you optimize, even if it’s only a proxy for what you want. The tails come apart. You can overshoot and get your proxy rather than your target. Remember, Beff’s position: “growth = good”, which is obviously (to me, Connor, and Eliezer) false. Connor tried very hard to lead Beff to see why, but Beff was more interested in muddying the waters than achieving clarity or finding cruxes.
He also points out, many, many times, that “is” != “ought”, which felt like virtue signalling? Throwing around shibboleths? Not quite sure. But not once was it a good argument as far as I can tell.
Again, Connor is simply correct. This isn’t about virtue signaling at all; that completely misses the point. Beff is equivocating. Connor is trying to point out the distinct definitions required to separate the concepts so he can move the argument forward to the next step. Beff just wasn’t listening.
“Should the blueprints for F16 be open-sourced? Answer the question. Answer the question! Oh I was just trying to probe your intuition, I wasn’t making a point”
Immediately followed by “If an AI could design an F16, should it be open-sourced?”
Is there something wrong with trying to understand the other position before making a point? No, and Beff should have tried harder to understand the other position. Kudos to Connor for trying. This is the Black Marble again (maybe a gray one in this case). Beff seems to have the naive position that open source is an unmitigated good, which is obviously (to me and Connor) false, because infohazards. I don’t think F16s were a great example, but it could have been any number of other things.
So e/acc should want to collapse the false vacuum?
Holy mother of bad faith. Rationalists/lesswrongers have a problem with saying obviously false things, and this is one of those.
Totally unfair characterization. I think this is Connor simply not understanding Beff’s position, rather than Connor doing anything underhanded. The question was not simply rhetorical, and the answer was important for updating Connor’s understanding (of Beff’s position). From Connor’s point of view, an intelligence explosion eats most of the future light cone anyway, so it’s not that different from a false vacuum collapse: everybody dies, and the future has no value. There are some philosophies that actually bite the bullet to remain consistent in the limit and want all humans to die. (Nick Land came up.) Connor suspects Beff’s philosophy might be one of those on reflection, but not for the reason Connor assumed here.
It’s in line with what seems like Connor’s debate strategy—make your opponent define their views and their terminal goal in words, and then pick apart that goal by pushing it to the maximum. Embarrassing.
Again, this is what Eliezer, Connor, and I think is the obvious thing that would happen once an unaligned superintelligence exists: it pushes its goals to the limit at the expense of all we value. This is not Connor being unfair; this is literally his position.
Libertarians are like house cats, fully dependent on a system they neither fully understand nor appreciate.
Thanks for that virtue signal, very valuable to the conversation.
OK, maybe that’s a signal (it’s certainly a quip), but the point is valid, it stands, and Connor is correct. I am sympathetic to the libertarian philosophy, but the naive application is incomplete and cannot stand on its own.
After about 2 hours and 40 minutes of the “debate”, it seems we finally got to the point!
Finally? Connor has been talking about this the whole time. Black marble!
If I were to respond to this myself, I’d say—at some point, depending how technology progresses, we might very well need to pause, slow down, or stop entirely.
Yep. That was yesterday. Connor would be interested in talking all about why he thinks that and (as evidenced by the next quote) wants to know Beff’s criteria for when that point is, so Connor can move on and either explain why that point has already passed, or point out that Beff doesn’t have any criteria and will just go ahead and draw the black marble without even trying to prepare for it. (Which means everybody dies.)
To which Connor has another one of the worst debate arguments ever: “So when is the right time? When do we know?”
Connor is correctly making a very legit point here. There are no do-overs. If you draw the black marble before you’re prepared for it, then everybody dies. If you refuse to even think about how to prepare for it and not only keep drawing marbles but try to draw them faster and faster, then by default you die, and sooner and sooner! This is not unfair and this is not a bad argument. This is legitimately Connor’s position (and mine and Bostrom’s).
I don’t know when is the right time to stop overpopulation on Mars.
That is a very old, very bad argument. If NASA discovered a comet big and fast enough to cause a mass extinction event that they estimated to have a 10% chance of colliding with Earth in 100 years, we shouldn’t start worrying about it until it’s about to hit us. Right? Or from the glass-half-full perspective, we’ve got a 90% chance of surviving anyway, so let’s just forget about the whole thing. Right? Do you understand how absurd that sounds?
But Connor (and Eliezer and I (and Hinton)) don’t think we have 100 years. We think it’s probably decades or less, maybe much less. And Connor (and Eliezer and I) don’t think we have a 90% chance of surviving by default. Quite the reverse, or even worse.
In response, Connor resorts to yelling that “You don’t have a plan!”
No shit. Not only that, but e/acc seems to be trying very hard to make the problem worse, by giving us even less time to prepare and sabotaging efforts to buy more.
This is the point where we should move on to narrowing down why we need to have a plan for overpopulation on Mars right now. Perhaps we do.
Yes. That would have been good. I could tell Connor was really trying to get there. Beff wasn’t listening though.
This was largely a display of tribal posturing via two people talking past each other.
Maybe describes Beff. Connor tried. Could’ve been better, but we have to start somewhere. Maybe they’ll learn from their mistakes and try again.
Poor performance from both of them, but particularly Connor’s behavior is seriously embarrassing to the AI safety movement.
I was embarrassed by Connor’s headshot comment, which I thought was inappropriate. Thought experiments that could be interpreted as veiled death threats against one’s interlocutor are just plain rude. Could have been worded differently. I don’t think Connor actually meant it that way, and perfection is an unreasonable standard in a frustrating three-hour slog of a debate. But still bad form.
Besides that (which you didn’t even mention), I cannot imagine what Connor possibly could have done differently to meet your unstated standards, given his position. Should he have not gone for cruxes? Because that’s how progress gets made. Debaters can easily waste inordinate amounts of time on points that neither cares about (that don’t matter) because they happened to come up. Connor was laser focused on making some actual progress in the arguments, but Beff was being so damn evasive that he managed to waste a couple of hours anyway. It’s a shame, but this is so not on Connor. What do you even want from him?
I stumbled across a comment about efficient markets in an old Michael Vassar interview
I think that when I look at economics (another great example) you have a series of papers by Larry Summers and Brad DeLong in the early 90s that as far as I can tell drive a stake through the heart of the idea of efficient markets. They show that if you make the extremely minimal assumption of people not being perfectly capable of assessing how much risk is involved in an investment, there should be a systematic tendency for markets to become less efficient with time.
And as far as I can tell this behavior—this paper despite being done by pretty much the top people in economics—just got ignored and had no impact on the—or this series of papers—had no impact on the progression of the field. It was logically ironclad. That’s the sort of thing I basically expect from most sciences in the modern world—almost everything but applied physics—and it’s (you know) this is a particularly clear case though because you have essentially the strongest possible argument done by the most prestigious possible people with just no recollection of it ever having happened in the profession.
Does anybody know what papers he’s talking about? (I’m not sure if I transcribed the names properly.) They seem very relevant to this discussion.
How could blood clotting develop over time, step by step
Step-by-Step Evolution of Vertebrate Blood Coagulation.
Irreducible complexity arguments are pretty unconvincing at this point. The Theory of Evolution has already been proven beyond any reasonable doubt, and you would know this if you had objectively looked at both sides, as I have. Since you don’t already know this, I don’t think the dispute can be resolved at this level of argument. We have to take a step back and look at our disagreement in terms of epistemology.
Was that your true rejection? Hypothetically, if science had an answer to all of your irreducible complexity objections, would you then accept evolution, or is there some deeper reason you’re not telling us? Are you going where the evidence leads you, or did you write the bottom line first and work backwards from there?
Look, i know there are several atheists here that like to hide on this forum and erase any comment they don’t like, but i believe you can be open-minded
You must be new here. Nobody is hiding. The last community survey I saw shows we’re at least 70% atheist. If you’re out to “save our souls” and aren’t just trolling for fun, then I suggest you learn how to talk to us first. Posts that make obvious mistakes in reasoning that were already covered in the Sequences are going to get downvoted very quickly. Read what we’re about and play by our rules, because that’s the only way we’re going to listen.
What mistakes have you made at CFAR that you have learned the most from? (Individually or as an organization?)
I have never liked the “rat” nickname. I’m not a filthy rodent. I’ve never heard the term “rationalish” before now.
I’ve always resisted using the term “rationalism”. I feel like “-ism” is a misstep into politics (and already the name of the 17th-century anti-empiricists). We practice “epistemic rationality” and “instrumental rationality”, together, “rationality”, not “rationalism”.
He’s back. Again. Maybe.
https://twitter.com/OpenAI/status/1727205556136579362
We have reached an agreement in principle for Sam [Altman] to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.
We are collaborating to figure out the details. Thank you so much for your patience through this.
Anyone know how Larry or Bret feel about x-risk?
While I do think that rhetoric is a skill worth developing, don’t forget that rhetorical tricks are Dark Arts.
Many so-called “Logical Fallacies” are unfortunately applied to arguments that are valid inferences. On priors, you are better off trusting experts in their field than laymen. But this is called the “argument from authority fallacy”. The correct counter is Argument Screens Off Authority. And so on. Learning Counterspells is no substitute for grokking Bayes, and may even be harmful if they just give you excuses not to listen or more ammunition to shoot your own foot with.
Also, someone should totally make a card game out of this.
The Wrong Side of Risk
It’s not enough for evidence to be consistent with a hypothesis; to count in favor, the evidence must be more consistent with the hypothesis than with its negation. How much more is how strong. (Likelihood ratios.)
Knowledge is probabilistic/uncertain (priors) and is updated based on the strength of the evidence. A lot of weak evidence can add up (multiply, actually, unless you’re using logarithms; see the sketch after this list).
Your level of knowledge is usually not literally zero, even when uncertainty is very high, and you can start from there. (Upper/Lower bounds, Fermi estimates.) Don’t say, “I don’t know.” You know a little.
A hypothesis can be made more ad-hoc to fit the evidence better, but this must lower its prior. (Occam’s razor.)
The reverse of this also holds. Cutting out burdensome details makes the prior higher. Disjunctive claims get a higher prior, conjunctive claims lower.
Solomonoff’s Lightsaber is the right way to think about this.
More direct evidence can “screen off” indirect evidence. If it’s along the same causal chain, you’re not allowed to count it twice.
Many so-called “logical fallacies” are correct Bayesian inferences.
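Here’s a quick Python sketch of those first few cards (my own illustration, not from the cards themselves, and the numbers are made up): likelihood ratios multiply in odds space, or equivalently add in log-odds space, which is how a lot of weak evidence piles up.

```python
import math

def update(prior, likelihood_ratios):
    """Posterior probability after a series of independent updates.

    Each ratio is P(evidence | hypothesis) / P(evidence | alternative).
    Independent likelihood ratios multiply in odds space.
    """
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Ten weak pieces of evidence (6:5 odds each) multiply out to about 6:1,
# taking a 50% prior to roughly 86%.
print(update(0.5, [1.2] * 10))  # ~0.86

# The same update in log space, where the weak evidence literally adds up:
log_odds = math.log(0.5 / 0.5) + sum(math.log(1.2) for _ in range(10))
print(1.0 / (1.0 + math.exp(-log_odds)))  # ~0.86

# Occam's razor as arithmetic: conjoining an (independent) extra detail can
# only lower a prior, P(A and B) <= P(A), while a disjunction can only raise it.
p_a, p_b = 0.3, 0.5
assert p_a * p_b <= p_a <= p_a + p_b - p_a * p_b
```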
Why is Google the biggest search engine even though it wasn’t the first? It’s because Google has a better signal-to-noise ratio than most search engines. PageRank cut through all the affiliate cruft when other search engines couldn’t, and they’ve only continued to refine their algorithms.
But still, haven’t you noticed that when Wikipedia comes up in a Google search, you click that first? Even when it’s not the top result? I do. Sometimes it’s not even the article I’m after, but its external links. And then I think to myself, “Why didn’t I just search Wikipedia in the first place?” Why do we do that? Because we expect to find what we’re looking for there. We’ve learned from experience that Wikipedia has a better signal-to-noise ratio than a Google search.
If LessWrong and Wikipedia came up in the first page of a Google search, I’d click LessWrong first. Wouldn’t you? Not from any sense of community obligation (I’m a lurker), but because I expect a higher probability of good information here. LessWrong has a better signal-to-noise ratio than Wikipedia.
LessWrong doesn’t specialize in recipes or maps. Likewise, there’s a lot you can find through Google that’s not on Wikipedia (and good luck finding it if Google can’t!), but we still choose Wikipedia over Google’s top hit when available. What is on LessWrong is insightful, especially in normally noisy areas of inquiry.
(Except for the cards about concepts that fell to the replication crisis, of course.)
Do we have a list of those somewhere? I need to learn what to unlearn.
What can the LessWrong community do (or the broader rationality-aligned movement do) to help with CFAR’s mission?
Thus spake Eliezer: “Every Cause Wants to be a Cult”.
An organization promising life-changing workshops/retreats seems especially high-risk for cultishness, or at least pattern matches on it pretty well. We know the price of retaining sanity is vigilance. What specific, concrete steps are you at CFAR taking to resist the cult attractor?