https://web.archive.org/web/20230304203835/http://www.hpmor.com/
gilch
Does CFAR “eat its own dogfood”? Do the cognitive tools help in running the organization itself? Can you give concrete examples? Are you actually outperforming comparable organizations on any obvious metric due to your “applied rationality”? (Why ain’tcha rich? Or are you?)
I had watched the whole thing and came away with a very different impression. From where I’m standing, Connor is just correct about everything he said, full stop. Beff made a few interesting points but was mostly incoherent, equivocating, and/or evasive. Connor tried very hard for hours to go for his cruxes rather than get lost in the weeds, but Beff wouldn’t let him. Maybe Connor could have called him on it more skillfully, but I don’t think I could have done any better. Maybe he’ll try a different tack if there’s a next time. The moderator really should have intervened.
At some point they start building their respective cases—what if you had a false vacuum device? Would we be fucked? Should we hide it? What should we do? And on Beff’s side—what if there are dangerous aliens?
For the love of god, please talk about the actual topic.
This is the actual topic. It’s the Black Marble thought experiment by Bostrom, and the crux of the whole disagreement! Later on Connor called it rolling the dice on death. Non-ergodicity. Beff’s whole position seems to be to redefine “the good” to be “acceleration of growth”, but Connor wants to add “not when it kills you!”
About 50 minutes in, Connor goes on an offensive in a way that, to me, is extremely blatant slippery-slope reasoning. The main point is that if you care about growth, you cannot care about anything else, because of course everyone’s views are the extremist parodies of themselves. Embarrassing tbh.
Again, Connor is simply correct here. This is not a novel argument. It’s Goodhart’s Law. You get what you optimize, even if it’s only a proxy for what you want. The tails come apart. You can overshoot and get your proxy rather than your target. Remember, Beff’s position: “growth = good”, which is obviously (to me, Connor, and Eliezer) false. Connor tried very hard to lead Beff to see why, but Beff was more interested in muddying the waters than achieving clarity or finding cruxes.
He also points out, many many times, that “is” != “ought”, which felt like virtue signalling? Throwing around shibboleths? Not quite sure. But not once was it a good argument as far as I can tell.
Again, Connor is simply correct. This isn’t about virtue signaling at all; that completely misses the point. Beff is equivocating. Connor is trying to point out the distinct definitions required to separate the concepts so he can move the argument forward to the next step. Beff just wasn’t listening.
“Should the blueprints for F16 be open-sourced? Answer the question. Answer the question! Oh I was just trying to probe your intuition, I wasn’t making a point”
Immediately followed by “If an AI could design an F16, should it be open-sourced?”
Is there something wrong with trying to understand the other position before making a point? No, and Beff should have tried harder to understand the other position. Kudos to Connor for trying. This is the Black Marble again (maybe a gray one in this case). Beff seems to have the naive position that open source is an unmitigated good, which is obviously (to me and Connor) false, because infohazards. I don’t think F16s were a great example, but it could have been any number of other things.
So e/acc should want to collapse the false vacuum?
Holy mother of bad faith. Rationalists/lesswrongers have a problem with saying obviously false things, and this is one of those.
Totally unfair characterization. I think this is Connor simply not understanding Beff’s position, rather than Connor doing anything underhanded. The question was not simply rhetorical, and the answer was important for updating Connor’s understanding (of Beff’s position). From Connor’s point of view, an intelligence explosion eats most of the future light cone anyway, so it’s not that different from a false vacuum collapse: everybody dies, and the future has no value. There are some philosophies that actually bite the bullet to remain consistent in the limit and actually want all humans to die. (Nick Land came up.) Connor suspects Beff’s might be one of those on reflection, but it’s not for the reason Connor thought here.
It’s in line with what seems like Connor’s debate strategy—make your opponent define their views and their terminal goal in words, and then pick apart that goal by pushing it to the maximum. Embarrassing.
Again, this is what Eliezer, Connor, and I think is the obvious thing that would happen once an unaligned superintelligence exists: it pushes its goals to the limit at the expense of all we value. This is not Connor being unfair; this is literally his position.
Libertarians are like house cats, fully dependent on a system they neither fully understand nor appreciate.
Thanks for that virtue signal, very valuable to the conversation.
OK, maybe that’s a signal (it’s certainly a quip), but the point is valid, stands, and Connor is correct. I am sympathetic to the libertarian philosophy, but the naive application is incomplete and cannot stand on its own.
After about 2 hours and 40 minutes of the “debate”, it seems we finally got to the point!
Finally? Connor has been talking about this the whole time. Black marble!
If I were to respond to this myself, I’d say—at some point, depending how technology progresses, we might very well need to pause, slow down, or stop entirely.
Yep. That was yesterday. Connor would be interested in talking all about why he thinks that and (as evidenced by the next quote) wants to know Beff’s criteria for when that point is, so Connor can move on and either explain why that point has already passed, or point out that Beff doesn’t have any criteria and will just go ahead and draw the black marble without even trying to prepare for it. (Which means everybody dies.)
To which Connor has another one of the worst debate arguments ever: “So when is the right time? When do we know?”
Connor is correctly making a very legit point here. There are no do-overs. If you draw the black marble before you’re prepared for it, then everybody dies. If you refuse to even think about how to prepare for it and not only keep drawing marbles but try to draw them faster and faster, then by default you die, and sooner and sooner! This is not unfair and this is not a bad argument. This is legitimately Connor’s position (and mine and Bostrom’s).
I don’t know when is the right time to stop overpopulation on Mars.
That is a very old, very bad argument. If NASA discovered a comet big and fast enough to cause a mass extinction event that they estimated to have a 10% chance of colliding with Earth in 100 years, we shouldn’t start worrying about it until it’s about to hit us. Right? Or from the glass-half-full perspective, we’ve got a 90% chance of surviving anyway, so let’s just forget about the whole thing. Right? Do you understand how absurd that sounds?
But Connor (and Eliezer and I (and Hinton)) don’t think we have 100 years. We think it’s probably decades or less, maybe much less. And Connor (and Eliezer and I) don’t think we have a 90% chance of surviving by default. Quite the reverse, or even worse.
In response, Connor resorts to yelling that “You don’t have a plan!”
No shit. Not only that, but e/acc seems to be trying very hard to make the problem worse, by giving us even less time to prepare and sabotaging efforts to buy more.
This is the point where we should move on to narrowing down why we need to have a plan for overpopulation on Mars right now. Perhaps we do.
Yes. That would have been good. I could tell Connor was really trying to get there. Beff wasn’t listening though.
This was largely a display of tribal posturing via two people talking past each other.
Maybe describes Beff. Connor tried. Could’ve been better, but we have to start somewhere. Maybe they’ll learn from their mistakes and try again.
Poor performance from both of them, but particularly Connor’s behavior is seriously embarrassing to the AI safety movement.
I was embarrassed by Connor’s headshot comment, which I thought was inappropriate. Thought experiments that could be interpreted as veiled death threats against one’s interlocutor are just plain rude. Could have been worded differently. I don’t think Connor actually meant it that way, and perfection is an unreasonable standard in a frustrating three-hour slog of a debate. But still bad form.
Besides that (which you didn’t even mention), I cannot imagine what Connor possibly could have done differently to meet your unstated standards, given his position. Should he have not gone for cruxes? Because that’s how progress gets made. Debaters can easily waste inordinate amounts of time on points that neither cares about (that don’t matter) because they happened to come up. Connor was laser focused on making some actual progress in the arguments, but Beff was being so damn evasive that he managed to waste a couple of hours anyway. It’s a shame, but this is so not on Connor. What do you even want from him?
I stumbled across a comment about efficient markets in an old Michael Vassar interview.
I think that when I look at economics (another great example) you have a series of papers by Larry Summers and Brad DeLong in the early 90s that as far as I can tell drive a stake through the heart of the idea of efficient markets. They show that if you make the extremely minimal assumption of people not being perfectly capable of assessing how much risk is involved in an investment, there should be a systematic tendency for markets to become less efficient with time.
And as far as I can tell this paper—or this series of papers—despite being done by pretty much the top people in economics, just got ignored and had no impact on the progression of the field. It was logically ironclad. That’s the sort of thing I basically expect from most sciences in the modern world—almost everything but applied physics—but this is a particularly clear case, because you have essentially the strongest possible argument done by the most prestigious possible people with just no recollection of it ever having happened in the profession.
Does anybody know what papers he’s talking about? (I’m not sure if I transcribed the names properly.) They seem very relevant to this discussion.
How could blood clotting develop over time, step by step
Step-by-Step Evolution of Vertebrate Blood Coagulation.
Irreducible complexity arguments are pretty unconvincing at this point. The Theory of Evolution has already been proven beyond any reasonable doubt, and you would know this if you had objectively looked at both sides, as I have. Because you don’t already know better, I don’t think the dispute can be resolved at this level of argument. We have to take a step back and look at our disagreement in terms of epistemology.
Was that your true rejection? Hypothetically, if science had an answer to all of your irreducible complexity objections, would you then accept evolution, or is there some deeper reason you’re not telling us? Are you going where the evidence leads you, or did you write the bottom line first and work backwards from there?
Look, i know there are several atheists here that like to hide on this forum and erase any comment they don´t like, but i believe you can be open-minded
You must be new here. Nobody is hiding. The last community survey I saw shows we’re at least 70% atheist. If you’re out to “save our souls” and aren’t just trolling for fun, then I suggest you learn how to talk to us first. Posts that make obvious mistakes in reasoning that were already covered in the Sequences are going to get downvoted very quickly. Read what we’re about and play by our rules, because that’s the only way we’re going to listen.
What mistakes have you made at CFAR that you have learned the most from? (Individually or as an organization?)
I have never liked the “rat” nickname. I’m not a filthy rodent. I’ve never heard the term “rationalish” before now.
I’ve always resisted using the term “rationalism”. I feel like “-ism” is a misstep into politics (and already the name of the 17th-century anti-empiricists). We practice “epistemic rationality” and “instrumental rationality”, together, “rationality”, not “rationalism”.
He’s back. Again. Maybe.
https://twitter.com/OpenAI/status/1727205556136579362
We have reached an agreement in principle for Sam [Altman] to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.
We are collaborating to figure out the details. Thank you so much for your patience through this.
Anyone know how Larry or Bret feel about x-risk?
While I do think that rhetoric is a skill worth developing, don’t forget that rhetorical tricks are Dark Arts.
Many so-called “Logical Fallacies” are unfortunately applied to arguments that are valid inferences. On priors, you are better off trusting experts in their field than laymen. But this is called the “argument from authority fallacy”. The correct counter is Argument Screens Off Authority. And so on. Learning Counterspells is no substitute for grokking Bayes, and may even be harmful if they just give you excuses not to listen or more ammunition to shoot your own foot with.
Also, someone should totally make a card game out of this.
It’s not enough for a hypothesis to be consistent with the evidence; to count in favor, the evidence must be more likely under the hypothesis than under its negation. How much more is how strong. (Likelihood ratios.)
Knowledge is probabilistic/uncertain (priors) and is updated based on the strength of the evidence. A lot of weak evidence can add up (or multiply, actually, unless you’re using logarithms).
Your level of knowledge is usually not literally zero, even when uncertainty is very high, and you can start from there. (Upper/Lower bounds, Fermi estimates.) Don’t say, “I don’t know.” You know a little.
A hypothesis can be made more ad-hoc to fit the evidence better, but this must lower its prior. (Occam’s razor.)
The reverse of this also holds. Cutting out burdensome details makes the prior higher. Disjunctive claims get a higher prior, conjunctive claims lower.
Solomonoff’s Lightsaber is the right way to think about this.
More direct evidence can “screen off” indirect evidence. If it’s along the same causal chain, you’re not allowed to count it twice.
Many so-called “logical fallacies” are correct Bayesian inferences.
(Except for the cards about concepts that fell to the replication crisis, of course.)
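The likelihood-ratio and weak-evidence cards above can be sketched numerically. This is a minimal illustration with made-up numbers, not anything from the original cards:

```python
# Sketch of Bayesian updating in odds form: prior odds times likelihood ratios.
# All numbers here are made up for illustration.
import math

def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each likelihood ratio P(e|H) / P(e|~H)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    return odds / (1 + odds)

# Prior: 1:9 odds against the hypothesis (10%).
prior = 1 / 9
# Three independent pieces of weak evidence, each only 2x more likely under H.
evidence = [2.0, 2.0, 2.0]

posterior = update_odds(prior, evidence)
print(odds_to_prob(posterior))  # weak evidence multiplies up: 8:9 odds, ~0.47

# Equivalently, in log-odds the evidence *adds* instead of multiplying:
log_odds = math.log(prior) + sum(math.log(lr) for lr in evidence)
print(odds_to_prob(math.exp(log_odds)))  # same answer
```

This is why “a lot of weak evidence can add up”: each 2x likelihood ratio contributes the same fixed increment in log-odds, and three of them together move a 10% prior to about 47%.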
Do we have a list of those somewhere? I need to learn what to unlearn.
What can the LessWrong community do (or the broader rationality-aligned movement do) to help with CFAR’s mission?
Why is Google the biggest search engine even though it wasn’t the first? It’s because Google has a better signal-to-noise ratio than most search engines. PageRank cut through all the affiliate cruft when other search engines couldn’t, and they’ve only continued to refine their algorithms.
But still, haven’t you noticed that when Wikipedia comes up in a Google search, you click that first? Even when it’s not the top result? I do. Sometimes it’s not even the article I’m after, but its external links. And then I think to myself, “Why didn’t I just search Wikipedia in the first place?” Why do we do that? Because we expect to find what we’re looking for there. We’ve learned from experience that Wikipedia has a better signal-to-noise ratio than a Google search.
If LessWrong and Wikipedia came up in the first page of a Google search, I’d click LessWrong first. Wouldn’t you? Not from any sense of community obligation (I’m a lurker), but because I expect a higher probability of good information here. LessWrong has a better signal-to-noise ratio than Wikipedia.
LessWrong doesn’t specialize in recipes or maps. Likewise, there’s a lot you can find through Google that’s not on Wikipedia (and good luck finding it if Google can’t!), but we still choose Wikipedia over Google’s top hit when available. What is on LessWrong is insightful, especially in normally noisy areas of inquiry.
Seriously, WHO, could you people be any less helpful? We all agreed on Omicron and now we have to type Omicron all the time? Couldn’t even use Xi?
Replaced “Nu” one too many times?
Epistemic status: I am not a financial advisor. Please double-check anything I say before taking me seriously. But I do have a little experience trading options. I am also not telling you what to do, just suggesting some (heh) options to consider.
Your “system 1” does not know how to trade (unless you are very experienced, and maybe not even then). Traders who know what they are doing make contingency plans in advance to avoid dangerous irrational/emotional trading. They have a trading system with rules to get them in and out. Whatever you do, don’t decide it on a whim. But doing nothing is also a choice.
Options are derivatives, which makes their pricing more complex than the underlying stock. Options have intrinsic value, which is what they’re worth if exercised immediately, and the rest is extrinsic value, which is their perceived potential to have more intrinsic value before they expire. Options with no intrinsic value are called out of the money. Extrinsic value is affected by time remaining and the implied volatility (IV), or the market-estimated future variance of the underlying. When the market has a big selloff like this, IV increases, which inflates the extrinsic value of options. And indeed, IV is elevated well above normal now. High IV conditions like this do not tend to last long (perhaps a month). When IV reverts to the mean, the option’s extrinsic value will be deflated. You should not be trading options with no awareness of IV conditions.
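The intrinsic/extrinsic split described above is simple arithmetic. A minimal sketch, with hypothetical strike and price numbers of my own choosing:

```python
# Split an option's market price into intrinsic and extrinsic value.
# Strikes, prices, and the underlying level are hypothetical examples.

def intrinsic_value(option_type, strike, underlying):
    """Value if exercised immediately; zero if out of the money."""
    if option_type == "call":
        return max(underlying - strike, 0.0)
    return max(strike - underlying, 0.0)  # put

def extrinsic_value(option_type, strike, underlying, market_price):
    """Whatever the market price exceeds intrinsic value by: time + IV premium."""
    return market_price - intrinsic_value(option_type, strike, underlying)

# A put at the 100 strike, underlying at 95, quoted at 12.00:
print(intrinsic_value("put", 100, 95))        # 5.0 (in the money by 5 points)
print(extrinsic_value("put", 100, 95, 12.0))  # 7.0 (time value, inflated by high IV)
```

When IV reverts to the mean, it’s that extrinsic 7.0 that deflates; the intrinsic 5.0 depends only on where the underlying is.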
If you are no longer confident in your forecast, it may be prudent to take some money off the table. You can sell your option at a profit and then put the money in a different position that you like better. Perhaps a different strike or expiration date, or something else entirely.
A “safe haven” investment is one that traders tend to buy when the stock market is falling. For example, TLT (a long-term treasury bond ETF), has shot up due to the current market crisis, but it is also a suitable investment vehicle in its own right, with buy-and-hold seeing positive returns in the long term, so it can hold value even after the market turns around. But being a bond fund with lower volatility, its returns are likewise lower.
On the other hand, if you are more confident in your forecast and want to double down, you could close one of your puts and use some of the profits from your put to buy two puts at a lower strike. (Maybe out of the money for their Gamma*). If your forecast is correct, and the market continues to fall rapidly, you’ll gain profit even faster, but if you’re wrong and the market turns around, they may expire worthless. Keep in mind that these puts are more expensive than normal due to high IV, even considering the current underlying price. If the market regains confidence, they’ll deflate in value, even before the market turns around. Options with less extrinsic value are less affected by IV. (IV sensitivity is known as Vega.)
If you have a margin account, you could take advantage of the high IV conditions by selling call spreads. You would sell the call with a Delta* of ~.3 and simultaneously buy another call one strike higher up to cap your losses if you’re wrong (this also reduces the margin required). This will be for a net credit. If the market continues to fall, you can let the whole spread expire worthless and keep the credit, or buy it back early for less than the credit (maybe for half) and then reposition. If you’re not terribly wrong and the market goes sideways or even slightly up, you can still buy these back for less than you paid for them due to deflating extrinsic as expiration nears and IV falls (due to market stabilization). If you are wrong and the spread goes under, your max loss is limited to your original margin (the difference between strikes, less the initial credit).
[*Delta is a measure of sensitivity to the price of the underlying. It’s also a rough estimate of the probability that the option will have any intrinsic value at expiration. Gamma is the rate of change of Delta. Together with Theta (time sensitivity) and Vega, these are known as The Greeks, and should be available from your broker along with the option quotes.]
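The call-spread payoff described above (credit kept if the market falls, max loss capped at the strike width minus the credit) works out like this. The strikes and credit are hypothetical numbers, not a recommendation:

```python
# Expiration P&L per share of a short (bear) call spread, as described above.
# Example: sell the 420 call, buy the 425 call, for a net credit of 1.50.

def call_payoff(strike, underlying):
    return max(underlying - strike, 0.0)

def short_call_spread_pnl(short_strike, long_strike, credit, underlying_at_expiry):
    short_leg = -call_payoff(short_strike, underlying_at_expiry)  # the obligation sold
    long_leg = call_payoff(long_strike, underlying_at_expiry)     # the protection bought
    return short_leg + long_leg + credit

# Market keeps falling: both calls expire worthless, keep the whole credit.
print(short_call_spread_pnl(420, 425, 1.50, 400))  # 1.5
# Wrong, and the market rallies past both strikes: max loss = width - credit.
print(short_call_spread_pnl(420, 425, 1.50, 450))  # -3.5
```

Note the loss stops growing once the underlying passes the long strike; that cap is exactly what buying the higher call is for, and why the margin requirement is limited to the strike width.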
That’s if you’re counting the cerebellum, which doesn’t seem to contribute much to intelligence, but is important for controlling the complicated musculature of a trunk and large body.
By cortical neuron count, humans have about 18 billion, while elephants have fewer than 6 billion, comparable to a chimpanzee. (source)
Elephants are undeniably intelligent as animals go, but not at human level.
Even blue whales barely approach human level by cortical neuron count, although some cetaceans (notably orcas) exceed it.
I vaguely remember being not on board with that one and downvoting it. Basics of Rationalist Discourse doesn’t seem to get to the core of what rationality is, and seems to preclude other approaches that might be valuable. Too strict and misses the point. I would hate for this to become the standard.
I don’t. johnswentworth’s The Apprentice Experiment, which kicked off this topic, specifically referenced Selection Has A Quality Ceiling.
To me, this looks like selection for agency more than training for it. The whole point of doing apprenticeships was to reverse this so we could break through the quality ceiling. We’ve lost purpose here.
Don’t know yet. I’ve watched about half so far. My first impressions are similar to DPiepgrass.
Typical conspiracy theorists are fairly easy to recognize. They seem to take it as an axiom that everything happens on purpose. They don’t notice the inconsistencies in their own models, and their bald assertions often don’t stand up to easy verification, if you bother to check.
These are not crazy conspiracy-theory types. (That doesn’t make them right.) They understand scientific thinking, are using the biology vocabulary correctly, and are trying to use gears-level models. They understand how the vaccines work, and what might go wrong. They accept the possibility that this isn’t happening on purpose, but is just a bad outcome of incentives, something we already believe happens.
Kirsch (blue shirt guy) seems less careful than the other two, and may or may not be a crackpot. This doesn’t necessarily make his concerns wrong. We should still try to verify their claims. Are these guys who they say they are? Do they have valid credentials? Does the spike protein break off so it could have systemic effects? How toxic is it? The vaccine might still win a cost-benefit analysis.
I’ve watched IDW videos before. They’re an interesting bunch, some of them might even be rationalist adjacent, but this varies. They seem to like long conversations.
Whether or not this case has merit, the systematic censorship thing seems real to me. We’ve had measles outbreaks here in the U.S., despite having an effective vaccine. This is mainly due to the antivaxxers swallowing bullshit, and there’s been a mainstream pushback. But Arguments Are Soldiers, so even when the antivaxxers have a point, the mainstream isn’t allowed to admit it, especially in the face of the clear and present danger posed by the current pandemic.
The media’s recent about-face on the lab-leak hypothesis is a recent example of this effect: it was on the “wrong” side politically, even though it had merit. Weak evidence is still evidence, and the truth doesn’t become a lie just because the Enemy says it. Social media has been (fairly) blamed for spreading conspiracy theories, and so under pressure to take responsibility, they’re trying to control the damage using blunt instruments, even if that means causing some collateral damage themselves.
How is a rational scientist supposed to navigate this environment? Often the answer has been “study something that isn’t (politically) radioactive instead”. That’s not good enough this time.
Thus spake Eliezer: “Every Cause Wants to be a Cult”.
An organization promising life-changing workshops/retreats seems especially high-risk for cultishness, or at least pattern matches on it pretty well. We know the price of retaining sanity is vigilance. What specific, concrete steps are you at CFAR taking to resist the cult attractor?