Why should the term “the sequences” even be in the title? What does it tell an uninformed reader? Does it have any useful meaning for anyone who hasn’t already read them? (Why are they even called that, anyway? I mean… I guess it’s just that it was a sequence of blog posts?) In what way is “The Sequences” or “[Some title]: the Sequences” better than “The Blog Posts” or “The Diary Entries”?
evand
I really don’t see it as much of a hatchet job. It reads to me like “these people are a bit strange, but interesting”, which I have trouble taking offense at. Certainly it picks and chooses the “interesting” stuff, but it doesn’t strike me as particularly worse than normal human interest stories (judging by the very limited sample of news articles whose subjects I have close personal knowledge of).
I suspect if this was actually a hatchet job (as in, the reporter really was intentionally trying to make LW look bad, or really didn’t like someone), it would be a lot worse.
Calling it a hatchet job seems… disingenuous. Especially given that I don’t see many specific objections being raised. Sure, it could be better, and it’s not something an insider would have written. But neither of those surprises me, based on what I know about journalists and news articles.
Vote here if you think the policy is a net negative.
Has your friend ever written a quining program? Is his argument also an argument against the existence of such? What does he see as the difference between “understand” and “be capable of fully specifying”?
I suspect that, for anyone who has written (or at least studied in detail) a quining program, and has fully specified a definition of “understand” by which the program either does or does not understand itself, the question will be dissolved, and cease to hold much interest.
In other words, I don’t believe you need to invoke arbitrarily deep recursion to make the argument. I think you just need to specify that the co-brain be a quining computer system, to whatever level of fidelity is required to make you happy.
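For concreteness, here is a minimal Python quine, i.e. a program whose output is exactly its own source (an illustrative example of mine, not something from the original discussion):

```python
# A minimal quine: running this program prints its own source code.
# The string s is a template for the whole program; s % s fills the
# template with a repr of the template itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Whether a program like this “understands” itself is exactly the definitional question that needs dissolving.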
It’s fine until you change a vague statement about “most” relationships (which obviously means outgroup-people’s relationships) into a specific one about people in the conversation, or friends of people in the conversation, or other ingroup members. At which point, I’d say it’s just offensive, not taboo. Offensive, hard to justify, based on the outside view when people with inside view information are around… yeah, probably instrumentally unwise to say most of the time, too.
Do you feel that this is an example of you being intolerant of other posters’ tolerance of trolls? If not, why?
Personally, it seems to me that it is, but that it might well be justified anyway. I’m not a big fan of the approach taken, but I’m not yet completely against it either. I’m disappointed that it was implemented unilaterally.
The oxidizing atmosphere is not due to chance. It was created by early life that exhaled oxygen, and killed off its neighbors that couldn’t handle it. Hence, I don’t think the goldilocks oxygen levels speak much to great filter questions.
Early in civilization, we used wood and charcoal as energy sources. Blacksmithing and cast iron were originally done with wood charcoal. Cast iron is a very important tool in our history of machine tools and hence the industrial revolution. It’s possible that we could have carried on without coal, instead using large-scale forestry management or other biomass as our energy source. In the early 1700s there were already environmental concerns about deforestation. They were more related to continued supply of wood for charcoal and hunting grounds than “ecological” concerns, but there were still laws and regulations enacted to deal with the problem.
How many people do we need to support a high-tech civilization? I suspect fewer than the number we actually did it with. It’s quite possible that biofuel sources could have produced a high-tech civilization, just slower and with fewer people.
Also, note that biofuels can produce all the lubricants and plastics you need just fine. The Fischer-Tropsch process has been implemented on a large scale before.
I think that given all this, you could get the modern metal lathe and the steam engine without fossil fuels. We already harnessed basic water and wind power without fossil fuels. I suspect with modern machine tools you get to electricity and large-scale water and wind power generation, even without fossil fuels. Again, more slowly, and possibly without so many people, but I think you can get there.
The value of a QALY n years from now is equal to the value of a QALY now, multiplied by a discount function. An exponential discount function would be of the form (1-r)^n, with 0 < r < 1. This is the same concept as interest rate discounting in economics. There, a payment of $1 per year from now to eternity would be assigned a finite value of $1/r, where r is the interest rate. (Strictly, the (1-r)^n discount function gives (1-r)/r, which is approximately 1/r for small r.) For example, if the interest rate is 5%, then $1 per year has the same value as about $20 now.
You can apply the same discounting to QALYs, and there are some good reasons both to do so, and to do so with a specifically exponential discounting function. If you fail to do so, then anything that even trivially reduces your odds of living forever is unboundedly bad, which seems odd. Once you have attained probable immortality, you could no longer rationally take any risk whatsoever, even if the payoff is significant. For example, you couldn’t engage in manned interstellar exploration beyond easy reach of the absolute best available medical facilities, even if you were fairly confident of survival and could take a merely excellent hospital with you.
Failing to use an exponential discounting function means that your decision involving risk of death will be subject to akrasia, which hardly seems desirable.
In conclusion, the only question remaining is what discounting rate to use. 5% seems a bit nearsighted. We might compare to life expectancy without counting natural causes, which is something like 400 years. So a discount rate in the range of 0.01% to 0.5% seems plausible.
For a 1:3000 risk of true death, taken by someone signed up for a cryonics program with 100% odds of success in return for a certain 10-QALY life extension for a non-cryonics-program person, to make sense, the discount rate would have to be less than 0.3%. I’m actually rather surprised by this result; when I started writing this post, I expected to conclude that RomeoStevens was being selfish, arrogant, and overly disparaging of the value of people who haven’t signed up for cryonics.
That said, if we take the odds of success of cryonics as less than 100%, the equation changes. It now depends on both RomeoStevens’s current non-cryonics life expectancy and the odds of success of the cryonics program. If, for example, the cryonics program has odds of success of merely 10%, then at a 0.1% discount rate, the indefinite lifespan is comparable to 100 QALYs at present. That means a risk as high as 10% in exchange for a certain 10 QALYs would be reasonable.
I think I can safely conclude that either RomeoStevens thinks cryonics has a higher chance of working than I do, or that he is using a very small discount rate for long lifespans.
(EDIT: fixed r vs 1-r confusion. A discount rate of 1% implies that a QALY (or $) n years in the future is valued at 0.99^n times its present value. IOW, discount rate r → discount function (1-r)^n. I believe the post now uses all such terms in an internally consistent fashion.)
If you’re looking for something that means “more than half”, then “mostly” is a good choice. If you’re looking for something that means “less than half”, then you have the problem that ” true” means less true than ” false”.
To frame it from the “capitalist virtues” perspective...
If you squint a bit, your version sounds a lot like “we’re going to create a lot of value for a lot of people, in a way that is neatly measured in dollars, and therefore we can’t possibly make a for-profit company.” That is… really weird, from where I sit.
Alternate perspective: if you’re creating a lot of value for a lot of people, but you can’t extract any of it to compensate yourself for the infrastructure you build and the risks you take building it, are you actually really sure you’re creating as much value as you thought you were?
More people signing up reduces the social stigma attached to being signed up.
Convincing people to sign up lets you write articles about how you did it, which generate karma.
Having more people signed up increases knowledge about cryonics generally, and increases the odds that your wishes will be followed on your death.
People whom you convinced to sign up sooner, and who then die promptly, may feel obligated to you after they’re revived.
Taboo “rationally”.
I think the question you want is more like: “how can one have well-calibrated strong probabilities?”. Or maybe “correct”. I don’t think you need the word “rationally” here, and it’s almost never helpful at the object level—it’s a tool for meta-level discussions, training habits, discussing patterns, and so on.
To answer the object-level question… well, do you have well-calibrated beliefs in other domains? Did you test that? What do you think you know about your belief calibration, and how do you think you know it?
Personally, I think you mostly get there by looking at the argument structure. You can start with “well, I don’t know anything about proposition P, so it gets a 50%”, but as soon as you start looking at the details that probability shifts. What paths lead there, what don’t? If you keep coming up with complex conjunctive arguments against, and multiple-path disjunctive arguments for, the probability rapidly goes up, and can go up quite high. And that’s true even if you don’t know much about the details of those arguments, if you have any confidence at all that the process producing those is only somewhat biased. When you do have the ability to evaluate those in detail, you can get fairly high confidence.
That said, my current way of expressing my confidence on this topic is more like “on my main line scenarios...” or “conditional on no near-term giant surprises...” or “if we keep on with business as usual...”. I like the conditional predictions a lot more, partly because I feel more confident in them and partly because conditional predictions are the correct way to provide inputs to policy decisions. Different policies have different results, even if I’m not confident in our ability to enact the good ones.
English classes are usually designed to teach skills like reading comprehension, critical thinking, and writing. There is no particular need for the subject matter to be historical literature, and discussions of topics like this would fit right in.
In fact, some English teachers try to do just that, by selecting literature with the appropriate subject matter.
No link, no mention of what Modafinil is or why I should care, not even a name of the insurance company? You haven’t even assigned a likelihood to your prediction, which leaves it unworthy of Predictionbook, never mind a LW post.
I’d like to think that if Randall Munroe did a comic about LW, the humor would more specifically target LW.
I think the short version is that you don’t need math that covers the wavefunction collapse, because you don’t need the wave function to collapse.
For a longer version, you’d need someone who knows more QM than I do.
This approach to debating strikes me as exemplifying everything bad that I learned in high school policy debate. Specifically, it seems to me like debate distilled down to a status competition, with arguments as soldiers and the goal being for your side to win. For status competitions, signaling of intellectual ability, and demonstrating your blue or green allegiance, this works well. What it does not sound like, to me, is someone who is seeking the truth for herself. If you engaged in a debate with someone of lesser rhetorical skill, but who was also correct on an issue where you were incorrect (perhaps not even the main subject of the debate, but a small portion), would you notice? Would you give their argument proper attention, attempt to fix your opponent’s arguments, and learn from the result? Or would you simply be happy that you had out-debated them, supported all your soldiers, killed the enemy soldiers, and “won” the debate? Beware the prodigy of refutation.
Speaking as someone who runs a meetup: thanks!
I often procrastinate on putting up the meetup post. I’m not entirely sure why. It makes me happy when people notice that I put in the effort to do so and upvote the post, and (I think) makes it a little easier to do next time. It still feels kinda silly that I care about the karma number, but I do. And, as they say: if it’s silly, and it works, it isn’t silly.
This is very neat work, thank you. One of those delightful things that seems obvious in retrospect, but that I’ve never seen expressed like this before. A few questions, or maybe implementation details that aren’t obvious:
For complicated proofs, the fully formally verified statement all the way back to axioms might be very long. In practice, do we end up with markets for all of those? Do they each need liquidity from an automated market maker? Presumably not if you’re starting from axioms and building a full proof, and that applies to implications and conjunctions and so on as well, because the market doesn’t need to keep tracking things that are proven. However:
First Alice, who can prove , produces many many shares of for free. This is doable if you have a proof for by starting from a bunch of free shares and using equivalent exchange. She sells these for $0.2 each to Bob, pure profit.
In order for this to work, the market must be willing to maintain a price for these shares in the face of a proof that they’re equivalent to . Presumably the proof is not yet public, and if Alice has secret knowledge she can sell with a profit-maximizing strategy.
She could simply not provide the proof to the exchange, generating and pairs and selling only the latter, equivalent to just investing in A, but that requires capital. It’s far more interesting if she can do it without tying up the capital.
So how does the market work for shares of proven things, and how does the proof eventually become public? Is there any way to incentivize publishing proofs, or do we simply get a weird world where everyone is pretty sure some things are true but the only “proof” is the market price?
If there are a large number of true-but-not-publicly-proven statements, does that impose a large computational cost on the market making mechanism?
Second question: how does this work in different axiom systems? Do we need separate markets, or can they be tied together well? How does the market deal with “provable from ZFC but not Peano”? “Theorem X implies corollary Y” is a thing we can prove, and if there’s a price on shares of “Theorem X” then that makes perfect sense, but does it make sense to put a “price” on the “truth” of the ZFC axioms?
Presumably if we have a functional market that distinguishes Peano proofs from ZFC proofs, we’d like to distinguish more axiom sets. What happens if someone sets up an inconsistent axiom set, and that inconsistency is found? Presumably all dependent markets become a mess and there’s a race to the exits that extracts all the liquidity from the AMMs; that seems basically fine. But can that be contained to only those markets, without causing weird problems in Peano-only markets?
Probably some of this would be clearer if I knew a bit more about modern proof formalisms.
“Physicist motors” makes little sense because that position won out so completely that the alternative is not readily available when we think about “motor design”. But this was not always so! For a long time, windmills and water wheels were based on intuition.
But in fact one can apply math and physics and take a “physicist motors” approach to motor design, which we see appearing in the 18th and 19th centuries. We see huge improvements in the efficiency of things like water wheels, the invention of gas thermodynamics, steam engines, and so on, playing a major role in the industrial revolution.
The difference is that motor performance is an easy target to measure and understand, and very closely related to what we actually care about (low Goodhart susceptibility). There are a bunch of parameters—cost, efficiency, energy source, size, and so on—but the number of parameters is fairly tractable. So it was very easy for the “physicist motor designers” to produce better motors, convince their customers the motors were better, and win out in the marketplace. (And no need for them to convince anyone who had contrary financial incentives.)
But “discourse” is a much more complex target, with extremely high dimensionality, and no easy way to simply win out in the market. So showing what a better approach looks like takes a huge amount of work and care, not only to develop it, but even to show that it’s better and why.
If you want to find it, the “non-physicist motors” camp is still alive and well, living in the “free energy” niche on YouTube among other places.