For my own reference.
Brief timeline of notable events for LW2:
2017-09-20 LW2 Open Beta launched
(2017-10-13 There is No Fire Alarm published)
(2017-10-21 AlphaGo Zero Significance post published)
2017-10-28 Inadequate Equilibria first post published
(2017-12-30 Goodhart Taxonomy published) ← maybe part of January spike?
2018-03-23 Official LW2 launch and switching of www.lesswrong.com to point to the new site.
Events in parentheses are possible draws which spiked traffic at those times.
What’s the mechanism of action? If LW doesn’t die, it will eventually achieve its aims because...?
I think you really answered it, as long as you don’t die:
you get to continue swinging at ideas until you hit a home run.
Of course, this works a lot better if you’re learning as you go and your successive attempts fall closer to success than if you’re randomly trying things. I’d like to think that LW isn’t randomly swinging. There are also hard pieces like keeping the team engaged and optimistic even as things don’t seem dramatically different (though you could call that part of not dying).
One should expect a site that was providing a lot of value to its users to grow, even if it wasn’t explicitly trying to.
Yes, I do expect that if you’re generating enough value you should see automatic growth, from which I infer that LessWrong 2.0 isn’t providing that much value to its users right now. Though I think there’s a mix of reasons to not be especially pessimistic:
Successful companies with worthwhile products seem to me to still have to invest in getting new users. My feeling (not really backed by data) is that you have to be outstandingly good to get full-on organic growth without trying. Not being there doesn’t mean you’re not providing value.
We see in the graphs that LW was not growing for most of its history: most of the metrics peak around 2011 and remain steady or decline slowly until 2015. I would argue that despite not growing, LW was still providing a lot of value to its users and the world during this period.
My outside view and inside view lead me to believe hockey stick growth is real. Part of my model is that even if you’re doing many things right, it might require having all the pieces click into place before dramatic growth starts. The pieces are connected in series, not in parallel. Relatedly, sometimes the key to winning big is just not dying for long enough.
LW2 is much more fussy about which value we provide to which users than I expect most companies are. Most companies are trying to find approximately any product and corresponding set of users such that the value provided to users can be used to extract money somehow. In contrast, I care only about finding products and users to whom providing value will generate significant value for the world at large (particularly through the development/training of rationality and general intellectual progress on important problems). I think this is a much more restrictive constraint. It leads me (and I think the team generally) to want to forego many opportunities for user/activity growth if we don’t think they’ll lead to greater value for the world overall. Because of this, I’m not worried yet that we haven’t hit on a formula for providing value that’s organically getting a lot of growth. We have a narrow target.
Generally, I (and others on the team) don’t consider LW to have achieved the nonprofit analog of product-market fit. More precisely, we haven’t hit upon a definite and scalable mechanism for generating large amounts of value for the world (especially intellectual progress). I have an upcoming post I wish I could link to which describes various ideas we’re trying or thinking about as mechanisms. Open Questions is one such attempt.
Perhaps most of the value of the site is in the fact that it has posts, comments and votes. Beyond that it’s the value of the content, and that is modest and static.
I’m unsure of your meaning here. Are you saying there’s content separate from posts and comments? I consider all our content to fall into those categories. Some of it is arguably static, but I’m not sure I’d say modest? Can you say more what you meant by that?
In an Open Philanthropy Project blog post, The Moral Value of the Far Future, Holden Karnofsky mentions Nick Bostrom’s Astronomical Waste argument to say that he does not consider it robust enough to play an overwhelming role in his belief systems and actions.
In Astronomical Waste, Nick Bostrom makes a more extreme and more specific claim: that the number of human lives possible under space colonization is so great that the mere possibility of a hugely populated future, when considered in an “expected value” framework, dwarfs all other moral considerations. I see no obvious analytical flaw in this claim, and give it some weight. However, because the argument relies heavily on specific predictions about a distant future, seemingly (as far as I can tell) backed by little other than speculation, I do not consider it “robust,” and so I do not consider it rational to let it play an overwhelming role in my belief system and actions [emphasis added].
Admittedly, Karnofsky proceeds to say that even if he fully accepted the reasoning, he isn’t sure what implications it would have.
In addition, if I did fully accept the reasoning of “Astronomical Waste” and evaluate all actions by their far future consequences, it isn’t clear what implications this would have. As discussed below, given our uncertainty about the specifics of the far future and our reasons to believe that doing good in the present day can have substantial impacts on the future as well, it seems possible that “seeing a large amount of value in future generations” and “seeing an overwhelming amount of value in future generations” [emphasis added] lead to similar consequences for our actions.
Nonetheless, I suspect that a non-speculative, robust case about what is possible might well push our behavior in particular directions. For instance, perhaps we find that Bostrom’s 10^38 humans lost per century of delay is extremely speculative, yet 10^20 is eminently attainable. I believe that if we did have a robust case for the latter, this would shift the prioritization of some and likely bolster the altruistic motivation of those who right now are primarily sustained by the speculative plausibility of Bostrom’s extreme case.
Perhaps more importantly, if we are unable to even establish a firm lower bound much above what the Earth alone could sustain long-term, then those who have made Astronomical Waste arguments part of their belief systems and actions have reason to pause and reconsider how they should update given that the potential of space colonization might be much weaker than previously hoped.
A week or so ago Arbital was working but had load times of several minutes.
For some destinations, but not for most of them (I’m pretty sure). At least Eternity in Six Hours spends a great deal of time discussing deceleration.
The answer to this question likely depends heavily on what we consider to be adequate colonization:
1) Running computation in other star systems, i.e. running digital minds on computers or other computational processes (this is what Eternity in Six Hours assumes).
2) Having actual, ordinary biological humans colonize the stars.
There are challenges common to both and specific to each.
Lasting the Journey
In either case, you must be able to create a probe (to use the language of Eternity in Six Hours) which can survive a trip lasting thousands to millions of years. Is it at all feasible to have humans last that long in some form? (Perhaps only as embryos which can be “grown” upon arrival, but even then, can we safely preserve biological material for millennia?) Could cryonics somehow be a solution? Even if you were only sending computers/robots, can we build electrical and mechanical devices which won’t break down after such extremely long time periods?
Challenges for Humans
Nick Beckstead’s preliminary notes mention microgravity, cosmic radiation, health and reproduction in space, and genetic diversity as considerations which come into play when sending live humans through space.
Challenges for Computers
Can we build machines (assume non-AGI) which can solve all the problems they will encounter in different star systems?
How fast you need to go unsurprisingly depends on how quickly you need to get there. I’ve estimated below that 100kly is larger than the distance to most places within the Milky Way.
Travelling at 99%c, you can cover that in ~100,000 years.
Travelling at 50%c, you can cover that in 200,000 years.
Travelling at 10%c, you can cover that distance in 1,000,000 years.
Travelling at 1%c, you can cover that distance in 10,000,000 years.
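A quick sanity check of these figures (ignoring acceleration/deceleration phases and using rest-frame time rather than the probe’s own clock):

```python
# Rest-frame travel time to cover ~100,000 light-years at various speeds.
# Ignores acceleration/deceleration phases and the probe's own time dilation.
DISTANCE_LY = 100_000  # ~anywhere in the Milky Way, per the estimate below

for fraction_of_c in (0.99, 0.50, 0.10, 0.01):
    years = DISTANCE_LY / fraction_of_c
    print(f"{fraction_of_c:.0%} c: {years:,.0f} years")
```

(At 99%c the exact figure is ~101,000 years, which rounds to the ~100,000 above.)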
Recall that there are hundreds of billions of stars in the Milky Way. There are probably many stars within 50kly or even 25kly of Earth.
Nonetheless, these distances mean that even at extremely fast speeds the trip would still take tens of thousands to millions of years. This may or may not be a problem. The universe will probably last for at least another few billion years, compared to which a million years is not much at all. The question is whether your expedition can survive that long between stars. (It might make a big difference whether you are sending only digital machines or humans too.)
What are the distances?
Taking stats from its Wikipedia entry, the Milky Way has a diameter of 150-200 kly (kilolightyears), however:
The disk of stars in the Milky Way does not have a sharp edge beyond which there are no stars. Rather, the concentration of stars decreases with distance from the center of the Milky Way. For reasons that are not understood, beyond a radius of roughly 40,000 ly (13 kpc) from the center, the number of stars per cubic parsec drops much faster with radius. - Wikipedia
However, I will assume that the upper bound given of 200kly captures most of the 100-400 billion stars.
Our sun is 26.4 ± 1.0 kly from the Galactic Center. It might be difficult to travel through the center of the galaxy, but let’s assume that the distance you travel to get anywhere in the Milky Way from our sun is no more than traveling to the Galactic Center (~25kly) plus the upper bound of the radius (~100kly), so approximately 125kly. That’s the distance to the outer edge, so actually the vast majority of destinations should be less than that. One could do some fancier trigonometry to get exact numbers and nice averages, but this gives us the order of magnitude: ~100kly to travel almost anywhere in the Milky Way.
That is probably still well above average since the density of stars is much higher towards the core. Likely there are a lot of stars within 50kly.
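The “fancier trigonometry” is just the law of cosines. A sketch of the best and worst cases for a destination sitting on the assumed 100kly edge:

```python
import math

R_SUN = 26.4    # kly, the sun's distance from the Galactic Center
R_EDGE = 100.0  # kly, assumed upper bound on the galactic radius

def distance_kly(dest_radius, theta):
    """Straight-line distance (kly) from the sun to a destination at
    galactocentric radius dest_radius, at angle theta from the sun's
    direction as seen from the center (law of cosines)."""
    return math.sqrt(R_SUN**2 + dest_radius**2
                     - 2 * R_SUN * dest_radius * math.cos(theta))

worst = distance_kly(R_EDGE, math.pi)  # far edge, opposite side: 126.4 kly
best = distance_kly(R_EDGE, 0.0)       # near edge, same side: 73.6 kly
```

So even the worst case only slightly exceeds the ~125kly bound above, and everything interior is shorter.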
The kinds of numbers thrown around in the astronomical waste argument are sometimes accused of being a Pascal’s Mugging. Even if one has doubts about whether to work on existential risk reduction, it could be argued that because the Far Future has such overwhelming and immense value that the expected value of working on existential risk outweighs all other opportunities, e.g. near-term altruistic projects like global poverty, global health, and animal welfare.
Having sharper estimates of the potential of the Far Future, bounded by how much of the universe we can actually reach, could help us relate to astronomical waste arguments with far more principle than “aahhh, these are such big numbers!!”
They’re big numbers, but not all numbers are equally big.
The assumption that we can colonize the stars is core to the Astronomical Waste argument made in favor of working on existential risk reduction. If this assumption is weakened, so is the case for prioritizing work on existential risk reduction.
Most things are impossible. Perhaps our belief that we could possibly colonize the stars is based only on our ignorance. If we actually tried to colonize the stars (or simply tried to actually look into the possibility), we would find that we shouldn’t take it for granted at all that space colonization is a realistic possibility.
Summary of the Astronomical Waste Argument
Nick Bostrom’s 2003 paper, Astronomical Waste: The Opportunity Cost of Delayed Technological Development:
With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore a corresponding opportunity cost: a potential good, lives worth living, is not being realized. Given some plausible assumptions, this cost is extremely large.
Bostrom arrives at different estimates of the potential number of human minds depending on whether we are satisfied with running “human” minds on computers or wish to stick with biological instantiation.
Using digital instantiation:
As a rough approximation, let us say the Virgo Supercluster contains 10^13 stars. One estimate of the computing power extractable from a star and with an associated planet-sized computational structure, using advanced molecular nanotechnology, is 10^42 operations per second. A typical estimate of the human brain’s processing power is roughly 10^17 operations per second or less. Not much more seems to be needed to simulate the relevant parts of the environment in sufficient detail to enable the simulated minds to have experiences indistinguishable from typical current human experiences. Given these estimates, it follows that the potential for approximately 10^38 human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10^29 potential human lives per second.
Using biological instantiation:
Suppose that about 10^10 biological humans could be sustained around an average star. Then the Virgo Supercluster could contain 10^23 biological humans. This corresponds to a loss of potential of over 10^13 potential human lives per second of delayed colonization.
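Bostrom’s arithmetic in both passages checks out, if a “life” is taken to be roughly a century (his implicit convention, as far as I can tell):

```python
SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600  # ~3.16e9 s

# Digital instantiation (figures from the quoted passage)
stars = 1e13            # stars in the Virgo Supercluster
ops_per_star = 1e42     # ops/s from a star plus planet-sized computer
ops_per_brain = 1e17    # ops/s to run one human mind
minds = stars * ops_per_star / ops_per_brain            # 1e38 concurrent minds
digital_lives_per_second = minds / SECONDS_PER_CENTURY  # ~3e28, "about 10^29"

# Biological instantiation
humans_per_star = 1e10
bio_humans = humans_per_star * stars                    # 1e23 humans
bio_lives_per_second = bio_humans / SECONDS_PER_CENTURY  # ~3e13, "over 10^13"
```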
Bostrom clarifies that not only utilitarians should care about this immense potential value which might be reached:
Utilitarians are not the only ones who should strongly oppose astronomical waste. There are many views about what has value that would concur with the assessment that the current rate of wastage constitutes an enormous loss of potential value. For example, we can take a thicker conception of human welfare than commonly supposed by utilitarians (whether of a hedonistic, experientialist, or desire-satisfactionist bent), such as a conception that locates value also in human flourishing, meaningful relationships, noble character, individual expression, aesthetic appreciation, and so forth. So long as the evaluation function is aggregative (does not count one person’s welfare for less just because there are many other persons in existence who also enjoy happy lives) and is not relativized to a particular point in time (no time-discounting), the conclusion will hold.
These conditions can be relaxed further. Even if the welfare function is not perfectly aggregative (perhaps because one component of the good is diversity, the marginal rate of production of which might decline with increasing population size), it can still yield a similar bottom line provided only that at least some significant component of the good is sufficiently aggregative. Similarly, some degree of time discounting future goods could be accommodated without changing the conclusion.
Clearly, the extent to which we can actually colonize star systems beyond our own affects how strong an argument there is from astronomical waste (or as I would rather call it, our astronomical potential). If we can in fact be confident that we can colonize the entire reachable universe, that might be 10^17 stars instead of the 10^13 in just the Virgo Supercluster. That would be an even stronger argument than Bostrom states. On the other hand, if we can’t even colonize beyond our star system, we’re just at 10^0 stars, and there’d be no astronomical argument at all.
That’s interesting. I agree that given that consideration the term “colonization” is possibly misleading. I have been using it more in the sense of “you have human civilization over there” rather than “the colonies of the kingdom of Britain.” I think I don’t mind if the different “colonies” are autonomous.
Extracting my response from this post.
Self-replicating probes for colonization could be launched to a fraction of lightspeed using fixed launch systems such as coilguns or quenchguns (as opposed to rockets).
Only six hours of the sun’s energy output (3.8x10^26 W) are required to commence the colonization of the entire universe.
A future human civilization could easily aspire to this amount of energy.
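To put the headline claim in joules (the ~6e20 J/year figure for current world energy use is my own rough addition, not from the paper):

```python
SOLAR_OUTPUT_W = 3.8e26  # watts, the sun's total power output
launch_energy_j = SOLAR_OUTPUT_W * 6 * 3600  # six hours in joules: ~8.2e30 J

# For scale: current world energy use is on the order of 6e20 J per year
# (a rough figure, not from the paper), so this is roughly ten billion
# years of present-day human consumption.
years_of_current_use = launch_energy_j / 6e20
```

Enormous by today’s standards, but trivial for a civilization that has built even a partial Dyson sphere.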
Since the procedure is a conjunction of designs, yet each of the requirements has multiple pathways to implementation, the whole construction is robust.
Humans have generally been quite successful at copying or co-opting nature. We can assume that anything done in the natural world can be done under human control, e.g. self-replicators and AI.
Any task which can be performed can be automated.
It would be ruinously costly to send over a large colonization fleet, and is much more efficient to send over a small payload which builds what is required in situ, i.e. von Neumann probes.
Data storage will not be much of an issue.
Example: one can fit all the world’s data and an upload of everyone in Britain in a gram of crystal.
500 tons is a reasonable upper bound for the size of a self-replicating probe.
A replicator with mass of 30 grams would not be unreasonable.
Antimatter annihilation, nuclear fusion, and nuclear fission are all possible rocket types to be used for deceleration.
Processes like magnetic sail, gravitational assist, and “Bussard ramjet” are conceivable and possible, but to be conservative are not relied on.
Nuclear fission reactors could be made 90% efficient. Current reactor designs could reach efficiencies of over 50% of the theoretical maximum.
Any fall-off in fission efficiency results in a dramatic decrease in deceleration potential.
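One way to see why deceleration is so sensitive to efficiency: in the ideal (Tsiolkovsky) rocket equation the required fuel mass ratio is exp(Δv/v_exhaust), and exhaust velocity scales roughly with the square root of the fraction of fuel mass converted to kinetic energy. The fuel energy fraction and Δv below are my illustrative assumptions, not the paper’s figures:

```python
import math

C = 299_792_458.0  # m/s

# Ideal (Tsiolkovsky) rocket equation: fuel mass ratio for a given delta-v.
# Non-relativistic, which is roughly OK for v_exhaust ~ 0.04c.
def mass_ratio(delta_v, v_exhaust):
    return math.exp(delta_v / v_exhaust)

# Illustrative assumption: fission converts ~0.09% of fuel rest mass to
# energy; a fraction `efficiency` of that becomes exhaust kinetic energy,
# so v_exhaust ~ c * sqrt(2 * 0.0009 * efficiency).
def v_exhaust(efficiency, energy_fraction=0.0009):
    return C * math.sqrt(2 * energy_fraction * efficiency)

delta_v = 0.1 * C  # decelerating from 10% of c (illustrative)
high = mass_ratio(delta_v, v_exhaust(0.90))  # ~12x fuel per unit payload
low = mass_ratio(delta_v, v_exhaust(0.45))   # halved efficiency: ~34x
```

Because the efficiency sits inside an exponential, a modest fall-off in efficiency blows up the fuel requirement rather than merely scaling it.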
They ignore deceleration caused by the expansion of the universe.
Assume the probe is of sturdy enough construction to survive a grenade blast (800kJ).
Redundancy required for a probe to make it to a galaxy is given by R = exp(dAρ), where d is the distance to be travelled (in comoving coordinates), A is the cross-section of the probe, and ρ is the density of dangerous particles.
Dangerous particle size given as a function of speed of the probe by equation in the paper.
For slower probes (80%c and 50%c) the required redundancy is low: two probes are enough to ensure one survives.
If you have a 500T replicator, you have more cross-section but also better ability to shield.
The density of matter is much higher in interstellar space than in intergalactic space. It might not be possible to launch universe-colonization directly from our sun.
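Presumably the redundancy formula comes from a Poisson model: dAρ is the expected number of dangerous-particle hits over the trip, exp(-dAρ) is the chance of zero hits, so exp(dAρ) copies give one expected survivor. A sketch (the input values are made up for illustration, not the paper’s):

```python
import math

def probes_needed(d, A, rho):
    """Copies required so that one probe is expected to survive.
    d = comoving distance travelled, A = probe cross-section,
    rho = number density of dangerous particles (consistent units)."""
    expected_hits = d * A * rho               # mean of the Poisson process
    survival_prob = math.exp(-expected_hits)  # P(zero hits)
    return 1 / survival_prob                  # = exp(d * A * rho)

# Illustrative only: if a journey averages ~0.7 expected hits per probe,
# two probes per target are enough.
two_ish = probes_needed(1.0, 1.0, 0.7)  # ~2.01
```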
Dyson spheres are very doable. Assumed to capture 1/3 of the sun’s output (3.8x10^26 W).
We could disassemble Mercury and turn it into a Dyson sphere.
Launch systems could achieve energy efficiency of 50%.
Apart from risks of collision, getting to the further galaxies is as easy as getting to the closest, the only difference is a longer wait between the acceleration and deceleration phases.
Travelling at 50%c there are 116 million galaxies reachable; at 80%c there are 762 million galaxies reachable; at 99%c, you get 4.13 billion galaxies.
For reference, there are 100 to 400 billion stars in the Milky Way, and from a quick check it might be reasonable to assume 100 billion stars for the average galaxy.
The ability to colonize the universe as opposed to just the Milky Way is the difference between ~10^11 stars and ~10^19 or ~10^20 stars: a factor of roughly 100 million, the number of reachable galaxies.
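Putting the last few notes together (assuming, roughly, that the average galaxy has ~10^11 stars like the Milky Way):

```python
MILKY_WAY_STARS = 1e11   # 100-400 billion; take ~10^11
STARS_PER_GALAXY = 1e11  # rough average, per the note above

# Reachable galaxies at each launch speed, from Eternity in Six Hours.
reachable_galaxies = {0.50: 116e6, 0.80: 762e6, 0.99: 4.13e9}

stars_reachable = {speed: n * STARS_PER_GALAXY
                   for speed, n in reachable_galaxies.items()}

for speed, stars in stars_reachable.items():
    factor = stars / MILKY_WAY_STARS  # equals the number of galaxies reached
    print(f"{speed:.0%} c: ~{stars:.2e} stars, ~{factor:.1e} Milky Ways")
```

Even at the slowest speed considered, the gain over staying in one galaxy is about a hundred-million-fold.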
On a cosmic scale, the cost, time and energy needed to commence a colonization of the entire reachable universe are entirely trivial for an advanced human-like civilization.
Energy costs could be cut by a factor of a hundred or a thousand by aiming for clusters or superclusters [of galaxies] and spreading out from there.