Engineer working on next-gen satellite navigation at Xona Space Systems. I write about effective-altruist and longtermist topics at nukazaria.substack.com, or you can read about puzzle videogames and other things at jacksonw.xyz
Jackson Wagner
Ex-aerospace engineer here! (I used to work at Xona Space Systems, who are working on a satellite constellation to provide a kind of next-gen GPS positioning. I’m also a longtime follower of SpaceX, fan of Kerbal Space Program, etc) Here is a rambling bunch of increasingly off-topic thoughts:
Yup, SpaceX is a big deal:
Yup, SpaceX is a totally off-the-charts success compared to basically any other aerospace company. (Although maybe historically comparable to the successes of early NASA?) It’s not just that their rockets are good; their Starlink satellites are also very impressive in a variety of ways—basically no other satellite company can match them on cost-vs-capability, the uniquely efficient flat-pack design, etc. And they do other stuff well too, like developing their Dragon spacecraft, which certainly does a better job than Boeing’s Starliner or Sierra Nevada’s Dream Chaser.
It’s correct IMO to pay a lot of special attention to SpaceX when analyzing the aerospace industry and even perhaps the big-picture future of space exploration over the next few decades. (I presume you are thinking about SpaceX in the context of researching how space exploration might go in various “AI 2030” singularity scenarios?) Although SpaceX probably isn’t a totally unstoppable juggernaut—it’s totally plausible that Starship might continue to see troubles & delays, while Blue Origin’s “New Glenn” and RocketLab’s “Neutron” and other rockets might manage to beat expectations and scale up quickly, creating a more competitive world rather than a monopolistic Starship-fueled continuation of the famed “SpaceX steamroller”.
“SpaceX brought costs down by an OOM already and this unlocked Starlink already”—yeah, people don’t realize that Starlink constitutes 75% of all satellites in orbit (8,800 / 11,700). This is maybe not a totally fair comparison insofar as Starlink satellites are a little smaller (and in lower-energy orbits) than the big honking GEO satellites of yore, but still—in a certain sense, Starlink versus all the traditional satellite industries is a little bit like Uber versus the taxi market. It’s not just that SpaceX has captured a large percentage of the preexisting launch market; they’ve made the market way bigger.
Why are Ariane & other legacy launch companies even still alive, lol:
To your question of “why are ULA / Ariane still getting business; is this just nepotism / corruption?”—I think the situation is more accurately described in terms of national security concerns.
For Ariane, national governments want to maintain sovereign access to space—Germany, the UK, etc, don’t want to have to hand over their spy satellites to Russia or America or any other major powers for launch! But the “European military/intelligence satellite launches” industry isn’t big enough to really sustain an entire launch company like Ariane. So, Europe pressures its own commercial satellite companies (including a lot of the broadcast & communication companies operating GEO satellites, who have always launched on Ariane all through the 1980s / 90s / 00s when Ariane really was the cheapest and best option) to keep buying Ariane contracts so there’s enough European launches happening to support a European launch company. (The pressure / implied threat being that if those GEO satellite operators defect to launching on SpaceX, Europe might cut them off from contracts / subsidies / whatever other kind of industrial-policy support they’re currently providing.)
One reason why this works alright is that rocket launches are often cheaper (like $100m - $200m) than the satellites they’re launching (which can be many hundreds of millions for GEO commsats, or billions for fancy military / science missions). So the rocket launch is only a minority of the overall cost.
In ULA’s case, they are mostly propped up by the Pentagon, which is (reasonably IMO) concerned that American space launch shouldn’t become a monopoly where SpaceX could charge very high prices. So the Pentagon does stuff like giving 60% of its launches to SpaceX and 40% to ULA through a big contracting process. In the future, Blue Origin might surpass ULA and mostly take over their role in the industry.
The continued existence of SLS is totally just corruption though, lol… (combined with extreme bureaucratic inertia, and an unwillingness to do proper decisionmaking under uncertainty or take certain perceived risks, while ignoring other risks, like the risk that you might spend tens of billions of dollars just to develop a way-more-expensive-than-the-competition rocket...)
Also, some satellite-constellation companies that think they’re “competing with SpaceX” (more like losing to SpaceX, amirite?) refuse to launch on SpaceX vehicles. Mostly I’m thinking of Amazon’s Kuiper satellite internet constellation (which wants to mostly launch on Bezos-owned Blue Origin), and the European-ish OneWeb. Also, like a more extreme version of the situation with Europe and Ariane, obviously China doesn’t let Chinese companies just buy Falcon 9 launches.
Another important factor in “why is anybody still buying these expensive-ass non-SpaceX rockets??” is that customers DID prefer buying SpaceX rockets, but then SpaceX raised their prices (from $70m some years ago to about $100m today, IIRC), and then SpaceX got booked solid and ran out of rockets (despite their impressive scaling over the years). So if you’re in a big hurry to launch soon, you need to start looking at other, more expensive companies (indeed, even many of these companies are booked out for many years, scaling up as fast as they can manage, etc).
Will Starship make a house in space cheaper than a house in SF? No:
House-in-Berkeley versus house-in-space is of course a weird comparison, but I very much doubt Starship could singlehandedly make it cheaper to live in space even if we used all Starship capacity for building a giant space station. An orbital space station needs a lot of complex expensive stuff to make it work (thrusters, momentum wheels, batteries, solar panels, life-support equipment for recycling water and air), plus stuff in space breaks down a lot more quickly than stuff on Earth which would increase the cost through faster depreciation. (The ISS is made of fancy aluminum pressure vessels and micrometeoroid shielding and stuff, but—despite the fact that its 7-person crew spends a huge portion of their time doing fixes & maintenance—it’s springing all kinds of weird leaks and is gonna have to be deorbited soon, even though most of the station is less than 25 years old. Contrast this with the house where I live in Colorado, built a whopping 35 years ago, which still basically does fine with just minimal home maintenance, occasional new appliances, etc.)
Plus obviously your house will need lots of supplies (food, amazon packages, but also stuff like air and thruster fuel), and transporting these supplies into orbit will be much more expensive than going to the grocery store in Berkeley.
Obviously if it was just one little house in space, then it would be SUPER expensive (since you’d need all those subsystems just for your one little house) and there would be no feasible way to do regular (like monthly) deliveries since you don’t eat a Starship full of groceries every month. But what I’m saying is that it would still be expensive even if you wanted to save money by aggregating all the houses together into one giant space station to cut down on subsystem & resupply costs.
Perhaps a more interesting point: the reason why Berkeley is expensive is that the land is expensive. But as launch gets cheaper and cheaper thanks to Starship, the most valuable orbits will start becoming very crowded, and we’ll probably start charging for them. Right now, spots in orbit are basically given away for free (although before you launch, you’ve gotta get an FCC license to operate your satellites, which is a paperwork-intensive process, almost like the space version of getting a pharma drug approved by the FDA). But in the future, I suspect we’ll probably implement some kind of “space Georgism” to prevent Kessler syndrome and properly allocate the most valuable orbital slots. (Where “we” is ideally some kind of international agreement, but in practice will probably just be, like, the USA’s Department of Commerce, and then China does their own similar thing, and no other country launches enough satellites to be relevant.) Under such a system, valuable spots in orbit might be auctioned off a la elaborate electromagnetic spectrum auctions. So, if you want to live in Space-Berkeley (a valuable, crowded orbit like sun-synchronous LEO), most of your cost might soon be space-land (some complicated notion of orbital crowdedness + making credible promises to maneuver around debris and de-orbit your satellite at the end of its scheduled lifetime) instead of just the construction cost. Unless you want to live in some random radiation-filled MEO orbit not really useful for anything, like Space-Rural-Oklahoma.
Will it really be cheaper to build factories in space?? Probably not pre-ASI, but possibly, idk:
You’re probably right to focus on “high cost-of-kg” operations as things that are most likely to be done in space. Lots of people talk about this dumb zombie idea of putting solar panels in space, even though it has only become less sensible over time. People are like “omg, space launch is cheaper now, maybe now it finally makes sense to implement the techno-optimist 1970s dream of solving the oil shock by putting solar panels in orbit!!” But solar panels have gotten cheaper much faster than space launch has gotten cheaper, so the trend is actually in the other direction—panels are now so cheap that it often doesn’t even pay to mount them on a basic single-axis tracking system to follow the sun over the course of the day; people just drop them directly on the freaking dirt to save on installation + mounting costs.
Obviously in an ASI-singularity scenario (or even, just, the long-term trajectory of a non-AI human civilization growing at 2% per year), we are eventually going to use up all the land, and then the natural next thing to do is to start launching lots of solar panels into space. But it doesn’t make much sense to start doing this now.
I doubt that factories are a winning idea either:
Factories are usually defined by needing lots of input material and producing lots of output material. Shipping this stuff to space and back would be expensive, so it only makes sense IMO if either the inputs are coming from space already, or the outputs are destined to stay in space.
Working in zero gravity + vacuum tends to make most things more difficult, not easier. Lots of factory processes designed on Earth will break in space. So, doing anything with moving parts in space is probably way more of a hassle than doing the same thing on earth, unless there’s some amazing special advantage to working in vacuum or zero-gravity. Some proposed special advantages I’ve heard mentioned:
People used to talk about doing pharma research in space, because proteins crystallize much more easily in zero gravity?? But I think the reason people were so hyped about crystallizing proteins is because we hadn’t solved the protein folding problem yet! (You can work out the folded structure of individual proteins by exhaustively studying protein crystals.) Now that we have AlphaFold, I think that use-case has sailed...
Nowadays people talk about doing semiconductor manufacturing in space, on the grounds that semiconductor manufacturing is extremely afraid of dust (so it might actually help to do in vacuum), and the machines are all so high-precision that they might as well be aerospace-grade anyways. Maybe there’s something to this idea?? But if you need vacuum so much, you could probably just build a vacuum-sealed assembly line, or even build an entire vacuum-sealed wing of the TSMC factory (with employees walking around in pressure suits and everything) for cheaper than building an orbital factory. (The vacuum quality of low-earth-orbit isn’t even especially great compared to what you can get pretty easily on the ground!) Semiconductor manufacturing is infamously one of the most difficult, complicated things that human civilization does; I’d be surprised if you could just move it all into space without a million little things going wrong.
Something about optical fibers, carbon nanotubes, and other advanced materials potentially being easier to manufacture in low gravity?? I don’t know much about this—mostly I’m just remembering the plot of Andy Weir’s book “Artemis” and hoping that the optical-fiber McGuffin plot-point was based on plausible background research. You could imagine an AI angle here too, if we need tons of super-high-quality optical fiber to make the interconnections between our vast datacenters full of TPUs or photonic chips or however we multiply matrices in the year 2040.
One entertaining niche application of space manufacturing is to produce “extinct polymorphs”—chemicals like the HIV drug Ritonavir, which once were easily manufactured on earth but have since become nearly-impossible to create in their original form, thanks to a bizarre ice-nine-style process where production lines get “infected” by seed crystals of a more stable polymorph of the same molecule! Varda Space is an aerospace startup which actually produced some Ritonavir in space precisely to make this point. But I hardly expect “bringing back extinct polymorphs” to become a major portion of GDP in the future; it seems intrinsically niche. (Barring, perhaps, some mirror-life related catastrophe such that we are only able to grow crops and preserve natural ecosystems in pristine space environments, a la the sci-fi stories Interstellar, Silent Running, and Speaker for the Dead.)
If you have AGI / ASI, then maybe you can simply have the AI redesign all your manufacturing processes from first-principles to work well in the space environment. Maybe in some objective sense, space is actually a better place to do most manufacturing! But in that case you do need the AGI, and you also need some time to bootstrap the entire alternate manufacturing ecosystem. This might face some of the same pros & cons as Carl Feynman’s concept of creating an alternate manufacturing system of self-replicating automated/miniaturized machine shops (see my comments here), although of course I’d expect a true ASI to power through the various troublesome issues of transitioning over to a whole new industrial base pretty quickly.
Putting datacenters into space is a little more plausible IMO, because you don’t have to worry about tons of moving parts and manufacturing processes, and your input is just energy while your output is just heat + information.
But you do need energy, which you can either beam up from earth via some kind of microwave laser (but this hasn’t been tested IRL, has some pretty serious efficiency losses even just in theory, etc), or manufacture locally with solar panels or nuclear power (but this is gonna drag down your cost-per-kg launching random solar panels).
I will say that, compared to the idea of space solar power, space-datacenters is a big improvement because you’re increasing the cost-per-kg of what you’re launching (all those GPUs), and instead of beaming back lossy microwave radiation energy, you’re beaming back information, which seems easier.
But I’m doubtful that you could do AI training in space very easily, since you’d have to formation-fly a ton of satellites close together (thus spending a lot of fuel?? unless you want to do something ridiculous and experimental like electrostatic-based formation flying), connect them all with extremely high-bandwidth laser links (dunno how this compares to the bandwidth Starlink already achieves...), etc. My impression is that if you don’t have high-bandwidth interconnects, you’re probably limited to AI inference instead of training? (idk that much about AI training though...). I’d also be worried that both training and inference would require lots of data to be transmitted to and from the ground, except then I remembered that the whole point of Starlink is to put the whole planet’s internet infrastructure in space, and it seems to be working fine—so at least bandwidth won’t be a problem!
And you do need to get rid of all the heat, which in some ways gets a lot harder in space (there’s no air to do convection, nor a ready supply of cheap water), although in some ways it gets easier (space is really cold, so you can cool down by just blocking the sun with a big mylar mirror and radiating in every other direction).
I’m not sure, but I’d be worried that radiating heat doesn’t scale well (due to square-cube-law) while piping cold water around scales better. (Until, of course, you cover the entire planet in datacenters and solar panels and fusion reactors, melt the icecaps and boil the oceans, and you’re forced to resort to radiative cooling because you’ve run out of places to convect to!)
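Just to put rough numbers on the heat-rejection side: here’s a minimal Stefan-Boltzmann sketch, where the ~300 K radiator temperature, 0.9 emissivity, and the assumption of zero absorbed sunlight or Earth-shine are all my own simplifying placeholders rather than numbers from any real design:

```python
# Rough radiator sizing for a space datacenter (back-of-the-envelope, not a design).
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(waste_heat_w, temp_k=300.0, emissivity=0.9):
    """Area needed to radiate `waste_heat_w` watts at temperature `temp_k`.

    Assumes an ideal radiator facing cold space, no absorbed sunlight,
    and no view of the warm Earth -- so this is a lower bound on area.
    """
    return waste_heat_w / (emissivity * SIGMA * temp_k**4)

for power_mw in [1, 100, 1000]:
    area = radiator_area_m2(power_mw * 1e6)
    print(f"{power_mw:>5} MW of waste heat -> ~{area:,.0f} m^2 of radiator")
# ~2,400 m^2 per MW at 300 K: a gigawatt-class cluster needs square kilometers
# of radiator, and the rejection capacity only grows with area, not volume.
```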
And this is very different from the Berkeley house case, because (at least at the moment) there’s still a vast amount of basically dirt-cheap useless desert land on which to build datacenters, power infrastructure, etc. People complain about regulation and permitting, but:
Space will also feature regulation & permitting obstacles, around things like space-debris mitigation, electromagnetic spectrum for beaming vast amounts of information back to earth (unless you can figure out space-to-ground laser comms, perhaps?), and power generation. Whether you are launching nuclear reactors into space, constructing gigawatt-scale microwave lasers that hostile superpowers will perceive as anti-ballistic-missile defenses, or even just building a mass-catapult on the moon so you can hurl hundreds of tons of lunar-manufactured solar panels towards the Earth, somebody is probably going to want to submit some comments during the 30-day public notice period...
My impression is that datacenters could route around much of the most severe permitting issues (such as around transmission lines and energy interconnect queues) if they were willing to go off the grid and build all their own power + battery storage. Datacenter builders don’t want to do this because that’s more expensive. But building datacenters in space also means going off-grid, plus you have to do a ton of other stuff! (Yes, I get that space-based solar is 4x more effective and cuts down on your need for batteries, but space launch isn’t the only thing allowed to reduce in price over time—batteries and other kinds of energy-storage technology are also getting cheaper all the time.)
So, what are we gonna use all those Starships for, if not in-space manufacturing or datacenters??
Right now, the biggest and most-valuable use of space is for communications (like Starlink internet, but also military communications, television broadcasts from GEO, specialized connections to airplanes and ships, etc), navigation (like GPS), taking photos of the earth (mostly for military intel, but this also has applications in agriculture, finance, etc), and assorted military applications. So, in the immediate future, I’d expect us to just keep massively scaling those applications rather than using Starship to do wacky new stuff:
You kind of only need one navigation constellation (although it’s due for an upgrade, re: Xona’s plan), so this won’t be a huge number of satellites.
With satellite internet, more & bigger satellites = more internet bandwidth, so I’d expect this to grow a lot. Claude says that Starlink today only represents “perhaps 1–3% of international backbone capacity—and international traffic is itself only a subset of total internet traffic”. Surely it would be profitable to scale this up such that satellites are providing as much internet bandwidth as all existing ground-based infrastructure combined (ie, +100% instead of +2%), and we could still find reasonably productive ways to use that bandwidth? So that would be like 50x all the Starlink satellites that have been launched so far.
So far there have been over 350 Falcon 9 launches full of Starlinks (and US-military-branded Starshields) -- assume that you want to launch 50x that amount, but also that Starship can launch 5x as much mass (90 tons instead of 17.5 tons). That comes out to 3,500 Starship launches to achieve that goal. If you look at the 11 launches they’ve done so far, pretend they were all successful and all happened this year (they were not and did not), and assume they’ll double their number of launches each year (22 in 2026, 44 in 2027...) and launch nothing but Starlinks (NASA will want a word with them about their Artemis lunar mission contract, which involves up to 20 Starship launches per moon visit...), it would take them roughly eight years (until sometime around 2033) to clear that backlog.
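Here’s that arithmetic as a quick sanity-check script, using only the rough figures quoted above (350 Falcon 9 Starlink launches so far, a 50x scale-up target, ~5x more mass per Starship flight, 11 launches in the first year, doubling annually):

```python
# Back-of-the-envelope: how long to launch "50x Starlink" using Starship,
# under the rough assumptions quoted in the text above.
falcon9_starlink_launches = 350      # approximate Starlink-carrying launches to date
bandwidth_multiple_target = 50       # want ~50x the current constellation
starship_mass_advantage = 5          # ~90 t vs ~17.5 t per launch

starship_launches_needed = falcon9_starlink_launches * bandwidth_multiple_target / starship_mass_advantage
print(f"Starship launches needed: {starship_launches_needed:,.0f}")   # ~3,500

launches_this_year, year, cumulative = 11, 2025, 0
while cumulative < starship_launches_needed:
    cumulative += launches_this_year
    year += 1
    launches_this_year *= 2          # assume launch cadence doubles every year
print(f"Backlog cleared during {year - 1}, with {cumulative:,} cumulative launches")
```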
Taking photos of the earth probably scales more than navigation but less than internet—Planet’s fleet of cubesats already take 3m-resolution photos of the whole earth every day. How much more valuable would it be to have 30cm-resolution photos every hour? Or continuous video?? idk, but maybe once adding more satellite internet capacity hits diminishing returns, this becomes the next most valuable thing to scale up.
Earth-observation is also probably the place where it makes the most sense to be putting GPUs in space right away, since the satellites are already very bottlenecked on bandwidth for beaming images down to earth. If you put a GPU on all your spy satellites, you could do image-classification analysis locally (and immediately!) and only beam down the most interesting stuff. Plus you could maybe even make dynamic decisions like “oh, that’s really interesting, let’s take some more photos of that spot”—usually these kinds of decisions are delayed by the time it takes for a satellite to pass over a ground station, download an image, get the image analyzed, and then for commands to be uploaded later, so it might be a big deal to make these decisions locally & immediately.
Military expenses are dictated by adversarial / arms-race logic, so the amount we’ll spend on military stuff in space is perhaps kind of a wildcard driven by how intense the overall military competition with China gets, multiplied by how many advantageous military-stuff-in-space ideas we can dream up.
Are there any huge economic markets (besides the already-discussed manufacturing, solar power generation, and datacenters) that might open up, beyond orbiting satellites beaming down information?
At some point, maybe asteroid mining becomes a thing? This has all the disadvantages of “moving parts in space”, plus you’re dealing with very messy inputs, but it has the advantage of the fact that certain asteroids have very high concentrations of metals that are rare on earth.
My guess is that doing an expensive space mission to bring back very valuable, rare metals (like gold, iridium, etc) is a better business plan than the zombie idea of doing an expensive space mission just to launch solar panels and simply beam back power via lossy microwave laser—beating dirt-cheap earth-based solar panels sounds impossible, but beating earth-based mining sounds a little less impossible. The business case for “solar power in space” probably only closes once you have finished covering all of earth’s deserts in solar panels. The business case for asteroid mining probably closes earlier, though I bet you’d still need pretty immense scale (like “over the next decade we’re aiming to bring back an amount of gold equivalent to 10% of all gold ever mined in history”) to amortize the huge cost of a gigantic deep-space mission and bring the whole project below the cost of earth-based mining.
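To gesture at why the scale has to be so immense, here’s a back-of-the-envelope sketch; the ~210,000 tonnes of gold ever mined and the ~$100/gram price are my own rough ballpark figures, not careful numbers:

```python
# Rough value of "10% of all gold ever mined" -- placeholder figures, not a forecast.
total_gold_mined_tonnes = 210_000      # ballpark estimate of all gold ever mined
fraction_returned = 0.10               # the "10% of historical production" scenario
gold_price_per_gram = 100.0            # USD; rough round number, gold prices move a lot

grams = total_gold_mined_tonnes * fraction_returned * 1_000_000   # tonnes -> grams
value_usd = grams * gold_price_per_gram
print(f"Returned mass: {grams/1e6:,.0f} tonnes, worth ~${value_usd/1e12:.1f} trillion")
# ~21,000 tonnes, worth on the order of $2 trillion under these assumptions --
# the kind of number you'd need to amortize a decade-long fleet of deep-space missions.
```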
An even better business plan would be if we could manufacture something even more difficult to make on earth (maybe semiconductors, optical fibers, or some other advanced material??), which would probably also have to be very high value-per-kg (to minimize the cost of transporting the inputs and outputs). But who knows if we can actually figure out anything that fits those criteria. And whatever we figure out might not be scalable to trillions of dollars (like, Ritonavir definitely isn’t) in the way that asteroid mining clearly is.
You can also mine asteroids to melt ice and make hydrogen + oxygen rocket fuel, but of course this requires some customer who’ll buy a lot of rocket-fuel for going beyond low-earth orbit (like colonizing mars?) or who has other reasons to be maneuvering all the time (like military satellites that want to constantly change their orbit to stay unpredictable?).
Melting ice is probably a lot easier than processing ore, so probably the first demonstration asteroid-mining missions are about water. But to scale up, they’d need customers (and customers high above low-earth-orbit, since their product has to be cheaper than just launching extra rocket fuel on Starships!).
Other than satellites, I think space-based industry & exploration is going to be debt-financed in a really big way, for a really long time.
Satellites are profitable and normal; we are obviously gonna blot out the sky with immense numbers of very large, high-powered, super-Starlink satellites (mostly for internet, also for taking photos of the earth).
But asteroid mining maybe only pays off once you scale up to some preposterous level, like bringing back a trillion dollars’ worth of gold.
I am recalling all those debates about whether Amazon or Uber were really sustainable businesses—they’re pouring billions of dollars of investors’ cash into infrastructure build-out or user subsidies; are they REALLY gonna flip to profitability and turn this all around someday?? Or the current debates about the even vastly-larger sums being invested in datacenters for training AI models. If asteroid mining for precious metals ever happens, it is gonna be that kind of situation all over again.
What about using Starship for its intended purpose of settling Mars??? IMO, nobody has thought up any plausible reason to think that a Mars city would ever return significant capital—settling another planet would be a gigantic money-sink basically forever. Yet, in some abstract sense it seems obviously likely to be worthwhile (in terms of cultural influence on humanity’s future, if not literal investment returns) to be at the forefront of colonizing the solar system! In this respect it feels similar to some parts of European colonial history—what was the ROI of Britain starting colonies in North America? In a certain sense, somewhere between low (you spend decades building it up, finally get a little bit of stamp tax, and then they go and fight a revolution against you) and extremely negative (start a Jamestown or Roanoke, almost everybody dies, then the town straggles on pointlessly for decades, consuming resupply ships but not figuring out anything to export). But in another sense, extremely high (insofar as Britain got to put their thumbprint on what later became the mighty USA).
So this is kind of like the asteroid-mining issue or datacenter-buildout issue, but on steroids—a venture so vast and so uncertain (a lot of European colonial empires, like Germany’s scramble for Africa, were bad ideas that paid off neither literally nor metaphorically!) and requiring so much debt-financing that it leaves the realm of traditional investing, or even the realm of economic booms / bubbles and instead has to be coordinated through the mechanism of national/societal greatness, competition, and prestige.
But even though doing Mars settlement is way more expensive and speculative and uncertain than even doing asteroid mining, the perceived expected-value (or perceived cost of missing out) might be higher. So I think it’s actually likely that we choose to do something closer to satellites + mars colonies, rather than satellites + in-space manufacturing/mining.
It’s also worth noting that Starship has been explicitly designed for settling Mars (the methane fuel, the Space-Shuttle-like upper stage that would be a reusable SSTO on Mars, etc). Sending many Starships to Mars and back (where they can hope to use aerobraking for landing, and locally-manufactured fuel for launch back to earth) is probably much cheaper than sending the same number to the moon and back!
And in a similar way, a LOT of potential space activities are more like Mars colonization than satellite internet—“squatting on areas that might eventually be profitable someday in the future” rather than making actual profit today.
As mentioned, asteroid mining is like this—the first company to develop the tech and visit some of the asteroids might do this in the hope of kinda claiming & squatting the opportunity, far in advance of the opportunity actually becoming profitable.
Things like putting datacenters in space or putting solar panels in space also have something like this vibe, insofar as eventually it seems we will want to do them. But what’s the scarce resource being squatted?? With Mars or asteroids, you’re hoping to cheaply stake a claim (in the sense of legal rights, precedent, etc) on scarce physical land / ore in the hopes of expensively developing / mining it later. But launching solar panels & datacenters looks more like “expensively doing something now in the hopes of profiting in the far future”. You should rather aim to be cheaply squatting something now. Maybe this would be:
Space-Berkeley slots in low-earth-orbit?? (but if LEO becomes well-governed via Fully Automated Luxury Space Georgism, this plan is not gonna work out for you...)
Being the debt-fueled leader in some industry (like how SpaceX is the leader in launch), where the industry might 100x in size (and actually become profitable) in the future. Here you’d either be hoarding a technological advantage (doing trial missions to develop your in-space manufacturing processes, but not actually launching a lot since you lose money on each mission), or (more expensively) hoarding an industrial-capacity advantage.
If you believe in AGI right around the corner, this makes trying to squat “space-based power generation / datacenters / semiconductor manufacturing” more appealing, since the applications are more concrete and if the singularity is about to happen then you don’t actually have to wait very many years paying interest on your debt.
But on the other hand, if AGI is right around the corner, surely there are tons of more-profitable things to do here on earth? Like try to invest in humanoid robots, or do normal AI investments in TSMC / NVDA, or etc?
Basically, if you are playing this game of trying to squat the opportunity to do far-future space expansion, then you wanna put yourself in the best possible position to be at the forefront of a grabby-aliens-style expansion into the solar system (from where, presumably, you can steamroll onwards to the galaxy).
But it’s unclear exactly what bundle of technologies / legal claims / industrial capacity / etc will actually be needed for this. (Will controlling a small town on Mars be relevant in any possible way if an ASI singularity occurs in 2050?? Probably not!)
And in particular I’d be very worried that I’d spend all this time going into debt trying to develop a clever portfolio of space-related industrial / technological capabilities, only to get instantly lapped by ASI right off the starting block. Such that maybe the only resource really worth scrambling for is simply “access to ASI”. (Plus possibly launch capacity itself, which is a big capital-intensive heavy-industry that seems less amenable to being lapped software-only-singularity style than something more intricate and design-intensive, like space-based manufacturing equipment or satellites.)
But most people are less ASI-pilled. So, probably we start launching colonization rockets to Mars (and funding lots of doomed little “datacenter in space” / “solar power in space” / “bitcoin in space” startups) anyways.
dunno! some speculation:
You do have to attach a pretty sizeable antenna to the top of your plane, plus whatever accompanying wiring is necessary… maybe maintenance capacity is the bottleneck? It’s a little hard to imagine that airlines are bottlenecked by this, since it seems pretty minor compared to other kinds of maintenance planes commonly undergo (like swapping out an engine)? But quotes from this site saying that some airline “hopes to have units installed in at least 25% of their aircraft by the end of 2025”, or that another “expects to ramp that number up to 40 installations per month” suggest that maybe this is the reason why airlines like United, Hawaiian, etc (which have started but not completed their rollouts) aren’t yet at 100%.
maybe Starlink has some kind of interconnection queue where they can only ramp up so many users at a time?? but I’d expect that stuff like airlines and cruise ships would be relatively high-paying customers at the front of the line, at least compared to ordinary consumers (who can currently order Starlink antennas online for next-day shipping).
probably the airlines themselves are not that motivated to instantly upgrade their fleets, since most people don’t choose flights based on who has the fastest wifi? in a similar way, other in-flight amenities—legroom, seat material, the quality of meals on international flights, how good the little screen for in-flight movies is, etc—are individually not super-important to people; most important is the flight route + flight timing + ticket price.
especially when you consider the fact that Starlink has a monopoly, and is probably charging airlines a profit-maximizing price, meaning that airlines which adopt the new service might not actually see any additional revenue on net even if they can charge slightly higher ticket prices once they have fast wifi. Other airlines are perhaps thinking they should wait until more satellite-internet constellations (like the aforementioned project Kuiper) get off the ground and prices come down?
maybe some budget airlines like Frontier or Ryanair calculate that most of their passengers are cheapskates who wouldn’t pay for fast wifi (either directly or through higher ticket prices)
it does kinda seem weird, though, that this list of airlines doing / considering Starlink upgrades doesn’t even contain some of the US’s biggest airlines, like Southwest, Delta, or American. I’d bet they’re maybe waiting for lower prices, but it’s always possible they’re just asleep at the wheel.
presumably because to improve airplane wifi, you’d need to launch dozens of rockets to build a massive new constellation of orbiting satellites, in order to deliver an order-of-magnitude improvement over Intelsat or whoever usually provides wifi connections to planes.
The good news is that SpaceX has done this, with their Starlink constellation! (Others like OneWeb and Amazon’s Project Kuiper, plus a couple of Chinese mega-constellations, are also doing similar stuff.) But not every airline / airplane has upgraded to new Starlink receivers yet. So, most planes (and cruise ships, and etc) still have slow Intelsat/Globalstar internet, but others have indeed seen huge upgrades in internet speeds.
But this would make it sound too much like AI-related philanthropy is all they do...
“Coefficient Giving sounds bad while OpenPhil sounded cool and snappy.”—OpenPhil just sounds better because it’s shorter. I imagine that instead of saying the full name, Coefficient Giving will soon acquire some similar sort of nickname—probably people will just say “Coefficient”, which sounds kinda cool IMO. I could also picture people writing “Coeff” as shorthand, although it would be weird to try and say “Coeff” out loud.
This is an inspiring post, so now I’d like to imitate some of the stuff you’ve done! I’d love it if you could post some pictures of what the installed RGB strips look like, in particular. I’m interested in setting up a bunch of smart-home lighting, and these light strips sound pretty cool, but it’s hard for me to picture how exactly these are mounted and how they look when installed.
Do you just, like, double-sided-tape the LED strip to the walls? Or use clips of some kind? How do you install a “diffuser” or “make them face the walls” to spread out the light? How does this all look once installed? It would be great to have amazon links to everything you use for mounting / diffusing / etc. I’d love a “here’s everything you’ll need” list like what’s included in several of the classic posts about “Lumenator” design.
How does it get connected up to power? Presumably you’re running these LED strips along the edges of the ceiling, right? So the wires probably come down in a corner of the room, where they meet the “24V 200W power supply adapter”, which in turn plugs into the wall? How many strips can you power from one power-supply adapter?
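For what it’s worth, here’s how I’d ballpark the strips-per-supply question myself; the 14 W/m draw and the 80% derating below are assumed example numbers, not the specs of your actual strips:

```python
# How many meters of LED strip can one power supply drive? (example numbers only)
supply_watts = 200          # the "24V 200W power supply adapter" mentioned in the post
strip_watts_per_meter = 14  # assumed draw at full brightness; check the strip's spec sheet
derating = 0.8              # common rule of thumb: don't run a supply at 100% continuously

max_meters = supply_watts * derating / strip_watts_per_meter
print(f"~{max_meters:.0f} m of strip per supply at full brightness")
# Long 24 V runs also suffer voltage drop, so people often inject power at both
# ends of a long run rather than daisy-chaining everything from one corner.
```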
Maybe there’s a particular video tutorial (this guy seems to have a whole channel about installing LED light strips, for example) that you used, which covers all these questions pretty well?
[insert joke about how publishing fresh JVN tokens will accelerate AGI timelines]
Another potential reason to sell sooner rather than later:
Right now, there are multiple frontier AI labs (OpenAI, Anthropic, Deepmind, plus frontier-ish groups like Meta, X, etc).
But in the future, one of these labs, or maybe even some other group (like SSI, DeepSeek, etc) might “run away with it”, developing a commanding lead after they stumble onto the next big algorithmic breakthrough / paradigm shift / etc.
So, you might want to sell your shares now while things are still in a state of uncertainty, rather than gamble that your company will stay in the lead (or nearly-in-the-lead) forever.
There might even be an important mission-hedging aspect here? ie, if you think Anthropic is the most responsible AI company, then in worlds where they become #1, your donations are less important since they’ll hopefully be trying hard to do the right thing already. Versus if Meta or Grok or a Chinese company surges ahead, you might end up wishing that you had spent more of your Anthropic money earlier to try and influence AI regulation / international agreements / etc!
I think you are missing out on a key second half to this story, which would make your motivational take at the end (“uh.. feel good about yourself for trying or something?”) a lot stronger:
When you go to a ski resort, or a gym, or etc, it’s not JUST that you only see the people who ski, work out, etc, while not seeing the 90% who don’t do that activity. You see people WEIGHTED by the AMOUNT OF TIME they spend doing that activity, which skews heavily towards the most intense practitioners.
For example, suppose your local gym has 21 patrons:
- 7 have lapsed in their actual workout habit; they never show up to the gym even though they keep getting auto-charged the monthly fee because they’ve forgotten to cancel their membership.
- 7 manage to keep up a healthy but not outstanding workout habit—they each manage to do a one-hour workout once a week
- 7 are total gym bros who get in a one-hour workout every single day, stacking those gainz
On a typical day, who visits the gym?
- zero of the lapsed members
- on average, just one of the once-a-week members (7 * 1⁄7 = 1)
- all seven of the hardcore gym rats
So, it’s not just that you never see the lapsed members (or the people who never signed up in the first place). It’s also that you get an extremely skewed view of who “goes to the gym”—visiting the gym and looking around makes it seem like the clientele is 87.5% hardcore gym rats, when the true proportion is actually just 50%. (Albeit that 87.5% of the “total time spent in the gym” is spent by gym rats.)
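Here’s the same toy gym model written out as a few lines of code, just to make the time-weighting explicit:

```python
# Selection effect at the gym: who you *see* vs. who the members *are*.
# (label, number of members, one-hour gym visits per week)
members = [("lapsed", 7, 0), ("weekly", 7, 1), ("daily", 7, 7)]

total_members = sum(n for _, n, _ in members)
active_members = sum(n for _, n, v in members if v > 0)
hours = {label: n * v for label, n, v in members}   # person-hours per week on the gym floor

print(f"Daily gym rats as a share of all members:    {7 / total_members:.0%}")
print(f"Daily gym rats as a share of active members: {7 / active_members:.0%}")
print(f"Daily gym rats' share of observed gym time:  {hours['daily'] / sum(hours.values()):.1%}")
# -> 33%, 50%, and 87.5%: the clientele you actually see skews hard toward
#    the most intense practitioners, exactly as described above.
```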
You even mention this in some of your anecdotes, like “most of the other riders have been out 90 days just this season”.
For me, this fact is heartening. For something like a gym or a ski resort (or the blog posts in your feed), comparing yourself to the people you see around you is actually setting a really high bar, since the people you see around you are weighted by the time they spend doing the activity (and/or by the number of posts they write). A gentler, more intermediate basis of comparison is to all the people who do “go skiing”, but don’t go every day—the huge shadow mass of people who ski a couple times a year, whose population is probably way higher than the “I have a cabin next to the resort and buy the season pass every winter” contingent, but who are in the minority every day on the mountain.
Interesting thought, Thomas. Although I agree with RussellThor that it seems like doing something along the lines of “just jitter the position of the RV using little retrofitted fins / airbrakes” might be enough to defeat your essentially “pre-positioned / stationary interceptors”. (Not literally stationary, but it is as if they are stationary given that they aren’t very maneuverable relative to the speed of the incoming RV, and targeted only based on projected RV trajectories calculated several minutes earlier.)
(Is the already-existing atmospheric turbulence already enough to make this plan problematic, even with zero retrofitting? The circular-error-probable of the most accurate ICBMs is around 100 meters; presumably the vast majority of this uncertainty is locked in during the initial launch into space. But if atmospheric drag during reentry is contributing even a couple of those meters of error, that could be a problem for “stationary interceptors”.)
Failing all else, I suppose an attacker could also go with Russell’s hilarious “nuke your way through the atmosphere” concept, although this does at least start to favor the defender (if you call it favorable to have hundreds of nukes go off in the air above your country, lol) insofar as the attacker is forced to expend some warheads just punching a hole through the missile defense—a kind of “reverse MIRV” effect.
Regardless, you still face the geography problem, where you have to cover the entire USA with Patriot missile batteries just to defend against a single ICBM (which can choose to aim anywhere).
I would also worry that “in the limit of perfect sensing” elides the fact that you don’t JUST have to worry about getting such good sensing that you can pin down an RV’s trajectory to within, like, less than a meter? (In order to place a completely dumb interceptor EXACTLY in the RV’s path. Or maybe a few tens of meters, if you’re able to put some sensors onto your cheap interceptor without raising the price too much, and make use of what little maneuverability you have versus the RV?) You ALSO have to worry about distinguishing real warheads from fake decoys, right? Sorting out the decoys from the warheads might be even harder than exactly pinning down an RV’s trajectory. According to a random redditor, apparently today’s decoys are “inflatable balloons for exoatmospheric use and small darts for in-atmosphere”, plus “radio jammers, chaff, and other things designed to confuse enemy detection.” With better and better sensing, maybe you could force an attacker to up their decoy game, retrofitting their missiles to use fewer, more lifelike decoys, maybe even to such an extreme extent that it’s no longer really worth using decoys at all, compared to just putting more MIRV warheads on each missile? But if decoys still work, then you need that many more interceptors.
“In the limit of perfect sensors” (and also perfectly affordable sensors), with perfect interception (against non-retrofitted, non-HGV missiles) and perfect decoy discrimination, I suppose it becomes a defense-economics balance where you are hoping that the cost of lots of small rockets is cheaper than the cost of the attacking ICBM system. These small rockets don’t have to be super fast, don’t have to go super high, and don’t have to be super maneuverable. But they do have to be precise enough to maneuver to an exact precalculated location, and you need enough of them to blanket essentially your entire country (or at least all the important cities). You are basically relying on the fact that the ICBM has to be super big and heavy to launch a significant payload all the way around the earth, while the numerous small missiles only have to fly a couple of kilometers into the sky.
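To make that balance concrete, here’s a toy cost-exchange sketch; every number in it is a made-up placeholder (real interceptor and ICBM costs are contested and partly classified), so it only illustrates the structure of the tradeoff:

```python
# Toy cost-exchange model for "swarm of cheap interceptors" vs. MIRVed ICBMs.
# All numbers are illustrative placeholders, not real cost estimates.
icbm_cost = 100e6            # cost of one ICBM (placeholder)
warheads_per_icbm = 8        # MIRV count (placeholder)
decoys_per_warhead = 3       # credible decoys the defender can't discriminate (placeholder)
interceptors_per_target = 2  # shots fired at each credible object (placeholder)
interceptor_cost = 1e6       # cost of one cheap interceptor (placeholder)

objects_to_engage = warheads_per_icbm * (1 + decoys_per_warhead)
defense_cost = objects_to_engage * interceptors_per_target * interceptor_cost
attack_cost = icbm_cost

print(f"Defense spends ${defense_cost/1e6:.0f}M to counter a ${attack_cost/1e6:.0f}M missile")
print(f"Cost-exchange ratio (defense/attack): {defense_cost/attack_cost:.2f}")
# Better decoy discrimination shrinks `objects_to_engage`; more MIRVs per missile
# shrinks the attacker's cost per warhead -- which is the whole tug-of-war above.
```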
Final, dumbest thought saved for last:
Aside from developing HGVs, couldn’t the ICBMs in theory overcome this defense with brute scale, by MIRV-ing to an absurd degree? How many warheads can dance on the head of a Starship? Could you just put the entire US nuclear arsenal on a single launcher? The cost of your ICBM would essentially be zero when amortized over all those warheads, so the defense economics battle just becomes the cost of warheads vs patriots, instead of entire ICBMs vs patriots. Obviously there are many reasons why this idea is nuts:
I’m not sure how far apart different MIRVs can land, so this technique might be limited to attacking individual cities / clusters of silos.
Of course if you put your entire arsenal on one launcher, then your enemy will immediately forget about interceptors and spend all their efforts trying to sabotage your launcher.
Going for a strategy of “we can mass-manufacture warheads cheaper than you can possibly intercept them” would quickly lead to absurd, overkill numbers of warheads that would dwarf even the most insane years of the Cold War, practically guaranteeing that any actual nuclear war would end in the classic “nobody wins” scenario of global devastation (nuclear winter, etc).
But I thought it was kind of funny to think about, and this absurd thought experiment maybe sheds some light on the underlying dynamics of the situation.
IUCN numbers are a decent starting point, although as Shankar notes, the IUCN is too conservative in the sense that they wait a long time before finally conceding a species has gone fully extinct.
For megafauna extinctions during the Ice Age, wikipedia tallies up 168 lost species. (On the one hand, maybe not literally all of these were due to humans. But on the other hand, our record of creatures living 12,000 years ago is probably pretty spotty compared to today, so we might be missing a bunch!) https://en.m.wikipedia.org/wiki/Late_Pleistocene_extinctions
Another difficult aspect of trying to estimate “how many species have gone extinct” is that we have lots of detailed information about mammals and birds, okay info about reptiles, fish, etc, but then MUCH spottier information about insects, stuff like plankton or bacteria, etc. And while there are about 6000 mammal and 11,000 bird species worldwide, there are maybe somewhere around a million species of insect, and who knows how many types of bacteria / plankton / whatever.
https://ourworldindata.org/how-many-species-are-there
https://ourworldindata.org/grapher/share-of-species-evaluated-iucn
(For instance, the IUCN data on actual confirmed extinctions has a pretty different ranking than their list of at-risk endangered species.)
So for mammals + birds and a few other groups, you can get a pretty definitive estimate of “what percentage of species have gone extinct in the last few hundred years”. But if you are asking about ALL species, then your answer almost entirely depends on how you choose to extrapolate the well-documented mammal/bird rates to much larger groups like insects, plants, fungi, and bacteria (and indeed, weird activists will extrapolate in stupid, biased ways all the time). It’s not straightforward to figure out how to do this right, since different kinds of life have different rates of extinction—plants, for instance, seem to almost never go extinct compared to animals. Birds seem more robust than mammals to the effects of human civilization (since they can fly, they’re less immediately-screwed than land-dwelling creatures, when humans break up previously homogenous environments into isolated patches of habitat separated by roads, fences, farmland, etc). Meanwhile, amphibians’ absorbent skin and pretty specialized habitat needs make them especially vulnerable to extinction via pollution and habitat disruption. Then there are creatures like hard corals where it’s like “if the ocean becomes X amount more acidic then they basically all die at once”. So… are insects resilient like plants, or extra vulnerable like amphibians or corals? Personally I have no idea. Are thousands of bacteria species going extinct all the time for weird chemical or micro-ecological reasons that we barely understand? Or would they just cruise right through even a worst-case meteor strike while hardly batting an eyelid? Hard to say; too bad they make up maybe 90% of all species!
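To illustrate how much the headline number hinges on that extrapolation choice, here’s a toy calculation; the mammal/bird species counts are the rough ones mentioned above, while the ~1.5 million “everything else” count, the ~1.5% documented extinction rate, and the alternative rates for other groups are all loose placeholder assumptions:

```python
# How much does "what % of species have gone extinct?" depend on how we
# extrapolate to poorly-surveyed groups? Toy numbers for illustration only.
well_documented = {"mammals": 6_000, "birds": 11_000}
documented_extinction_rate = 0.015     # ~1.5% since 1500, roughly the mammal/bird figure
everything_else = 1_500_000            # insects, fungi, microbes, ...; hugely uncertain

for label, other_rate in [("as resilient as plants", 0.001),
                          ("same as mammals/birds", 0.015),
                          ("as vulnerable as amphibians", 0.04)]:
    extinct = (sum(well_documented.values()) * documented_extinction_rate
               + everything_else * other_rate)
    total = sum(well_documented.values()) + everything_else
    print(f"If the other ~1.5M species are {label:<28}: ~{extinct/total:.1%} of all species extinct")
# The headline figure is driven almost entirely by the assumed rate for the
# groups we know least about, which is exactly the problem described above.
```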
You could try to construct some weighting scheme to reflect the intuition that obviously losing a species of tiger or elephant is worse than losing one of 10,000 small indistinguishable beetles. Aside from the difficulty of operationalizing “how much do i care about each species” (physical size? squared neuron count? upweight mammals vs otherwise-equal birds because they’re more closely related to ourselves? or should birds get extra points for being colorful and pretty?), such a project also runs into a lot of interesting questions about phylogenetics and evolutionary distinctiveness. (Two “different” “species” that actually only differ by a few mutations, IMO is unfair double-counting; they’re basically just one species and shouldn’t be entitled to double the conservation effort. Meanwhile, very unique species that have been evolving along their own unusual track for millions of years seem like they should get credit for punching above their weight in terms of maintaining earth’s diversity of life. Some species also contain within themselves lots of interesting variation and distinct sub-populations, while other species are more of a homogenous monoculture. et cetera.)
The above info is all a rough and probably misunderstood paraphrase of stuff my wife has told me over the years; she runs the “Ecoresilience Initiative”, a nascent sort of EA-for-biodiversity research group. Email her there if you want to learn more!
https://ecoresilience.weebly.com/
My thoughts about the story, perhaps interesting to any future reader trying to decipher the “mysterianism” here, a la these two analyses of sci-fi short stories, or my own attempts at exegeses of videogames like Braid or The Witness. Consider the following also as a token of thanks for all the enjoyment I’ve received from reading Gwern’s various analyses over the years.
“Clarity didn’t work, trying mysterianism” is the title of a short story by Scott Alexander, which feels to me like an analogy that is arguably about AI, at least insofar as it is clearly about optimization and its dangers.
But that story of Scott Alexander’s isn’t actually very mysterious! It’s kind of a sci-fi / horror story (albeit in a fantasy setting) where it’s pretty clear what’s going on. Conversely, this story I can hardly make heads or tails of. It feels like there should be some kind of AI connection (what else is there to be mysterian about? surely not just wikipedia editing...), but if it’s there, I cannot find it.
As other commenters mention, the story’s title is a reference to a 1966 sci-fi novel, which I sadly haven’t read.
This (and to a lesser extent the “Eternal September” joke) seems to pin down the date, meaning that there probably can’t be much significance to the choice of September 30th in particular for this story—the date is probably already “explained away” by Gwern presumably wanting to title the story the same as the 1966 novel. By contrast, the selection of year, 1939, remains more of a free parameter that we need to explain.
One clear theme of the story (although one I myself am not much interested in) is a bundle of stuff related to the idea of curation, of writing and editing, summarization and commentary, et cetera.
The frame story where we’re reading a translation of a review of a… book? I’m not sure exactly… by M. Trente explaining the work of an Institute founded to archive materials related to the Thirtieth.
The story feels extremely Borgesian, recalling the impossible literary conceits of “Tlön, Uqbar, Orbis Tertius”, “The Library of Babel”, and “Pierre Menard, Author of the Quixote”.
The bits about digitization, Xerox, the internet, seem like winking asides to the fact that this story exists somewhat anachronistically, and perhaps more closely reflects a wikipedia-adjacent form of online scholar / nerd culture. After all, what else but wikipedia would literally have a page devoted to all the events of September 30th? Where else but wikipedia would I go to learn about the various events being referenced in the story (the WW2 events, the first televised football game, the birth of Jean-Marie Lehn, and so on.)
There are many amusing linguistic jokes, such as the parallelism between the history of the Institute and the history of the two world wars, where the “second war in the world of the Institute” is WW2-themed followed by a cold war of sorts (”...after which the conflict chilled”) -- this in particular struck me as very similar to Gwern’s meditations on second life sentences. Or the bit about how the institute hopes that “understanding how the past understood the future can help understand the future of understanding the past”.
The loving list of obscure real-world institutes that bear some resemblance to the Institute of the story: the Museum of Jurassic Technology, the “Pith” (Pitt?) Rivers Museum, Gravity Research Foundation. (I’m not sure if the Labyrinth Sodality, Iberian Sundial Society, and the Rekord-Club Gründerzeit are also oblique references to real-world organizations, or are perhaps references to fiction.)
The bit about “hapax ephemeron”.
Perhaps what I like most about this theme-cluster is not the absurd scholarly self-reference and bizarre concept of taking as an object of study the cross-section of a single day, nor the paradoxical idea of spending so many days focused on the study of just one day (a la In The Future, Everyone Will Be Famous to Fifteen People), but rather the more familiar impossibility of trying to somehow recapture the subjective nature of experience, the ephemeral freshness and vividness that once belonged to all the moments making up one fall day in 1939, using only the tools of writing and historiography and so forth.
That the story ends, after so much obscure academic writing and abstract discussion, with a vividly physical description of a baseball game—“spitting chaw in the dust”—“cold, crisp, and fair, a touch of looming winter”—“the sun over the roofline half past five”—speaks to this theme, IMO. The impossibility of trying to conjure back up the vivid freshness of experience via thought and study and analysis, the contrast between the immediacy / immanence of the present versus the mind’s tendency to always zip about with innumerable little thoughts of the past and future.
See also the preceding section which starts “The skeptical reader will soon be forced to agree”.
Note also that the gravitational arc of the baseball that ends the story—A slow lazy arc, every eye silently following it spell-gaped, past the sun, back down, toward no one but me as I casually hold up my glove for the ball, suddenly knowing it had been pitched to me by God Himself, landing with just the slightest thwack… -- is another analogy for the ephemeral passing of a single day. It’s fast-moving, it lasts only a few seconds, yet the ball is described as if suspended. And that moment is indeed suspended, as an image in the memory of the (multiple layers of fiction and frame-story quotation deep) anonymous memoirist of 1969 (coincidentally the year of the institute’s founding).
Rosier was too young in 1939 to have been the batter in that game. But he lived in Chicago at that time—perhaps he was at the game and witnessed that moment, and that crystalline memory is what spurred him to found the Institute?
But is there a yet deeper theme to the story? Or (like my disappointing experience with Unsong), is it for the most part just puns and references all the way down (plus of course the stuff I’ve described above)?
Revisit Gwern’s analysis of “Suzanne Delage” for how deep the rabbit hole of obscured short-story themes might potentially go. (If that’s the case here, it is much further than I will be able to plumb.)
First, some dangling loose ends:
Why is the story ostensibly translated from French, about an institute in New Orleans, with assorted joking references to “gallophobia” and so forth?
What’s going on with the “dream diary results, showing anomalous ‘ψ’ predictions of October 1st”?
What’s up with the adventurous biography of Vincent Rosier? What kind of impression am I supposed to take from all the wacky details? Reproducing below:
Birth in rural Ireland, emigration to the City of Wind, Chicago, hardscrabble childhood running with youth gangs,
the precocious admission to an anthropology major followed by a contretemps while kayaking Papua New Guinea,
on behalf of the OSS, which connections proved useful in post-war import-export ventures (making, and losing, his first fortune when his curious ‘Rosier Cube’ oranges were popular, and then unpopular, in the West).
(This bit about making and losing tons of money running an import-export business selling goods during WW2 is also a plot point in Chapter 24 of the novel Catch-22, IIRC? Rosier Cube oranges are presumably akin to Japan’s “square melons”.)
He was even trusted with back-channel Soviet negotiations over the (still-classified) Mukbang Incident,
(seems like a weird reference to a modern-day phenomenon in a story ostensibly set in 1990?)
followed by restless expansion into unrelated ventures including his publishing & chemical supply conglomerate (best known for its deodorant),
no idea what this is about
shrimp fishing (vertically integrated restaurant chain),
is this a reference to the movie Forrest Gump, the actual vertically-integrated seafood company Red Lobster, or something about EA’s interest in shrimp welfare?
legendary commodities trades (famously breaking the Newcastle union through locally-delivered coal futures),
a reference to this perhaps-apocryphal story about 28000 tons of mis-delivered coal.
and an ill-fated Texas ostrich farm investment (now a minor tourist attraction).
part of an actual ’90s trend
Eventually he withdrew into Californian venture capital, a seersucker-clad gray eminence among Silicon Valley upstarts, where he amassed the Greco-Roman art collection he is chiefly remembered for.
Is there some kind of AI or early-internet connection here? Are the statues some kind of reference to right-wing “statue in profile pic” culture?? Probably not… more likely it’s a reference to the Getty museum in California?
More excerpts:
If Trente’s exponential bibliometric projections are correct, by “152 AE”, no publication on the 20th century will fail to mention the 30th.
The idea of “exponential bibliometric projections” in this context is obviously absurd, and is funny. The triumphalism of this section certainly feels like a reference to the world-transformation project of “Tlön, Uqbar, Orbis Tertius”, as one Hacker News commenter notes. But this (along with the idea that “a Palo Alto recluse has changed the earth”—many such cases!) also feels like the most direct reference (if indeed it is one) to the potential AI theme suggested by the initial cross-post’s “Clarity didn’t work, trying mysterianism” framing, insofar as it evokes/spoofs exponential projections by the likes of Moravec, Vinge, and Kurzweil.
But note that 152 AE is the year 2091, which strikes me as a little late for scaling-pilled AGI timelines.
It makes little difference to us, as we go on revising, in our quiet countryside retirement, an encyclopedia of Casares we shall never publish.
Beyond the Borges reference, if we are trying to force an AI reading, this perhaps sounds like Gwern’s description of his own life (writing mysterian short stories, etc) amidst a world so rapidly changing that our present era is perhaps comparable to none other than late September, 1939.
But perhaps more importantly:
Why is the Institute’s motto “Lux in Tenebris Diei Unius”? Google’s AI translates as “Light in the Darkness of a Single Day”, and comments “It’s a variation of the more common phrase ‘Lux in tenebris lucet,’ which is particularly significant in Christian theology. ”
Why 1939?
Why was the institute founded? This question is never answered by the story. Are the “military historian” or “pacifist” factions closer to the truth?
If Rosier was born in 1927 then he would’ve been 12 years old in 1939, which seems too early for him to have been deeply affected by WW2 (especially considering he was living in Chicago, not anywhere in Europe).
But, “Pacifist” arguments aside, the choice of year is a free parameter in Gwern’s story and obviously significant in connection with World War 2. So, I am inclined to think that the Pacifists are wrong, and the focus of the Institute (and Gwern’s story) is somehow intimately tied up with WW2 in particular.
Other than the fact that the Thirtieth has an Institute and today does not, is there some further sense in which today is less “real” than Sept 30, 1939? The story says “Here, at the end of history, mankind has been disillusioned of ideology and symmetry, and unable to look forward, looks back.” Today, as the story argues (echoing Zarathustra and Fukuyama), “is no-one and no-where and no-when; and men cannot live in a utopia.”
1939 is thus perhaps more sincere (with its many zealous true believers in ideologies like Communism, Fascism, and Democracy), less ironic or cynical or world-weary, and for all those reasons perhaps more real than today.
But they are also of course more naive, ignorant, and fundamentally confused (re: those same zealous true believers in Communism and Fascism, the eagerness to launch into patriotic wars, et cetera).
Perhaps this is the essential relationship that the past always has to the future? Today, 2025, as we stand potentially on the cusp of transformative change driven by AI, and also seemingly in a world of steadily increasing international tensions and war (Russia/Ukraine, then Israel/Gaza/Iran, India/Pakistan, perhaps soon China/Taiwan?), will probably seem similarly confused and naive and in-denial-about-what’s-obviously-coming from the standpoint of 2076, as 1939 seemed from 1990.
And finally: why is October the First “too late”?
Insofar as it’s connected to the 1966 novel: Just skimming the wikipedia page, it’s about an Earth that’s been jumbled up such that different regions are in different eras (eg, Mexico is in the far future, while Greece is in classical times, and France is in 1917). This, combined with the theme of World War 2, puts me in mind of alternate-history scenarios. (But maybe it is just meant to point me towards the jumbled-up nature of extensively studying the Thirtieth in the modern day.)
Insofar as it isn’t just a reference to the 1966 novel, then we must ask ourselves: October 1st 1939 is too late, for what?
Well, obviously it’s too late to prevent things that happened on or just before Sept 30th and Oct 1st, namely the surrender of Warsaw to Nazi forces (on the 28th), the capture of Modlin Fortress (on the 29th), Hitler’s partition of Poland and the establishment of a Polish government-in-exile (on the 30th), the entry of the Nazis into Warsaw (on Oct 1, starting a period of German occupation of the city until 1945), and this Oct 1 speech by Churchill further committing England to the war and calling up all men aged 20-22 for conscription.
So, by Sept 30th it’s obviously already too late to prevent WW2, since it’s already happening. But perhaps it’s not too late to prevent or greatly mitigate the Holocaust (about half of the Jews who died were Poles), whereas Oct 1st somehow would be?
Or conversely, perhaps on Sept 30 it’s not too late to hope for a mitigated World War, smaller in scope than the actual World War we got? (ie, perhaps Germany invades Poland and France, but Britain looks the other way and decides to make a “separate peace” with Germany, Hitler never makes the mistake of invading the Soviet Union, and America never gets involved?) Here I’m just going off Churchill’s Oct 1 speech.
Both of these scenarios seem pretty ridiculous—the idea that on Sept 30 the Holocaust or the severity of WW2 could still have been prevented, but October the first is too late, is absurd. Surely there’s just not that much variance from day to day!
If we wanted to think more rigorously, we might say that the chances of the Holocaust happening vs not happening (or perhaps this should be graded on a sliding scale of total deaths, rather than being considered as a binary event) are initially pretty low—if you’re re-rolling history starting from the year 1900, perhaps something like the Holocaust would only happen 1 in 100 times, or 1% odds? But of course, if you re-roll history starting in June 1941, then chances are almost 100%, since it’s basically already happening. What about an intermediate year, like 1935? It must have had some intermediate probability. So, looking back on history, you can imagine this phantom probability rising and falling each year—indeed, each day—almost like a stock price or prediction-market price. Some days (like various days on the timeline of Hitler’s rise to power) the odds would rise dramatically, other days the odds would fall. What was the single most important day, when the probability changed the most? Is that necessarily the same day as the “hingiest” day, the most influential day for that outcome? (I don’t think it necessarily is? But I’m confused about this.)
And for a more emotional, subjective judgement—at what point does one say that an event could still be prevented? At 50⁄50, it could go either way. At 80⁄20, the situation is looking grim, but there is still hope. At 95⁄5, the outcome is nearly certain—but it’s not crazy to nevertheless hope that things will change, since after all outcomes with 5% probability do occur 5% of the time. What about 99/1? At 99.99/0.01, technically you are still somewhere on the logistic success curve, but from a psychological perspective surely one must round this off to zero, and say that all hope is lost, the outcome is virtually certain. Perhaps in the world of the story, Sept 30 1939 is the moment where the last significant shred of hope disappears, and the probability (either of the Holocaust, or of a smaller-in-scope WW2, or whatever) tragically jumps from something like 98% to 99.95%.
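To make this “phantom probability” picture a bit more concrete, here is a minimal toy simulation in Python. Everything here is invented for illustration (the model, the number of days, the threshold); it is emphatically not a claim about actual WW2 odds. The outcome is modeled as the running sum of many small daily “news” shocks crossing a threshold, and each day we recompute the probability of the outcome given the news so far, so the largest single-day swing falls out of the path automatically:

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def probability_path(n_days=1000, threshold=40.0, seed=0):
    """Toy 'phantom probability': the outcome occurs iff the running sum of
    daily shocks ends above `threshold`. Each day, recompute
    P(outcome | news so far). Purely illustrative numbers."""
    random.seed(seed)
    total = 0.0
    path = []
    for day in range(n_days):
        total += random.gauss(0, 1)          # today's "news"
        remaining = n_days - (day + 1)       # days of news still to come
        if remaining == 0:
            p = 1.0 if total > threshold else 0.0
        else:
            p = 1 - phi((threshold - total) / math.sqrt(remaining))
        path.append(p)
    return path

path = probability_path()
moves = [abs(b - a) for a, b in zip(path, path[1:])]
biggest_day = moves.index(max(moves)) + 1
print(f"Largest single-day probability swing: {max(moves):.3f} (on day {biggest_day})")
print(f"Final probability: {path[-1]:.0%}")
```

Whether the day with the largest swing is also the “hingiest” day (the day on which an intervention would have mattered most) is a separate question, and this toy model doesn’t settle it.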
All of the above is perhaps true, and yet the absurdity of trying to pin things down to the individual day remains. The individual days would surely look pretty insignificant even if we knew the probabilities, and of course we don’t know the probabilities. And for any prospective rather than retrospective scenario (like “will there be a nuclear war in the next 50 years”), we don’t even really know where to look, what sub-questions to ask, et cetera.
This gets back to my idea that “Today, 2025, as we stand potentially on the cusp of transformative change driven by AI, and also seemingly in a world of steadily increasing international tensions and war, will probably seem similarly confused and naive and in-denial-about-what’s-obviously-coming from the standpoint of 2076, as 1939 seemed from 1990.”
Maybe what really makes Oct 1 1939 “too late” isn’t that the objective probabilities (in some omniscient god’s-eye-view) mean it’s too late to avert some actual event, but rather that Sept 30, 1939 was the last possible date one could be in denial about it—that one could still /hope/ for the war to be short and largely harmless, rather than vast and world-wrecking. By Oct 1, the terrible truth would be all too clear. Of course in the real world, many would continue to hope against all hope, but perhaps Oct 1 was in some sense when it was finally over in the minds of all informed observers—there is going to be a second world war.
So why would you establish an Institute to study that exact moment in time? Surely because you worry that the present day is in a similar situation—a situation where we don’t know what’s coming, even though we SHOULD know what’s coming. In retrospect it will seem so obvious, and we will seem to have our heads in the sand, blind men grasping the elephant, et cetera. “Lux in Tenebris Diei Unius”—light out of the darkness of a single day. Maybe if we study September 1939, Rosier is saying, we can learn how to avoid their failure, and see the threat that’s coming to us from a direction we don’t presently even know how to look in, and thereby achieve some kind of brighter future.
I think that jumping straight to big-picture ideological questions is a mistake. But for what it’s worth, I tried to tally up some pros and cons of “ideologically who is more likely to implement a post-scarcity socialist utopia” here; I think it’s more of a mixed bag than many assume.
I think when it comes to the question of “who’s more likely to use AGI to build fully-automated luxury communism”, there are actually a lot of competing considerations on both sides, and it’s not nearly as clear as you make it out to be.
Xi Jinping, the leader of the CCP, seems like kind of a mixed bag:
On the one hand, I agree with you that Xi does seem to be a true believer in some elements of the core socialist dream of equality, common dignity for everyone, and improved lives for ordinary people. Hence his “Common Prosperity” campaign to reduce inequality, anti-corruption drives, bragging (in an exaggerated but still-commendable way) about having eliminated extreme poverty, etc. Having a fundamentally humanist outlook and not being an obvious psychopath / destructive idiot / etc is of course very important, and always reflects well on people who meet that description.
On the other hand, as others have mentioned, the intense repression of Hong Kong, Tibet, and most of all Xinjiang, does not bode super well if we are thinking “who seems like a benevolent guy in which to entrust the future of human civilization”. In terms of scale and intensity, the extent of the anti-Uyghur police state in Xinjiang seems beyond anything that the USA has done to their own citizens.
More broadly, China generally seems to have less respect for individual freedoms, and instead positions itself as governing for the benefit of the majority. (Much harsher covid lockdowns are an example of this, as are their reduced freedom of speech, fewer regulations protecting the environment or private property, etc. Arguably benefits have included things like a faster pace of development, fewer covid deaths, etc.) This effect could cut both ways—respect for individual freedoms is pretty important, but governing for the benefit of the majority is by definition gonna benefit most ordinary people if you do it well.
Your comment kind of assumes that China = socialist and socialism = more willingness to “redistribute resources in an equitable manner”. But Xi has taken pains to explain that he is very opposed to what he calls “welfarism”—in his view, socialism doesn’t involve China handing out subsidized healthcare, retirement benefits, etc to a “lazy” population, like we do here in the decadent West. This attitude might change in the future if AGI generates tons of wealth (right now they are probably afraid that Chinese versions of social security and medicare might blow a hole in the government budget, just like they are currently blowing a hole in the US budget)...
...But it also might not! Xi generally seems weirdly unconcerned with the day-to-day suffering of his people, not just in a “human rights abuses against minorities” sense, but also in the sense that he is always banning “decadent” forms of entertainment like videogames, boy bands, etc, telling young people to suck it up and “eat bitterness” because hardship builds character, etc.
China has been very reluctant to do western-style consumer stimulus to revive their economy during recessions—instead of helping consumers afford more household goods and luxuries, Xi usually wants to stimulate the economy by investing in instruments of national military/industrial might, subsidising strategic areas like nuclear power, aerospace, quantum computing, etc.
Meanwhile on the American side, I’d probably agree with you that the morality of America’s current national leaders strikes me as… leaving much to be desired, to put it lightly. Personally, I would give Trump maybe only 1 or 1.5 points out of three on my earlier criteria of “fundamentally humanist outlook + not a psychopath + not a destructive idiot”.
But America has much more rule of law and more checks-and-balances than China (even as Trump is trying to degrade those things), so the future of AGI would perhaps not rest solely in the hands of the one guy at the top.
And also, more importantly IMO, America is a democracy, which means a lot can change every four years and the population will have more of an ability to give feedback to the government during the early years of AGI takeoff.
In particular, beyond just swapping out the current political party for leaders from the other political party, I think that if ordinary people’s economic position changed very dramatically due to the introduction of AGI, American politics would probably also shift very rapidly. Under those conditions, it actually seems pretty plausible that America could switch ideologies to some kind of Georgist / socialist UBI-state that just pays lip service to the idea of capitalism—kinda like how China after Mao switched to a much more capitalistic system that just pays lip service (“socialism with Chinese characteristics”) to many of the badly failed policies of Maoism. So I think the odds of “the US stays staunchly capitalist” are lower than the odds of “China stays staunchly whatever-it-is-currently”, just because America will get a couple of opportunities to radically change direction between now and whenever the long-term future of civilization gets locked in, whereas China might not.
In contrast to our current national leaders, some of the leaders of top US AI labs strike me as having pretty awesome politics, honestly. Sam Altman, despite his numerous other flaws, is a Georgist and a longtime supporter of UBI, and explicitly wants to use AI to achieve a kind of socialist utopia. Dario Amodei’s vision for the future of AI is similarly utopian and benevolent, going into great detail about how he hopes AI will help cure or prevent most illness (including mental illness), help people be the best versions of themselves, assist the economic development of poor countries, help solve international coordination problems to lead to greater peace on earth, etc. Demis Hassabis hasn’t said as much (as far as I’m aware), but his team has the best track record of using AI to create real-world altruistic benefits for scientific and medical progress, such as by creating AlphaFold 3. Maybe this is all mere posturing from cynical billionaires. But if so, the posturing is quite detailed and nuanced, indicating that they’ve thought seriously about these views for a long time. By contrast, there is nothing like this coming out of DeepSeek (which is literally a Wall Street-style hedge fund combined with an AI lab!) or other Chinese AI labs.
Finally, I would note that you are basically raising concerns about humanity’s “gradual disempowerment” through misaligned economic and political processes, AI concentration-of-power risks where a small cadre of capricious national leaders and insiders gets to decide the fate of humanity, etc. Per my other comment in this thread, these types of AI safety concerns currently seem to be discussed almost exclusively in the West, and not in China. (This particular gradual-disempowerment stuff seems even MORE lopsided in favor of the West, even compared to superintelligence / existential risk concerns in general, which are already more lopsided in favor of the West than the entire category of AI safety overall.) So… maybe give some weight to the idea that if you are worried about a big problem, the problem might be more likely to get solved in the country where people are talking about the problem!
@Tomás B. There is also vastly less of an “AI safety community” in China—probably much less AI safety research in general, and much less of it, in percentage terms, is aimed at thinking ahead about superintelligent AI. (ie, more of China’s “AI safety research” is probably focused on things like reducing LLM hallucinations, making sure it doesn’t make politically incorrect statements, etc.)
Where are the Chinese equivalents of the American and British AISI government departments? Organizations like METR, Epoch, Forethought, MIRI, et cetera?
Who are some notable Chinese intellectuals / academics / scientists (along the lines of Yoshua Bengio or Geoffrey Hinton) who have made any public statements about the danger of potential AI x-risks?
Have any Chinese labs published “responsible scaling plans” or tiers of “AI Safety Levels” as detailed as those from OpenAI, DeepMind, or Anthropic? Or discussed how they’re planning to approach the challenge of aligning superintelligence?
Have workers at any Chinese AI lab resigned in protest of poor AI safety policies (like the various people who’ve left OpenAI over the years), or resisted the militarization of AI technology (like Googlers protesting Project Maven, or Microsoft employees protesting the IVAS HMD program)?
When people ask this question about the relative value of “US” vs “Chinese” AI, they often go straight for big-picture political questions about whether the leadership of China or the US is more morally righteous, less likely to abuse human rights, et cetera. Personally, in these debates, I do tend to favor the USA, although certainly both the US and China have many deep and extremely troubling flaws—both seem very far from the kind of responsible, competent, benevolent entity to whom I would like to entrust humanity’s future.
But before we even get to that question of “What would national leaders do with an aligned superintelligence, if they had one,” we must answer the question “Do this nation’s AI labs seem likely to produce an aligned superintelligence?” Again, the USA leaves a lot to be desired here. But oftentimes China seems to not even be thinking about the problem. This is a huge issue from both a technical perspective (if you don’t have any kind of plan for how you’re going to align superintelligence, perhaps you are less likely to align superintelligence), AND from a governance perspective (if policymakers just think of AI as a tool for boosting economic / military progress and haven’t thought about the many unique implications of superintelligence, then they will probably make worse decisions during an extremely important period in history).
Now, indeed—has Trump thought about superintelligence? Obviously not—just trying to understand intelligent humans must be difficult for him. But the USA in general seems much more full of people who “take AI seriously” in one way or another—Silicon Valley CEOs, Pentagon advisers, billionaire philanthropists, et cetera. Even in today’s embarrassing administration, there are very high-ranking people (like Elon Musk and J. D. Vance) who seem at least aware of the transformative potential of AI. China’s government is more opaque, so maybe they’re thinking about this stuff too. But all public evidence suggests to me that they’re kinda just blindly racing forward, trying to match and surpass the West on capabilities, without giving much thought as to where this technology might ultimately go.
Thanks for this informative review! (May I suggest that The Witness is a much better candidate for “this generation’s Myst”!)
That’s a good point—the kind of idealized personal life coach / advisor Dario describes in his post “Machines of Loving Grace” is definitely in a sense a personality upgrade over Claude 3.7. But I feel like when you think about it more closely, most of the improvements from Claude to ideal-AI-life-coach are coming from non-personality improvements, like:
having a TON of context about my personal life, interests, all my ongoing projects and relationships, etc
having more intelligence (including reasoning ability, but also fuzzier skills like social / psychological modeling) to bring to bear on brainstorming solutions to problems, or identifying the root cause of various issues, etc. (does the idea of “superpersuasion” load more heavily on superintelligence, or on “superpersonality”? seems like a bit of both; IMO you would at least need considerable intelligence even if it’s somehow mostly tied to personality)
even the gains that I’d definitely count as personality improvements might not all come primarily from more-tasteful RLHF creating a single, ideal super-personality (like what Claude currently aims for). Instead, an ideal AI advisor product would probably be able to identify the best way of working with a given patient/customer, and tailor its personality to work well with that particular individual. RLHF as practiced today can do this to a limited extent (ie, Claude can do things like sense whether a formal vs informal style of reply would be more appropriate, given the context), but I feel like new methods beyond centralized RLHF might be needed to fully customize an AI’s personality to each individual.
Semi-related: if I’m reading OpenAI’s recent post “How we think about safety and alignment” correctly, they seem to announce that they’re planning on implementing some kind of AI Control agenda. Under the heading “iterative development” in the section “Our Core Principles” they say:
In the future, we may see scenarios where the model risks become unacceptable even relative to benefits. We’ll work hard to figure out how to mitigate those risks so that the benefits of the model can be realized. Along the way, we’ll likely test them in secure, controlled settings. We may deploy into constrained environments, limit to trusted users, or release tools, systems, or technologies developed by the AI rather than the AI itself.
Given the surrounding context in the original post, I think most people would read those sentences as saying something like: “In the future, we might develop AI with a lot of misuse risk, ie AI that can generate compelling propaganda or create cyberattacks. So we reserve the right to restrict how we deploy our models (ie giving the biology tool only to cancer researchers, not to everyone on the internet).”
But as written, I think OpenAI intends the sentences above to ALSO cover AI control scenarios like: “In the future, we might develop misaligned AIs that are actively scheming against us. If that happens, we reserve the right to continue to use those models internally, even though we know they’re misaligned, while using AI control techniques (‘deploy into constrained environments, limit to trusted users’, etc) to try and get useful superalignment work out of them anyways.”
I don’t have a take on the pros/cons of a control agenda, but I haven’t seen anyone else note this seeming policy statement of OpenAI’s, so I figured I’d write it up.
The 100-150 ton numbers that SpaceX has offered over the years are always referring to the fully-reusable version launching to LEO. I believe even Falcon 9 (though not Falcon Heavy) has essentially stopped offering expendable flights; the vision for Starship is for it to fly fully reusable all the time.
That said:
I forget where I got this impression (Eric Berger’s reporting, possibly?), but IIRC right now they’re not on track to hit their goal numbers; the first reliably-working version of Starship might be limited to more like 50-70 tons, because the ship came in heavier than expected (all those heat tiles, plus just a lot of steel) and the Raptor engine, while very impressive, has perhaps not fully achieved the nigh-miraculous targets they set for themselves.
If you want to take 100 tons not to LEO but to Mars (which is the design goal of the system), then you have to use many Starships to ferry fuel to refuel other Starships, gradually boosting their orbit until you have a fully-fueled ship in a highly elliptical Earth orbit, and then you can finally blast off to Mars. For the Moon it’s even worse: you need maybe 20 refueling flights to land one Starship on the Moon with enough fuel to come back. (See the rough back-of-envelope sketch below for where numbers like that come from.)
Agreed with you that the heat shield (and the reusable upper stage in general) seems like it could easily just never work (or work only with expensive refurbishment, or only when returning from LEO rather than from anything higher-energy, etc), perhaps forcing them to give up and have Starship become essentially a big scaled-up Falcon 9. This would still be cheaper per-kg than Falcon 9 (economies of scale, and the Raptor engines are better than Merlin, etc), but not as transformative. I think many people are just kind of assuming “eh, SpaceX is full of geniuses, they’ve done so many astounding things, they’ll figure out the heat shield”, but this is an infamously hard problem (see Shuttle, Orion, X-33...), so possibly they’ll fail!
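Here is the promised back-of-envelope sketch for the refueling arithmetic, as a minimal Python snippet. The ~1200-ton figure is the commonly cited ballpark for a full Starship upper-stage propellant load; the per-tanker delivery numbers are my own assumptions (roughly tracking the payload figures discussed above), not SpaceX data:

```python
import math

# Commonly cited ballpark for a full Starship upper-stage propellant load (tons).
FULL_PROPELLANT_LOAD_T = 1200

def tanker_flights_needed(propellant_needed_t, delivered_per_tanker_t):
    """Number of tanker launches to deliver a given amount of propellant,
    assuming each tanker hands over roughly its payload mass as propellant."""
    return math.ceil(propellant_needed_t / delivered_per_tanker_t)

# At the ~50-70 t payloads of an early Starship, a full refill takes ~20 flights;
# at the aspirational 100-150 t, it drops to roughly 8-12.
for delivered in (50, 70, 100, 150):
    flights = tanker_flights_needed(FULL_PROPELLANT_LOAD_T, delivered)
    print(f"{delivered:>3} t delivered per tanker -> {flights} flights for a full tank")
```

In practice the count also depends on boil-off, transfer losses, and how full the tank really needs to be for a given mission, so treat these as order-of-magnitude numbers only.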
Some other tidbits:
Raptor’s claimed vacuum ISP is 380; I don’t think they’re just, like, making this up (they have done lots of tests, flown it many times, etc—it’s not a hypey future projection like “Starship will cost $4m per flight”), but I also don’t know where I’d go if I wanted to prove to myself that the number is legit (wikipedia just cites an Elon tweet...).
Apparently those Starlink mass simulators actually weigh about 2 tons each?? So flight 7, which carried 10 Starlink simulators, actually put 20 tons of payload in orbit.
The first reliable version of Starship will very likely fall short of its intended 100 ton goal (I mean… unless it takes them a really long time to make Starship reliable, lol). But they also plan to stretch the rocket, refine the engine, maybe someday make the whole thing wider, etc. So I expect that they’ll eventually hit 100 tons. (The first version of Falcon 9 could only lift 10.4 tons to LEO; the current version can lift 17.5 tons AND land the first stage on a drone ship for reuse!) But of course if you make the whole ship bigger, some of your launch costs are gonna go up too.
Personally I’m doubtful that they ever hit the crazy-ambitious $20/kg mark, which (per Thomas Kwa) would require not just a reusable upper stage (very hard!) but also hyper low-cost, airline-like turnaround on every part of the operation. But $200/kg (1 OOM cheaper than where Falcon 9 is today, using the rumored internal cost of $30m/launch and 17.5 ton capacity) seems pretty doable—upper stage reuse (even if somewhat arduous to refurbish) probably cuts your costs by like 4x, and the much greater physical size of Starship might give you another almost 2x. Cheap materials (steel and methane vs aluminum and RP1) + economies of scale in Raptor manufacturing might take you the rest of the way.
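To show where that ~$200/kg ballpark comes from, here is the arithmetic as a small Python sketch. The Falcon 9 inputs are the rumored figures mentioned above, and the 4x/2x improvement factors are my rough assumptions, not anything SpaceX has published:

```python
# Falcon 9 today, using the rumored internal cost and current reusable-mode capacity.
F9_COST_PER_LAUNCH_USD = 30e6
F9_PAYLOAD_KG = 17_500

f9_cost_per_kg = F9_COST_PER_LAUNCH_USD / F9_PAYLOAD_KG
print(f"Falcon 9 today:   ~${f9_cost_per_kg:,.0f}/kg")        # about $1,700/kg

# Assumed improvement factors for a working reusable Starship (my guesses):
UPPER_STAGE_REUSE_FACTOR = 4   # not throwing away an upper stage every flight
SCALE_FACTOR = 2               # a much bigger vehicle amortizes fixed costs

starship_cost_per_kg = f9_cost_per_kg / (UPPER_STAGE_REUSE_FACTOR * SCALE_FACTOR)
print(f"Starship (guess): ~${starship_cost_per_kg:,.0f}/kg")  # roughly $200/kg, ~1 OOM below Falcon 9
```

Getting from ~$200/kg down to the $20/kg aspiration would require another factor of ~10, which is exactly where the airline-style turnaround assumptions come in.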