I’m talking about using a laser sail to get up to near c (0.1 g acceleration for 40 lightyears is pretty strong) in the first place, and slowing down by other means.
This trick is about using a laser sail for both acceleration and deceleration.
Yeah, I think the original proposal for a laser sail involved deceleration by having the central part of the sail detach and receive the beam reflected off the outer “ring” of the sail. I didn’t use this because IIRC the beam only maintains coherence over 40 lightyears or so, so that trick would be for nearby missions.
For 1, the mental model for non-relativistic but high speeds should be “a shallow crater is instantaneously vaporized out of the fast-moving material”, and for relativistic speeds it should be the same thing but with the vaporization directed down a deeper hole (the energy doesn’t spread out as much; it stays in a narrow cone) instead of going in all directions. However, your idea of making the spacecraft a big flat sheet that can tolerate having a bunch of holes shot in it is promising. The main issue I see is that this approach is incompatible with a lot of things that (as far as we know) can only be done with solid chunks of matter, like antimatter energy capture or sideways boosting-rockets, and once you start armoring the solid chunks in the floaty sail, you’re sort of back in the same situation. So it seems like an interesting approach and it’d be cool if it could work, but I’m not quite sure it can (not entirely confident that it couldn’t, just that it would require a bunch of weird solutions to problems like “how does your sheet of tissue boost sideways at 0.1% of lightspeed?”).
For 2, the problem is that particles which are highly penetrating are either unstable (muons, kaons, neutrons...) and will fall apart well before arrival (and that’s completely dodging the issue of making bulk matter out of them), or stable (neutrinos, dark matter) and barely interacting with anything. Since they don’t really interact with anything, they especially don’t interact with themselves (at least, we know this for neutrinos), so they can’t hold a structure together, nor can they interact with matter at the destination. Making a craft out of neutrinos is ridiculously more difficult than making a craft out of room-temperature air: if they can go through a light-year of lead without issue, they aren’t exactly going to stick to each other. Heck, I think you’d actually have better luck trying to make a spaceship out of pure light.
For 3, it’s because in order to use ricocheting mass to power your starcraft, you need to already have some way of ramping the mass up to relativistic speeds so it can get to the rapidly retreating starcraft in the first place, and you need an awful lot of mass. Light already starts off at the most relativistic speed of all, and around a star you already have astronomical amounts of light available for free.
For 4, there sort of is, but mostly not. The gravity example has the problem that the speeding-up of the craft while the two stars are ahead of it perfectly counterbalances the deceleration once the two stars are behind it. Potentials like gravity or electric fields, or pretty much anything else you’d want to use, follow an inverse-square law, which means they aren’t really relevant unless you’re fairly close to a star. The one instance I can think of where something like your approach applies is the electric sail design in the final part. In interstellar space it brakes against the thin soup of protons as usual, but near a star, the “wind” of particles streaming outward acts as a more effective brake, and the craft can sail on it (going out) or use it for better deceleration (coming in). Think of a sail slowing a boat down when the air is stationary, and slowing it down even better when the wind is blowing against you.
Whoops, I guess I messed up on that setting. Yeah, it’s ok.
Actually, no! The activation energy for the conversion of diamond to graphite is about 540 kJ/mol, and using the Arrhenius equation to get the rate constant for diamond-graphite conversion at a radiator temperature of 1900 K, we get that after 10,000 years of continuous operation, 99.95% of the diamond will still be diamond. At room temperature, the diamond-to-graphite conversion rate is slow enough that protons will decay before any appreciable amount of graphite forms.
Even for a 100,000 year burn, 99.5% of the diamond will still be intact at 1900 K.
There isn’t much room to ramp up the temperature, though. We can keep 99%+ of the diamond intact up to around 2100 K, but at 2200 K, 5% of the diamond converts; at 2300 K, 15%; at 2400 K, 45%; and over 10,000 years at 2500 K and 2600 K, 80% and 99% of the diamond converts to graphite, respectively.
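These fractions follow from first-order Arrhenius kinetics. A minimal sketch, assuming a prefactor of roughly 1 s⁻¹ (backed out so as to reproduce the quoted fractions, not a measured value):

```python
import math

# Arrhenius estimate of diamond -> graphite conversion on a hot radiator.
# Ea is the activation energy quoted above; the prefactor A is an assumption.
E_A = 540e3            # J/mol, activation energy for diamond -> graphite
R = 8.314              # J/(mol*K), molar gas constant
A = 1.0                # 1/s, assumed Arrhenius prefactor
SECONDS_PER_YEAR = 3.156e7

def diamond_surviving(temp_k, years):
    """Fraction of diamond remaining after first-order decay at temp_k."""
    k = A * math.exp(-E_A / (R * temp_k))   # rate constant, 1/s
    return math.exp(-k * years * SECONDS_PER_YEAR)

# e.g. diamond_surviving(1900, 1e4) ~ 0.9995, diamond_surviving(2600, 1e4) ~ 0.011
```

With these assumptions the sharp falloff between 2100 K and 2600 K drops out of the exponential directly.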
Agreed. Also, instead of launching immediately, there’s an incentive to keep thinking about how to go faster until the marginal gain from one more day of design speeds the rocket up by less than one day (otherwise you’ll get overtaken), and an incentive to agree on a coordinated plan ahead of time (you get this galaxy, I get that galaxy, etc...) to avoid issues with lightspeed delays.
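As a toy illustration of that stopping rule, with a made-up diminishing-returns speed curve (every number here is hypothetical):

```python
import math

def arrival_day(design_days, distance_ly=4e6):
    """Total days until arrival: design time plus travel time."""
    # assumed speed curve: 0.5c if launched now, asymptoting to 0.99c
    frac_c = 0.5 + 0.49 * (1 - math.exp(-design_days / 1000))
    travel_days = distance_ly / frac_c * 365.25
    return design_days + travel_days

# Launch once one more day of design no longer saves a full day of arrival time.
d = 0
while arrival_day(d + 1) < arrival_day(d):
    d += 1
```

With these made-up numbers, the loop stops exactly where further design effort costs more time than it saves.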
Or maybe accepting messages from home (in rocket form or not) of “whoops, we were wrong about X, here’s the convincing moral argument” and acting accordingly. Then the only thing to be worried about would be irreversible acts done in the process of colonizing a galaxy, instead of having a bad “living off resources” endstate.
Edited. Thanks for that. I guess I managed to miss both of those; I was mainly going off of the indispensable and extremely thorough Atomic Rockets site, which has extremely little discussion of intergalactic missions as opposed to interstellar ones.
It looks like there are some spots where Armstrong and I converged on the same strategy (using lasers to launch probes), but we seem to disagree about how big a deal dust shielding is, how hard deceleration is, and what strategy to use for deceleration.
Yeah, Atomic Rockets was an incredibly helpful resource for me, I definitely endorse it for others.
This doesn’t quite seem right, because just multiplying probabilities only works when all the quantities are independent. However, I’d put higher odds on someone being able to recognize a worthwhile result conditional on them being able to work on the problem than on them being able to recognize a worthwhile result unconditionally, so the true joint probability will be higher than the naive product of probabilities suggests.
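A toy numerical example of the correlation point (all probabilities hypothetical):

```python
# When abilities are positively correlated, multiplying marginal
# probabilities understates the joint probability.
p_work = 0.10                   # P(can make progress on the problem)
p_recognize = 0.20              # P(can recognize a worthwhile result), marginal
p_recognize_given_work = 0.60   # plausibly much higher than the marginal

naive_joint = p_work * p_recognize              # assumes independence: 0.02
actual_joint = p_work * p_recognize_given_work  # uses the conditional: 0.06
```

The naive independent product understates the joint probability threefold here, because the conditional is well above the marginal.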
I’m unsure whether this consideration affects whether the distribution would be lognormal or not.
(lightly edited restatement of email comment)
Let’s see what happens when we adapt this to the canonical instance of “no, really, counterfactuals aren’t conditionals and should have different probabilities”: the cosmic ray problem. The agent has a choice between two paths; it slightly prefers taking the left path, but its conditional on taking the right path is a tiny slice of probability mass that’s mostly composed of stuff like “I took the suboptimal action because I got hit by a cosmic ray”.
There is 0 utility for taking the left path, −10 utility for taking the right path, and −1000 utility for a cosmic ray hit. The CDT counterfactual says 0 utility for the left path and −10 utility for the right path, while the conditional says 0 utility for the left path and −1010 utility for the right path (because conditional on taking the right path, you were hit by a cosmic ray).
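A quick sanity check on those numbers, treating P(cosmic ray | right path) as ≈ 1 before any bet is added and ≈ 0 once taking the right path becomes optimal (both extreme simplifications):

```python
# Utilities from the setup above; conditional probabilities are assumed extremes.
u_right, u_ray = -10.0, -1000.0

def conditional_eu_right(p_ray_given_right):
    # expected utility of the right path under the agent's conditional
    return u_right + p_ray_given_right * u_ray

eu_before_bet = conditional_eu_right(1.0)  # -1010: right path implies a ray hit
eu_after_bet = conditional_eu_right(0.0)   # -10: right path is merely dispreferred
```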
In order to get the Dutch book to go through, we need the agent to take the right path, to exploit P(cosmic ray) changing between decision time and afterwards. So the initial bet could be something like −1 utility now, +12 utility upon taking the right path and not being hit by a cosmic ray. But now that the optimal action is “take the right path along with the bet”, the problem setup has changed, and we can’t conclude that the agent’s conditional on taking the right path places high probability on getting hit by a cosmic ray (because now the right path is the optimal action), so we can’t money-pump with the “+0.5 utility, −12 utility upon taking a cosmic ray hit” bet.
So this seems to Dutch-book Death-in-Damascus, but not CDT≠EDT cases in general.
Yes, UDT means updateless decision theory, “the policy” is used as a placeholder for “whatever policy the agent ends up picking”, much like a variable in an equation, and “the algorithm I wrote” is still unpublished because there were too many things wrong with it for me to be comfortable putting it up, as I can’t even show it has any nice properties in particular. Although now that you mention it, I probably should put it up so future posts about what’s wrong with it have a well-specified target to shoot holes in. >_>
It actually is a weakening. Because all changes can be interpreted as making some player worse off under standard Pareto optimality, the second condition means that more changes count as improvements, as you correctly state. The third condition cuts down on which changes count as improvements, but the combination of conditions 2 and 3 still labels some changes as improvements that wouldn’t be improvements under the old concept of Pareto optimality.
The definition of an almost stratified Pareto optimum was adapted from this, and was developed specifically to address the infinite game in that post involving a non-well-founded chain of players, where nothing is a stratified Pareto optimum for all players. Something isn’t stratified Pareto optimal in a vacuum; it’s stratified Pareto optimal for a particular player. There’s no oracle that’s stratified Pareto optimal for all players, but if you take the closure of everyone’s SPO sets first to produce a set of ASPO oracles for every player, and then take the intersection of all those sets, there are points which are ASPO for everyone.