Honoring Neil Armstrong isn’t anywhere near the top of the list of reasons to avoid an existential disaster. Hence it’s incorrect to say that we would be doing that to honor him.
Never again will a human being be able to experience being the first to walk on another world.
(Experience is a physical phenomenon that can be (re)created, although in this case it would have to involve false beliefs. That would still hold even if Armstrong never died.)
You may give him too little credit; he seems to have been key to the successful landing:
When Armstrong noticed they were heading towards a landing area which he believed was unsafe, he took over manual control of the LM, and attempted to find an area which seemed safer, taking longer than expected, and longer than most simulations had taken.[67] For this reason, there was concern from mission control that the LM was running low on fuel.[68] Upon landing, Aldrin and Armstrong believed they had about 40 seconds worth of fuel left, including the 20 seconds worth of fuel which had to be saved in the event of an abort.[69]
(Wikipedia)

Such unexpected, last-minute, split-second decision-making is precisely the contribution a pilot could make.
I didn’t know that. Still, this doesn’t get us anywhere close to justifying the typical extreme focus on a single person in thinking about this huge endeavor. Even calling him a mere bystander (which is false, and so shouldn’t be claimed) seems much closer to the truth than calling him responsible for the event.
Both were instrumental to a successful landing. Aldrin was busy dealing with things like cockpit alarms and repeated reboots of the main navigation and control computer. (Said computer was busy doing things like keeping the LM upright and not spinning.) The landing would not have been successful without last-minute decision-making by both crew members.
One of my strongest motivations is a feeling of personal loyalty to my heroes, though I probably wouldn’t count Armstrong among them (but I think he’s a cool guy). Surely I can at least try to live up to the example set by men like Turing, Tesla, and Boltzmann, who sacrificed so much to advance human understanding so far.
I expect most of them enjoyed the ride, so describing the process in terms of instrumentally justified personal sacrifice seems inaccurate.
One person who may have actually played an irreplaceable role was Hal Laning; according to that article, he wrote some particularly tricky code that turned out to be critical for the mission.
IMO, existential paranoia makes sense in the same way it makes sense for an engineer to be paranoid about a bridge, plane, or nuclear power plant they are building: Lives are at stake and there’s no “redo” button if you don’t get it right the first time.
Wait a minute, that is a non-epistemic justification for a propositional claim. You normatively should build huge safety margins into your bridges, but it is still an error to overestimate the risk of a bridge collapse, even if that belief motivates the engineer to work harder.
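The distinction can be made concrete with a quick expected-cost sketch (all numbers below are made up purely for illustration): an honestly low failure probability multiplied by a huge loss can still justify a large safety margin, so there is no need to inflate the probability estimate itself.

```python
# Illustrative figures only; not real engineering numbers.
p_collapse = 1e-4        # honest (low) estimate of collapse probability
cost_of_collapse = 5e9   # assumed total loss if the bridge fails
cost_of_margin = 2e5     # assumed price of extra reinforcement

# Expected loss avoided by the reinforcement, assuming it prevents collapse.
expected_loss_avoided = p_collapse * cost_of_collapse

# The margin is worth buying whenever the expected loss it avoids
# exceeds its price, regardless of how small p_collapse is.
worth_it = expected_loss_avoided > cost_of_margin
print(expected_loss_avoided, worth_it)
```

The decision-relevant quantity is the product of probability and stakes, so acting cautiously and estimating accurately are compatible.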
I agree. If I had paid more attention to the discussion, I might have realized that in this case “paranoia” was strictly in reference to probability estimates and not in reference to emotions or resource allocation. Sorry everybody.
It’s not the prediction of a sensible theory of evolution. It has nothing whatsoever to do with evolution, and I struggle to figure out where the idea that it does comes from. The idea, correct or incorrect, is the result of the extrapolation of several, independent trends (in particular, nanotech and AI). We’ve managed not to kill ourselves so far, but that’s partly a matter of luck. Even if the only way we could kill ourselves was with nuclear weapons, there’s still a nontrivial chance that we would. Especially with India and Pakistan in on the game now. And there are new threats as well.
Edit: I don’t necessarily think that existential disaster is more probable than not, but I definitely think it shouldn’t be dismissed out of hand. And since people are downvoting this, I’m wondering where they disagree with that.
Evolutionary progress has an element of luck (sure we could be wiped out by a meteorite tomorrow) but negative events so far have been relatively rare.
IMO, you’re reading your trend lines wrong—failing to properly account for the decrease in warfare and the rise of surveillance technology.
We are not talking about a “nontrivial chance” here. We are talking about “existential disaster seems likely”. I read that as meaning the chances seem greater than 50%.
He was not responsible. He participated.
Doesn’t follow; existential disaster seems likely.
I don’t think that’s a prediction of any sensible theory of evolution. It seems more like existential paranoia.
I don’t think that makes it OK to systematically paint an inaccurate picture of the risk to help drum up support for your cause.
I agree.
I find it very, very hard to estimate the actual chances of any particular existential disaster. I would not put that chance below 20% this century.