Personal website: https://outsidetheasylum.blog/
Feedback about me: https://www.admonymous.co/isaacking
Isaac King
I don’t understand what “at the start” is supposed to mean for an event that lasts zero time.
I don’t think you understand how probability works.
https://outsidetheasylum.blog/understanding-subjective-probabilities/
Ok now I’m confused about something. How can it be the case that an instantaneous perpendicular burn adds to the craft’s speed, but a constant burn just makes it go in a circle with no change in speed?
...Are you just trying to point out that thrusting in opposite directions will cancel out? That seems obvious, and irrelevant. My post and all the subsequent discussion are assuming burns of epsilon duration.
I don’t understand how that can be true? Vector addition is associative; it can’t be the case that adding many small vectors behaves differently from adding a single large vector equal to the small vectors’ sum. Throwing one rock off the side of the ship followed by another rock has to do the same thing to the ship’s trajectory as throwing both rocks at the same time.
How is that relevant? In the limit where the retrograde thrust is infinitesimally small, it also does not increase the length of the main vector it is added to. Negligibly small thrust results in negligibly small change in velocity, regardless of its direction.
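The apparent conflict can be checked numerically. In this sketch (the function name and step counts are illustrative, not from the thread), each small burn is perpendicular to the craft's *current* velocity, so the many small vectors being summed all point in slightly different directions; vector addition is still associative, but the summands are not the same vectors as in the single-burn case. That is why the limit of many tiny perpendicular burns rotates the velocity without changing speed, while one finite perpendicular burn does increase it:

```python
import math

def simulate_burns(n_steps, total_dv):
    """Apply total_dv of thrust in n_steps equal increments,
    each perpendicular to the *current* velocity direction."""
    vx, vy = 1.0, 0.0          # start moving along x at speed 1
    dv = total_dv / n_steps
    for _ in range(n_steps):
        speed = math.hypot(vx, vy)
        # unit vector perpendicular to the current velocity
        px, py = -vy / speed, vx / speed
        vx += dv * px
        vy += dv * py
    return math.hypot(vx, vy)  # final speed

# One finite perpendicular burn increases speed:
print(simulate_burns(1, 1.0))       # sqrt(1 + 1) ≈ 1.414
# Many tiny perpendicular burns mostly just rotate the velocity:
print(simulate_burns(100_000, 1.0)) # ≈ 1.0
```

Each tiny burn raises the speed only by about dv²/(2·speed), a second-order quantity, so the total speed gain vanishes as the burn is subdivided more finely.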
Unfortunately I already came across that paradox a day or two ago on Stack Exchange. It’s a good one though!
Yeah, my numerical skill is poor, so I try to understand things via visualization and analogies. It’s more reliable in some cases, less in others.
when the thrust is at 90 degrees to the trajectory, the rocket’s speed is unaffected by the thrusting, and it comes out of the gravity well at the same speed as it came in.
That’s not accurate; when you add two vectors at 90 degrees, the resulting vector has a higher magnitude than either. The rocket will be accelerated to a faster speed.
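A minimal numeric check of that claim (the specific magnitudes are chosen for illustration only):

```python
import math

v  = (3.0, 0.0)   # prograde velocity, speed 3
dv = (0.0, 4.0)   # perpendicular burn of magnitude 4
new_v = (v[0] + dv[0], v[1] + dv[1])
new_speed = math.hypot(*new_v)
print(new_speed)  # 5.0, larger than either 3 or 4
```

This is just the Pythagorean theorem: the resulting magnitude is sqrt(3² + 4²) = 5, larger than either component.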
I don’t think so. The difference in the gravitational field between the bottom point of the swing arc and the top is negligible. The swing isn’t an isolated system, so you’re able to transmit force to the bar as you move around.
There’s a common explanation you’ll find online of how swings work by you changing the height of your center of mass, which is wrong, since it would imply that a swing with rigid bars wouldn’t work. But they do.
The actual explanation seems to be something to do with changing your angular momentum at specific points by rotating your body.
I’m still confused about some things, but the primary framing of “less time spent subject to high gravitational deceleration” seems like the important insight that all other explanations I found were missing.
An Actually Intuitive Explanation of the Oberth Effect
Probability is a geometric scale, not an additive one. An order of magnitude centered on 10% covers roughly 1% to 50%.
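One way to make that range concrete (assuming "order of magnitude" here means a factor of ten up and down in *odds*, which is the interpretation that reproduces the quoted figures) is:

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1 + o)

center = 0.10
low  = prob(odds(center) / 10)   # one order of magnitude down in odds
high = prob(odds(center) * 10)   # one order of magnitude up in odds
print(low, high)  # ≈ 0.011 and ≈ 0.526
```

10% is odds of 1:9; dividing those odds by ten gives about 1.1%, and multiplying them by ten gives about 52.6%, matching the ~1%–50% range.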
https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-are-not-probabilities
Feel free to elaborate on the mistakes and I’ll fix them.
That article isn’t about e/acc people and doesn’t mention them anywhere, so I’m not sure why you think it’s intended to be. The probability theory denial I’m referencing is mostly on Twitter.
Great point! I focused on AI risk since that’s what most people I’m familiar with are talking about right now, but there are indeed other risks, and that’s yet another potential source of miscommunication. One person could report a high p(doom) due to their concerns about bioterrorism, and another interprets that as them being concerned about AI.
Oh I agree the main goal is to convince onlookers, and I think the same ideas apply there. If you use language that’s easily mapped to concepts like “unearned confidence”, the onlooker is more likely to dismiss whatever you’re saying.
It’s literally an invitation to irrelevant philosophical debates about how all technologies are risky and we are still alive, and I don’t know how to get out of that without reference to probabilities and expected values.
If that comes up, yes. But then it’s them who have brought up the fact that probability is relevant, so you’re not the one first framing it like that.
This kinda misses the greater picture? “Belief that there is a substantial probability of AI killing everyone” is a 1000x stronger shibboleth and a much easier target for derision.
Hmm. I disagree, though I’m not sure exactly why. I think it’s something like: people focus on short phrases and commonly used terms more than they focus on ideas. Like how the SSC post I linked gives the example of Republicans being just fine with drug legalization as long as it’s framed in right-wing terms. Or how talking positively about eugenics will get you hated, but talking positively about embryo selection and laws against incest will be taken seriously. I suspect that most people don’t actually take positions on ideas at all; they take positions on specific tribal signals that happen to be associated with ideas.
Consider all the people who reject the label of “effective altruist”, but try to donate to effective charity anyway. That seems like a good thing to me; some people don’t want to be associated with the tribe for some political reason, and if they’re still trying to make the world a better place, great! We want something similar to be the case with AI risk; people may reject the labels of “doomer” or “rationalist”, but still think AI is risky, and using more complicated and varied phrases to describe that outcome will make people more open to it.
Stop talking about p(doom)
I don’t see how they would be. If you do see a way, please share!
I don’t understand how either of those are supposed to be a counterexample. If I don’t know what seat is going to be chosen randomly each time, then I don’t have enough information to distinguish between the outcomes. All other information about the problem (like the fact that this is happening on a plane rather than a bus) is irrelevant to the outcome I care about.
This does strike me as somewhat tautological, since I’m effectively defining “irrelevant information” as “information that doesn’t change the probability of the outcome I care about”. I’m not sure how to resolve this; it certainly seems like I should be able to identify that the type of vehicle is irrelevant to the question posed and discard that information.
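Assuming the puzzle being referenced is the classic lost-boarding-pass problem (an assumption; the thread doesn't state it explicitly), a quick Monte Carlo illustrates the point about irrelevant information: the answer depends only on the random-choice structure, not on whether the vehicle is a plane or a bus:

```python
import random

def last_passenger_gets_own_seat(n=100):
    """Passenger 0 sits in a uniformly random seat; every later passenger
    takes their own seat if it's free, otherwise a random free seat.
    Returns True if the last passenger ends up in their own seat."""
    seats = [None] * n
    seats[random.randrange(n)] = 0        # passenger 0 picks at random
    for p in range(1, n - 1):
        if seats[p] is None:
            seats[p] = p                  # own seat is free
        else:
            free = [i for i, s in enumerate(seats) if s is None]
            seats[random.choice(free)] = p
    return seats[n - 1] is None           # last seat still free?

random.seed(0)
trials = 20_000
rate = sum(last_passenger_gets_own_seat() for _ in range(trials)) / trials
print(rate)  # ≈ 0.5
```

The simulated probability comes out near 1/2 regardless of n, consistent with the idea that only the structure of the random choices matters to the outcome.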
No, I think what I said was correct? What’s an example that you think conflicts with that interpretation?
If we figure out how to build GAI, we could build several with different priors, release them into the universe, and see which ones do better. If we give them all the same metric to optimize, they will all agree on which of them did better, thus determining one prior that is the best one to have for this universe.