This was really interesting. I’ve thought about this comment on and off for the last month.
You raised an interesting reason for thinking that transhumans would have high anthropic measure. But if you have a reference-class based anthropic theory, couldn’t transhumans have a lot of anthropic measure, but not be in our reference class (that is, for SSA, we shouldn’t reason as if we were selected from a class containing all humans and transhumans)?
Even if we think that the reference class should contain transhumans, do we have positive reasons for thinking that it should contain organisations?
One thought is that you might reject reference classes in anthropic reasoning (even under SSA). Is that the case?
Yes, that seems an important case to consider.
You might still think the analysis in the post is relevant if there are actors that can shape the incentive gradients you talk about: Google might be able to focus its sub-entities in a particular way while maintaining profit, or a government might choose to implement more or less oversight of tech companies.
Even with the above paragraph, it seems like the relative change-over-time in resources and power of the strategic entities would be important to consider, as you point out. In this case, it seems like (known) fast takeoffs might be safer!
I talked to a couple of people in relevant organisations about possible info hazards of talking about races (not because this model is sophisticated or non-obvious, but because it contributes to general self-fulfilling chattering). Those I talked to were not worried about (a) simple pieces with at least some nuance in general, or (b) this post in particular.
Comment here if you have structure/writing complaints for the post
Comment here if you are worried about info-hazard-y-ness of talking about AI races
Comment here if there are maths problems
Having the rules in the post made me think you wanted new suggestions in this thread. The rest of the post and habryka’s comment point towards new comments in the old thread.
If you want people to update the old thread, I would either remove the rules from this post, or add a caveat like “Remember, when you go to post in that thread, you should follow the rules below”
I’ve been trying this for a couple of weeks now. It’s hard! I often have a missing link in the distraction chain: I know the thought at point X in the chain and the one at point X − n (for some n > 0), but not the steps in between. When I try to probe the missing part, it’s pretty uncomfortable, like using or poking a numb limb. It can be aversive enough that I can’t bring myself to do this meditation every time I meditate.
This changed my mind about the parent comment (I think the first paragraph would have done so, but the example certainly helped).
In general, I don’t mind added concreteness, even at the cost of some valence-loading. But seeing how well “sanction” works, and some other comments that seem to disagree on the exact meaning of “punch”, I guess not using “punch” would have been better.
I did indeed! So I guess this game fails criterion (5) of Zvi’s criteria.
Does your program assume that the Kelly bet stays a fixed size, rather than changing?
Here’s a program you can paste in your browser that finds the expected value from following Kelly in Gurkenglas’ game (it finds the EV to be 20).
(You can also fiddle with the first argument to experiment to see some of the effects when 4 doesn’t hold)
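For readers without the original program to hand, here is a minimal sketch of the kind of calculation involved. It is not the original program, and the names (`kellyExpectedValue`) and game rules (a repeated even-money bet won with probability p) are my own illustrative assumptions; Gurkenglas’ game may differ.

```javascript
// Hypothetical sketch (not the original program): expected wealth after
// `rounds` repeated even-money bets, betting the Kelly fraction each round.
// Assumptions: win probability p, payoff odds b = 1, starting wealth 1.
function kellyExpectedValue(rounds, p, wealth = 1) {
  const b = 1; // even-money payoff (assumption)
  const f = Math.max(0, p - (1 - p) / b); // Kelly fraction f* = p - q/b
  // Per-round expected growth factor is linear in f, so the EV compounds:
  const growth = 1 + f * (p * b - (1 - p));
  return wealth * Math.pow(growth, rounds);
}

console.log(kellyExpectedValue(10, 0.6)); // EV after 10 rounds at p = 0.6
```

Because the bet fraction here is fixed at f* each round, this also illustrates the question in the comment above: a program like this assumes the Kelly *fraction* stays fixed while the absolute bet size changes with wealth.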
It sounds like in the first part of your post you’re disagreeing with my choice of reference class when using SSA? That’s reasonable. My intuition is that if one ends up using a reference class-dependent anthropic principle (like SSA) that transhumans would not be part of our reference class, but I suppose I don’t have much reason to trust this intuition.
On anthropic measure being tied to independently-intelligent minds, what is the difference between an independently- and dependently-intelligent mind? What makes you think the mind needs to be specifically independently-intelligent?
Yes. I suppose this stops being an issue only if the aliens are travelling at a very high fraction of the speed of light, and inflation means they will never reach spatially distant parts of the Universe in time.
In SETI-attack, is the idea that the information signals are disruptive and cause the civilisations they may annihilate to be too disrupted (perhaps by war or devastating technological failures) to defend themselves?
Yeah, that’s a good point. I will amend that part at some point.
Also, the analysis might have some predictions if civilisations don’t pass through a (long) observable stage before they start to expand: it increases the probability that a shockwave of intergalactic expansion will arrive at Earth soon. Still, if the region of our past light cone where young civilisations might exist is small enough, we probably just lose information about where the filter is likely to be.
I wonder if there are any plausible examples of this type where the constraints don’t look like ordering on B and search on A.
To be clear about what I mean by those constraints, here’s an example. One way you might be able to implement this function is to enumerate all the values of A and then pick the one whose image in B is maximal according to some ordering. If you can’t enumerate A, you might instead have some strategy for searching through it.
But that’s not the only feasible strategy. For example, if you can order B, map pairs of elements of B to C, and order C, you might do something like taking the element of B that, together with the element just below it, yields the greatest C.
My question is whether these weirder implementation strategies are of any interest.
I wasn’t aware that CFAR had workshops in Europe before this comment. I applied for a workshop off the back of this. Thanks!
I feel a pull towards downvoting this. I am not going to, because I think this was posted in good faith, and as you say, it’s clear a lot of time and effort has gone into these comments. That said, I’d like to unpack my reaction a bit. It may be you disagree with my take, but it may also be there’s something useful in it.
[EDIT: I should disclaim that my reaction may be biased from having recently received an aggressive comment.]
First, I should note that I don’t know why you did these edits. Did sarahconstantin ask you to? Did you think a good post was being lost behind poor presentation? Is it to spell out your other comment in more detail? Knowing the answer to this might have changed my reaction.
My most important concern is why this feedback was public. The only (charitable) reason I can think of is to give space for pushback of the kind that I am giving.
My other major concern is presentation. The sentence ‘I trust that you can see past the basic “I’m being attacked” feeling and can recognise the effort and time that has gone into the comments’ felt to me like a status move: potentially upsetting someone and then asking them to say thank you.
It is probably true that those are the places with most engagement. However, as someone without Facebook, I’m always grateful for things (also) being posted in non-FB places (mailing lists work too, but there is a longer lag on finding out about things that way).