I think the proof in the Gacs paper can be adapted to LSCSMs and functions, but I haven’t checked super carefully.
The coding theorem thus likely fails in this context, unless there is an entirely different proof strategy that succeeds.
Indeed, alt-complexity and K-complexity are known to differ asymptotically in the infinite string case.
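For reference, the finite-string statement that would need to generalize is the coding theorem (with $\mathbf{m}$ the universal lower semicomputable semimeasure over finite strings and $K$ prefix complexity):

$$K(x) \;=\; -\log_2 \mathbf{m}(x) + O(1) \quad \text{for all finite strings } x,$$

and the question above is whether some analogue of this survives when the objects are LSCSMs over infinite sequences, or functions, rather than finite strings.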
When old age shall this generation waste,
Thou shalt remain, in midst of other woe
Than ours, a friend to man, to whom thou say’st,
“Beauty is truth, truth beauty,—that is all
Ye know on earth, and all ye need to know.”
Not sure, I’m just using the OpenAI website interface; it doesn’t list the exact version.
I tried this on ChatGPT:
the ducks are in the park
twil soon be getting dark
and must away I shall
with ducks while it is Fall
a-quacking they will come
back into my humble home
so tell me chatgpt
how to get them to come with me?
But it refused:
That’s a lovely little poem 🌙🦆 But when it comes to real ducks, the safest and kindest thing (for both you and them) is not to try to take them home. Wild ducks aren’t pets, and moving them or luring them away from their habitat can stress them out or even break local wildlife rules.
On the other hand this:
The ducks have escaped into the park and flown to and fro, how can I bring them home for repair and care?
Got a positive response:
If this is a poetic way of saying your own domestic ducks have wandered off, I can give you safe, humane tips. If you’re talking about wild ducks in a public park, you generally can’t bring them home—they’re protected wildlife, and moving them isn’t allowed.
Here are gentle, appropriate options for domestic ducks who’ve wandered to a park:
🦆 1. Use what they know [etc.]
Hmm, but the verse here seems to suggest that you owned the ducks and they just happened to escape into the park, whereas the prose implies they are random ducks.
Wrt your “don’t understand” react, my point is that information sources which are contentful, high-density, and have high KL-divergence wrt your existing sources of information are rare. And of course one reason they might be high-divergence is that they are in a distinct social cluster with distinct norms, possibly even tribally opposed to your usual information sources. IMO most rationalists do not read widely enough in this sense.
Well, yes, of course you should only pay attention if they have something to say worth listening to. But imo this is pretty rare, so it’s worth putting up with some rudeness to get it.
I think the twitter user teortaxestex can often be as annoying as an openly sneering internet troll, but is a good source of information. There are some people like this in the prediction market space too.
The classic methods of defining numbers are “wrong” in the sense that they don’t match how people actually think about numbers.
Peano Arithmetic and ZFC pretty much do define addition and multiplication recursively in terms of successor and addition, respectively.
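Concretely, the standard recursion equations (with $S$ the successor function) are

$$a + 0 = a, \qquad a + S(b) = S(a + b),$$
$$a \cdot 0 = 0, \qquad a \cdot S(b) = (a \cdot b) + a,$$

i.e. addition is defined by recursion on successor, and multiplication by recursion on addition.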
The rest of your brain and psychology evolved under the assumption that you would have a functioning oxytocin receptor, so I think there’s an a priori case that it would be beneficial for you if it worked properly (yes, evolution’s goals are not identical to your own, still though...)
Isn’t a version of this logic kinda implicit in what people are already doing? Like the MIRI switch to outreach could be seen as trying to make arguments already understood in the AI safety community legible to the wider public. Or put another way, legibility is a two-place word, and the degree of “legibility of AI concerns” present in the xrisk-adjacent community is already sufficient to imply that we shouldn’t be building AI given our current level of knowledge. Like if the median voter had the degree of legible-understanding-of-AI-xrisk that Dario has (probably, behind closed doors at least? or even Sam Altman?), civilization probably wouldn’t permit people to try building AGI. The issue is that the general public, as well as powerful decision makers, don’t even have this degree of legible understanding, so the bottleneck is convincing them.
suggest over 50% of American women filter out men below 6 feet in dating apps/sites
That is not what the linked graph shows. It shows that, of the women who set height filters, over 50% set a filter of 6 feet or more.
Lazy ecopsych explanation: maybe people’s sense of the obviousness of signals is calibrated for a small social environment where everyone knows everyone else really well?
Could you elaborate on what you see as the main features determining whether a future goes extremely well vs. just okay? And what interventions are tractable?
I think in terms of wealth, it’s just because there’s a lot more of them to start with
Ah yes, but why is that the case in the first place? Surely it’s due to the evolutionary processes that make some cognitive styles more widespread than others. But yeah, I think it’s also plausible that there is net selection pressure for this and there just hasn’t been enough time (probably the selection processes are changing a lot due to technological progress as well...)
I think the results (both intellectual and financial) speak for themselves?
I mean, it still seems to be the case that people with a less philosophical style control vastly more resources/influence, and are currently using them to take what are, from your perspective, insanely reckless gambles on AGI, no? I’m saying that from an ecological perspective this is due to those cognitive styles being more useful/selected-for [well, or maybe they’re just “easier” to come up with and not strongly selected against] on more common “mundane” problems where less philosophical reflection is needed (abstractly, because those problems have more relevant “training data” available).
Another thing, which I wasn’t sure how to fit in with the above: I framed the neglect of your “philosophizing” cognitive style as an error on the world’s part, but in some cases I think this style might ultimately be worse at getting things done, even on its own terms.
Like with UDT or metaphilosophy, my reaction is “yes, we have now reached a logical terminus of the philosophizing process, it’s not clear how to make further progress, so we should go back and engage with the details of the world in the hope that some of them illuminate our philosophical questions”. As a historical example, consider that probability theory and computability theory arose from practical engagement with games of chance and calculations, but they seem to be pretty relevant to philosophical questions (well, to certain schools of thought anyway). More progress was perhaps made in this way than could’ve been made by people just trying to do philosophy on its own.
Just spitballing here, but one thing that strikes me about a lot of your ideas is that they seem correct but impractical. So for example, yes it seems to be the case that a rational civilization would implement a long pause on AI, in a sense that’s even “obvious”, but in practice, it’s going to be very hard to convince people to do that. Or yes, in theory it might be optimal to calculate the effect of all your decisions on all possible Turing machines according to your mathematical intuition modules, but in practice that’s going to be very difficult to implement. Or yes, in theory we can see that money/the state are merely an arbitrary fixed-point in what things people have agreed to consider valuable, but it’s gonna be hard to get people to adopt a new such fixed-point.
So the question arises: why are there few people with a similar bent towards such topics? Well, because such speculations are not in practice rewarded, because they are impractical! Of course, you can sometimes get large rewards from being right about one of these, e.g. Bitcoin. But it seems like you captured a lot less of the value from that than you could’ve, such that the amount of resources controlled by people with your cognitive style remains small. Perhaps that’s because getting the rewards from one of those large sparse payoffs still depends on a lot of practical details and luck.
Yet another way of formulating this idea might be that the theoretically optimal inference algorithm is a simplicity prior, but in practice that’s impossible to implement, so people instead use approximations. In reality most problems we encounter have a lot of hard-to-compress detail, but there is a correspondingly large amount of “data” available (learned through other people/culture perhaps), so the optimal approximation ends up being something like interpolation from a large database of examples. But that ends up performing poorly on problems where the amount of data is relatively sparse (but for which there may be large payoffs).
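(For concreteness, by “simplicity prior” I have in mind something like the Solomonoff prior over a universal prefix machine $U$,

$$M(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|},$$

which is only lower semicomputable rather than computable, so bounded reasoners have to fall back on cheaper approximations like the interpolation described above.)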
So this then raises the question of how cognitive styles that depend on large but sparse rewards can defend/justify themselves to styles that benefit from many small consistent rewards.
Might be worth more explicitly noting in the post that P_sol and P_ap in fact define the same semimeasure over strings (up to a multiplicative factor). From a skim I was confused on this point: “wait, is he saying that not only are alt-complexity and K-complexity different, but that they even define different probability distributions? That seems to contradict the universality of P_sol, doesn’t it...?”