R&Ds human systems http://aboutmako.makopool.com
mako yass
Yeah, it’s important to find ways to do that stuff. Every community I’ve visited where a prohibition on “ad hominem” is strictly enforced (which they take to mean never addressing the people here, their patterns of behavior, or their psychologies) has been deeply cursed.
If you can’t ever psychologize each other, then you aren’t a community. These things are literally incompatible. A community is a place where people know each other and care about each other, and care about the community as a whole. When people know each other, they notice, let’s say, opportunities for personal improvement in each other, and if they care about each other then they will point those out, and if they care about the community as a whole they will often have to do it publicly.
The solution lies in the burden of proof. As a rule, absent a legal presumption, the burden falls on the party making the positive, existential claim.
The principle of burden of proof (in this context at least) is just wrong. It will lead you to confidently behave as if a bunch of things don’t exist that probably do. You don’t need to be disproportionately dismissive towards positive claims. You’re allowed to maintain uncertainty about things, you can still live, it isn’t paralysing.
the positive claim asks us to accept a description that contains more information (longer k-description)
This is true if by “description” you mean “description of the current state of the world” but false if you mean “description of the laws of physics that generated this state of the world (and others)”. Simple rules can generate complex outputs. And it’s an easy conjecture that any rule that could generate life would generate many complex and surprising outputs.
And when talking about razors for epistemology, you should mean the latter. There was never actually any scientific merit to applying a simplicity heuristic to the outputs of the generating function (the physics); the razor should only be applied to the generating function itself. That is the only time and place for measuring k-descriptions; doing it anywhere else leads you to “black holes aren’t real (until you show me one)” type shit.
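(A minimal sketch of “simple rules can generate complex outputs”, mine, not part of the exchange: Rule 30, an elementary cellular automaton whose entire physics fits in one byte, and whose output is irregular enough to have been used as a pseudorandom generator.)

```python
# One-byte physics: Rule 30. The rule table fits in 8 bits, yet the
# output is irregular enough to have been used as a PRNG.
RULE = 30

def step(cells):
    n = len(cells)
    # New cell = the rule bit indexed by the (left, centre, right) neighborhood.
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 63
cells[31] = 1  # start from a single live cell
for _ in range(24):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```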
Simulationism is an incredibly straightforward implication of the laws of economics and technology that we already know. In order to reject it, you have to add another rule to your physics approximating “nothing too weird and hidden is allowed to happen”.
Seems to me like the intended meaning of that title was different when Le Guin started writing, and then things went in a different direction along the way, and Le Guin just accepted where it ended up.
My guess is it was initially intended to refer to those who won’t accept the possibility of utopia, those who cynically turn away before it’s built, or those idealists who will swarm and rupture any near-perfect thing by picking and tearing at its smallest flaws, and as Tobias points out, much of it seems to have been written that way.
Then I think it might’ve gotten lost partway through, in the way a dream does: a contrivance is introduced to make a broader point, the contrivance grabs the attention of the dreamer to such an extent that the broader point gets forgotten, and “the ones who walk away from omelas” ended up walking away entirely in response to the contrivance.
So Le Guin faced a choice: she could have noticed that and trashed it and started over until she had something perfectly intentional, but successful authors generally don’t do that, ime. They publish in large quantity, and they publish entertaining rides rather than coherent parables. If the contrivance was engaging enough to obscure the broader point, then a successful author lets the work become about the contrivance.
In fact, the simulation hypothesis can be regarded as a form of techno‑theology (as David Chalmers and others have noted).
It is, but it’s the most lucid theology to have ever been done, as it includes a more detailed characterisation of what advanced species would actually be like, which we were only able to get by being closer to being one ourselves.
it runs afoul of Occam’s razor
No, or at least not if you have the good version of Occam’s razor (Solomonoff induction). It’s an implication of the simplest possible hypothesis.
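To spell that out in the Solomonoff framing: the universal prior weights a hypothesis by the length of the program that generates the observations,

$$M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs}\ x\ldots} 2^{-|p|},$$

so all the weight comes from the length of the program $p$, never from the length or weirdness of what $U(p)$ goes on to print.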
Saying that it violates Occam’s razor is like saying that black holes violate Occam’s razor, as if cosmology would be simpler if we presumed that they don’t exist. Really, the simplest plausible model of cosmology implies the existence of black holes; even if you’ve never seen one yourself, you should be inclined to believe in them as an implication of the existence of gravity.

falsifiable
But nor is the negation falsifiable, so to disbelieve it is at least as foolish as believing it. We are condemned to uncertainty and so we must become serious about guessing.
merely displacing the problem rather than resolving it
The simulation hypothesis has never been presented as an approach to explaining existence. It has been exclusively discussed by atheists who are too squeamish to acknowledge that there might be problems it solves, because that would elevate it to the level of not just theology but religion. But I concur with them: there is no need to acknowledge these things, today.
Regarding signs,
Fermi paradox
true, but
quantum indeterminacy and the observer effect, universal fine-tuning, Planck length/time, speed of light, holographic principle, information paradox in black holes, etc.
This line of argument, that physics looks like it’s been designed to be efficiently computable, is invalid afaik: lots of stuff actually isn’t discrete, and the time complexity of simulating the laws of physics is, afaik, exponential in particle count.
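(The textbook version of that complexity claim: the joint state of $n$ entangled qubits lives in a Hilbert space of dimension

$$\dim\left(\mathcal{H}_2^{\otimes n}\right) = 2^n,$$

so naive classical simulation scales exponentially in particle count.)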
Presumably, that which once identified as human would “wake up” and realize its true identity.
Why would it ever forget? Why would this be sleep? That’s a human thing. If there’s some deep computational engineering reason that a mind will always be better at simulating if it enters a hallucinatory condition, then indeed that would be a good reason for it to use a distinct and dissimilar kind of computer to run the simulation, because a lucid awareness of why the simulation is being run, remembering precisely what question is being pursued, and the engagement of intelligent compression methods, are necessary to make the approximation of the physics as accurate*efficient as possible.
Though if it did this, I’d expect the control interface between the ASI and the dream substrate to be thicker than the interfaces humans have with their machines, to the extent that it doesn’t quite make sense to call them separate minds.
Sadly, I checked: “anthropic sabotage” does not refer to a hot new metaphysical exploit; they just mean sabotaging Anthropic (the company).
Yeah, there seems to be a common idea that talking about taste is impossible. It’s not, and it’s the most important thing in the world for writers to do; it’s a critical part of learning to write for other people instead of just writing for yourself.
One should never refer to it as ‘consciousness’, because of how strong the conflation of those things is under the common sense of the word, and I believe that there isn’t a need to salvage ‘consciousness’; we have better words for this exclusion: you can call it observer-measure, indexical prior, subjectivity, or experiencingness. Philosophers will recognise ‘subjectivity’ or ‘experiencingness’, but the other names are new. They are a result of the bayesian school being much better at thinking about this kind of thing than academic philosophers have been, to the extent that I don’t see a reason to sacrifice clarity to stay in dialog with them.
The word consciousness inherently conflates metacognition and capacity for qualia. It is a word that refers to the combination of those things. Similar to the way “AGI” (at this point) carries a presumption of a link between human-level cognition and recursive self-improvement, only unlike that case there was never actually a basis for thinking that the components of consciousness would be intrinsically linked. Systems that can experience qualia but can’t tell us about it have been assumed to lack qualia for no actual reason.
When you’re trying to talk to anyone about consciousness, this is usually the first thing you have to work on.
A lot of the time when you’re using an AI for research assistance, it’ll fail to do a web search, and you’ll get mad at it because you know it knows that this wasn’t in the dataset, it knows this isn’t the kind of question that can be answered based on vibes; it declines to do a web search because it’s assuming you won’t catch that and it’s trying to save the company money.
This morning as I was waking up I got mad at a piece of my brain for declining to do a web search.
I couldn’t easily dismiss the feeling. And I entreated the feeling: “it is perhaps unreasonable to expect a lump of meat to do a web search, as it doesn’t have an internet connection, it never has”, and the feeling said “Well it should! It’s 2025!”
So I guess I’ve started to genuinely feel the ache of not having computer telepathy. And I feel like the feeling might actually be right about this. Until I can do web searches while I’m dreaming I will in a sense lack the full virtue of empiricism. I will not be able to access real data during my daily self-guided post-training.
Do you have an operational definition of what properties you think makes you “panpsychist” rather than “non-psychist”?
Hmm, well. Maybe this is what you’re looking for: (I’m opposed to calling it nonpsychism because it doesn’t actually refute experience, but) I do not believe that one can perceive one’s own experiential measure.
One can make reports about what one’s experience would consist of, but one can’t actually report how much experience (if any) there is. There is no way to measure that thing, for the same reason there’s no way to know the fundamental substrate of reality, because in fact it’s the same question: it’s the question of which patterns directly exist (aren’t just interpretations being projected onto reality as models).

One very concrete operationalization of my UD-Panpsychism is that I think the anthropic prior just is the universal distribution. If you put me in a mirror chamber situation I would literally just compute my P(I am brain A rather than brain B) by taking the inverse K of translators from possible underlying encodings of physics to my experiential stream. (idk if anyone’s talked about that method before, but I’m getting an intuitive sense that if you’re conditioning on a particular type of physics then that’s a way of getting measure that’s slightly closer to feasible than just directly solomonoffing over all possible observation streams.)
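In notation (mine, nothing standard):

$$P(\text{I am }A \mid \varphi)\;\propto\;\sum_{t\,:\,t(\varphi)=s_A} 2^{-|t|}\;\approx\;2^{-K(s_A\mid\varphi)},$$

where $\varphi$ is a candidate underlying encoding of physics, $s_A$ is brain A’s experiential stream, and $t$ ranges over translator programs; comparing A and B then reduces to comparing $K(s_A\mid\varphi)$ with $K(s_B\mid\varphi)$.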
I use it not because I think the UDP measure is ‘correct’, but because it is minimal, and on inspection it turns out there’s no justification for adding any additional assumptions about how experience works, it’s just a formal definition of a humble prior.
I can have a strong intuition that consciousness, as I experience it, is probably a function of complexity and specific configurations of storage and processing, which humans have much more
It’s kinda wonderful to hear you articulate that. I used to have this intuition and I just don’t at all right now. I see it as a symptom of this begged belief that humans have been perceiving that they have higher experiential measure than other things do. Lots of humans think they’re directly observing that, but that is a thing that by its nature cannot be seen, and isn’t being seen, and once you internalise that, you no longer need to look for explanations of why the human brain might be especially predisposed to generate or catch experiencingness more than other systems, because there’s just no reason to think it is.
why [...] do brains in particular seem to have a lot of it?
We don’t actually have much (or any) evidence that they do. That is not the kind of thing that can be observed. (this is the main bitter bullet that has to be bitten to resolve the paradox)
I think magnitude of pleasure and pain in a system is going to be defined as the experiential measure of the substrate times some basically arbitrary behaviourist criterion which depends on what uplifted humans want to empathise with or not, which might be weirdly expansive, or complicated and narrow, depending on how the uplifting goes.
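That is, roughly:

$$\text{hedonic magnitude}(s)\;=\;\mu(s)\times B(s),$$

where $\mu(s)$ is the experiential measure of substrate $s$ and $B(s)$ is whatever behaviourist criterion the uplifted settle on.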
Experiencingness doesn’t make them say that, it also isn’t the thing that’s making you say that. Everything that’s making you say you’re conscious is just about the behaviors of the material, while the magnitude of the experience, or the prior of being you, is somewhat orthogonal to the behaviour of the material.
You probably shouldn’t be asking me about “consciousness” when I already indicated that I don’t think it’s a coherent term and never used it myself.
I’ll just publicly declare that I’m a panpsychist. I feel that panpsychism doesn’t really need to be explicitly argued for. As soon as it’s placed on the table you’ll have to start interrogating your reasons for not being one, for thinking that experiential measure/the indexical prior is intrinsically connected to humanlikeness in some way, and you’ll realise there were never really good reasons; it was all sharpshooter fallacy, streetlamp fallacy, the conflation of experience with humanlike language-memory-agency sustained under that torturous word “consciousness”. You’ll realise that this dualist distinction between existence and experience isn’t doing any work, and one day you’ll wake up and find that instead of maintaining those two measures, you only use one.
And it’s an irresistible position in the aesthetic realm as well: if forced there, a panpsychist can start wearing jazzy colors, partying with animists, re-enchanting nature, and accusing their opponents of “anthropocentrism”.
It seems distasteful to be sure. A moral failure. But how bad is it really?
I’m completely over finding stuff like that aesthetically repellent, after hearing Flashbots (a project to open source information about MEV techniques to enable honest hosts to compete) talking about MEV (miner-extractable value: Ethereum hosts taking bribes to favour some transactions over others), being overwhelmed by the ugliness of it, then realising, like.. preventing people from profiting from information asymmetries is obviously unsolvable in general. The best we can do is reduce the amount of energy that gets wasted on it, and the kind of reflexive regulations people would try to introduce here would be counterproductive; the interventions that work tend to look more like acceptance and openness.
And I think trying to solve it on the morality/social-ostracism layer is an example of a counterproductive approach, because that just leads to people continuing to do it, but invisibly and incompetently. And I suspect that if it were visible and openly discussed as a normal thing, it wouldn’t even manifest in a way that’s harmful. That’s going to be difficult for many to imagine, because we’re a long way from having healthy openness about investing today. But at its adulthood I can imagine a culture where politicians are tempered, by their experiences in investing, into adopting the realist’s should, where their takes about where America should go are forced into alignment with their beliefs about where it can go, which are now being exposed in their investing decisions.
It’s extremely common for US politicians to trade on legislative decisions, and I feel like this is a better explanation for corruption than political donations are. Which is important, because it’s a stupid, and so maybe fragile, reason for corruption. The natural tendency of market manipulation is in a sense not to protect incumbents but to threaten them, because you can make way, way more money off of volatility than you can off of stasis.
So in theory, there should exist some moderate and agreeable policy intervention that could flip the equilibrium.
I have a strong example for simulationism, but I guess that might not be what you’re looking for. Honestly, I’m not sure I know any really important multiversal trade protocols. I think their usefulness is bounded by the generalizability of computation, or the fact that humans don’t seem to want any weird computational properties...? Which isn’t to say that we won’t end up doing any of them, just that it’ll be a thing for superintelligences to think about.
In general I’m not sure this requires avoiding making your AI CDT to begin with; I think it’ll usually correct its decision theory later on? The transparent-Newcomb/Parfit’s-hitchhiker moment, where it knows that it’s no longer being examined by a potential trading partner’s simulation/reasoning and can start to cheat, never comes. There’s no way for a participant to, like, wait for the clones in the other universe to comply and then defect. You never see them comply, you’re in different universes, there’s no time-relation between your actions! You know they only comply if (they will figure out that) it is your nature to comply in kind.
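A toy program-equilibrium sketch of that “comply in kind” logic (my illustration, not anything from this thread): a policy that cooperates exactly with syntactic copies of itself, so neither side ever needs to see the other move first.

```python
import inspect

# Cooperate exactly when the counterpart is, syntactically, running this
# very policy. Neither copy "waits to see the other comply"; each complies
# because compliance is verifiably its counterpart's nature too.
def fairbot(opponent_source: str) -> str:
    return "C" if opponent_source == inspect.getsource(fairbot) else "D"

me = inspect.getsource(fairbot)
print(fairbot(me))                               # C: copies cooperate
print(fairbot("def defectbot(_): return 'D'"))   # D: defectors get nothing
```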
I do have one multiversal trade protocol that’s fun to think about though.
You don’t need certainty to do acausal trade.
If it’s finite you don’t know how many entities there are in it, or what proportion of them are going to “trade” with you, and if it’s infinite you don’t know the measure (assuming that you can define a measure you find satisfying).
These are baby problems for baby animals. You develop adequate confidence about these things by running life-history simulations (built with a kind of continual, actively reasoning performance-optimization process that a human-level org wouldn’t be able to contemplate implementing), or just by surveying the technological species in your lightcone and extrapolating. Crucially, values lock in after the singularity (and intelligence and technology probably converge to the top of the S-curve), so you don’t have to simulate anyone beyond the stage at which they become infeasibly large.
Conjecture: absolute privacy, plus the absolute ability to selectively reveal any information one has, is theoretically optimal; transparency beyond that won’t lead to better negotiation outcomes. Discussion of the privacy/coordination tension has previously missed this. Specifically, it has missed the fact that technologies for selectively revealing self-verifying information, such as ZKVMs, suggest that the two are not in tension.
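A toy sketch of what I mean, in the weakest possible form (plain hash commitments rather than a ZKVM, and every name in it is illustrative, not a real protocol): commit to all of your facts up front, then reveal only the ones a given negotiation needs, in a way the counterparty can verify against the commitments.

```python
import hashlib
import os

# A ZKVM can prove arbitrary predicates about hidden data; this toy only
# reveals whole facts, but it shows the shape of
# "absolute privacy + selective revelation".

def commit(facts: dict) -> tuple:
    """Commit to every fact up front: publish digests, keep salts private."""
    salts = {k: os.urandom(16) for k in facts}
    digests = {k: hashlib.sha256(salts[k] + facts[k].encode()).hexdigest()
               for k in facts}
    return digests, salts

def reveal(facts: dict, salts: dict, key: str) -> tuple:
    """Selectively reveal one fact, with the salt that makes it checkable."""
    return key, facts[key], salts[key]

def verify(digests: dict, key: str, value: str, salt: bytes) -> bool:
    """Anyone holding the public digests can verify a revealed fact."""
    return hashlib.sha256(salt + value.encode()).hexdigest() == digests[key]

# Commit to everything, reveal only what this negotiation needs.
facts = {"reserve_price": "120", "walk_away_date": "2026-01-01"}
digests, salts = commit(facts)
key, value, salt = reveal(facts, salts, "reserve_price")
assert verify(digests, key, value, salt)
```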
As to what’s a viable path to a more coordinated world in practice, though, who knows.
Depends on whether you think the metaverse’s primary output will be entertainment or patents, whether you’ll get communities of distraction or communities of care. We don’t actually know; it’s a question about human nature that has never been tested before.
The market for entertainment goods in the virtual world isn’t as big as Meta want you to believe. The best experiences you can (or could) have on the metaverse currently are (were) decidedly high in piracy: people wearing likenesses they don’t own, watching movies together that weren’t licensed for the platform. These experiences were and will remain very janky and rough. It’s very difficult for copyright holders to adapt to a new world. If you look at, say, Beat Saber, you can see a great example of licensing deals just failing to be made, so people mod their Beat Saber to play unlicensed music, and Meta permit this (ambiguously; it’s pretty inconvenient to do), and I expect that to continue.
And the potential to generate real value in the metaverse (eg, patents, education, computer-mediated engineering work, logistical labor, remote robot-operation labor) is higher than you’d think. Virtual reality is more able to support social connection than prior online mediums, so there’s more potential for people getting organised, caring about each other, doing real stuff together, and learning from each other. But there’s also more potential for people to feel more socially cut off from industrious people, to develop reduced interest in written content, to sink into communities of entertainment. It’s possible that increasing social health decreases distractive behaviours, Rat Park style, and once you have actual online community, maybe the internet manifests its potential as a place of learning.
And if it does, then the metaverse increases wealth by producing innovations that make primary goods cheaper, creating real jobs, and increasing everyone’s access to training.
So it’s hard to call.