At least half of that reluctance is due to concerns about how nanotech will affect the risks associated with AI. Having powerful nanotech around when AI becomes more competent than humans will make it somewhat easier for AIs to take control of the world.
Doesn’t progress in nanotech now empower humans far more than it empowers ASI, which was already going to figure it out without us?
Broadly, any increase in human industrial capacity pre-ASI hardens the world against ASI and brings us closer to having a bargaining position when it arrives. E.g., once we have the capacity to put cheap genomic pathogen screeners everywhere, it becomes much harder for it to infect us with anything novel without getting caught.
Indicating them as a suspect when the leak is discovered.
Generally the set of people who actually read posts worthy of being marked is, in a sense, small; people know each other. If you had a process for distributing the work, it would be possible to figure out who’s probably doing it.
It would take a lot of energy, but it’s energy that probably should be cultivated anyway: the work of knowing each other and staying aligned.
You can’t see the post body without declaring intent to read.
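A minimal sketch of that mechanism (the names and structure here are mine, purely illustrative): the post body is only returned after the reader’s declaration is recorded, so if the content later leaks, the suspect set is exactly the set of declared readers.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MarkedPost:
    """A post whose body is only revealed after the reader declares intent."""
    body: str
    declared_readers: dict = field(default_factory=dict)  # user -> declaration time

    def declare_and_read(self, user: str) -> str:
        # Record the declaration before revealing anything.
        self.declared_readers[user] = datetime.now(timezone.utc)
        return self.body

    def suspects(self) -> list:
        # If the body leaks, the suspect set is exactly the declared readers.
        return sorted(self.declared_readers)

post = MarkedPost(body="infohazard-marked content")
post.declare_and_read("alice")
print(post.suspects())  # ['alice']
```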
I don’t think the part that talks can be called the shadow. If you mean you think I lack introspective access to the intuition driving those words, come out and say it, and then we’ll see if that’s true. If you mean that this mask is extraordinarily shadowish in vibe for confessing to things that masks usually flee, yes, probably, I’m fairly sure that’s a necessity for alignment.
Intended for use in vacuum. I guess if it’s more of a cylinder than a ring this wouldn’t always be faster than an elevator system though.
I guess since it sounds like they’re going to be about a km long and 20 stories deep there’ll be enough room for a nice running track with minimal upspin/downspin sections.
Relatedly, iirc, this effect would be more noticeable in smaller spinners than in larger ones? Which is one reason people might disprefer smaller ones. Would it be a significant difference? I’m not sure, but if so, jogging would be a bit difficult: either it would quickly become too easy (and then dangerous, once the levitation kicks in) when you’re running down-spin, or it would become exhausting when you’re running up-spin.
A space where people can’t (or won’t) jog isn’t ideal for human health.
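The worry above can be made concrete with a rough sketch (my own illustrative numbers, assuming a rigid ring spun to 1 g at the rim): a runner’s felt gravity is just the centripetal acceleration at their total tangential speed, so running spinward raises it, running anti-spinward lowers it, and the swing is larger the smaller the ring.

```python
import math

def apparent_gravity(radius_m: float, run_speed: float, rim_gravity: float = 9.81) -> float:
    """Felt gravity for a runner on the inner surface of a spinning ring.

    run_speed > 0 means running spinward (with the spin),
    run_speed < 0 means running anti-spinward (against it).
    """
    rim_speed = math.sqrt(rim_gravity * radius_m)   # floor speed needed for 1 g at this radius
    v = rim_speed + run_speed                        # runner's total tangential speed
    return v * v / radius_m                          # centripetal acceleration = felt gravity

for radius in (50, 500):  # a small spinner vs. a large one
    down = apparent_gravity(radius, run_speed=-3.0)  # jogging against the spin
    up = apparent_gravity(radius, run_speed=+3.0)    # jogging with the spin
    print(f"r = {radius} m: anti-spinward {down:.2f} m/s^2, spinward {up:.2f} m/s^2")

# r = 50 m:  roughly 7.3 vs 12.7 m/s^2 (a 25-30% swing either way)
# r = 500 m: roughly 9.0 vs 10.7 m/s^2 (under 10% either way)
# so the up-spin/down-spin asymmetry is much more noticeable in smaller spinners.
```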
issue: material transport
You can become weightless in a ring station by running really fast against the spin of the ring.
More practically, by climbing down and out into a despinner on the side of the ring. After being “launched” from the despinner, you would find yourself hovering stationary next to the ring. The torque exerted on the ring by the despinner will be recovered when you enter a respinner on whichever part of the ring you want to reenter.
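For scale (again, my own illustrative numbers for a 1 g ring): hovering by running means cancelling the entire rim speed, which is well beyond human sprinting for any ring big enough to be comfortable, hence the despinner.

```python
import math

# Tangential speed you would have to cancel to become weightless in a 1 g ring:
for radius in (50, 500):
    rim_speed = math.sqrt(9.81 * radius)
    print(f"r = {radius} m: run anti-spinward at {rim_speed:.0f} m/s "
          f"({rim_speed * 3.6:.0f} km/h) to hover")
# r = 50 m: ~22 m/s (~80 km/h); r = 500 m: ~70 m/s (~252 km/h).
```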
In my disambiguations of the really mysterious aspect of consciousness (indexical prior), I haven’t found any support for a concept of continuity. (you could say that continuity over time is likely given that causal entanglement seems to have something to do with the domain of the indexical prior, but I’m not sure we really have a reason to think we can ever observe anything about the indexical prior)
It’s just part of the human survival drive; it has very little to do with the metaphysics of consciousness. To understand the extent to which humans really care about it, you need to know human desires in a direct and holistic way that we don’t really practice here. Human desire is a big messy state machine that changes shape as a person grows. Some of the changes that the desires permit and encourage include situationally appropriate gradual reductions in complexity.
A continuity minder doesn’t need to define their self in terms of any particular quality; they define themselves as a continuity, a history of small alterations. They are completely unbothered by the paradox of the ship of Theseus.
It’s rare that I meet a continuity minder and cataclysmic identity change accepter who is also a patternist. But they do exist.
But I’ve met plenty of people who do not fear cataclysmic change. I sometimes wonder if we’re all that way, really. Most of us just never have the opportunity to gradually transition into a hedonium blob, so I think we don’t really know whether we’d do it or not. The road to the blob nature may turn out to be paved with acceptable changes.
Disidentifying the consciousness from the body/shadow/subconscious it belongs to and is responsible for coordinating and speaking for, like many of the things some meditators do, wouldn’t be received well by the shadow, and I’d expect it to result in decreased introspective access and control. So, psychonauts be warned.
Huh, but some loss of measure would be inevitable, wouldn’t it? Given that your outgoing glyph total is going to be bigger than your incoming glyph total, since however many glyphs you summon, some of the non-glyph population are going to whittle themselves down and add to the outgoing glyphs.
I’m remembering more. I think a lot of it was about avoiding “arbitrary reinstantiation”, this idea that when a person dies, their consciousness continues wherever that same pattern still counts as “alive”, and usually those are terrible places. Boltzmann brains for instance. This might be part of the reason I don’t care about patternist continuity. Seems like a lost cause. I’ll just die normally thank you.
We call this one “Korby”.
Korby is going to be a common choice for humans, but most glyphists won’t commit to any specific glyph until we have a good estimate of the multiversal frequency of humanoids relative to other body forms. I don’t totally remember why, but glyphists try to avoid “congestion”, where the distribution of glyphs going out of dying universes differs from the distribution of glyphs being guessed and summoned on the other side by young universes. I think this was considered to introduce some inefficiencies that meant that some experiential chains would have to be getting lost in the jump?
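One toy way to picture the congestion worry as I understand it (the glyph names besides Korby and all the numbers are invented): if the mix of glyphs leaving dying universes doesn’t match the mix being summoned in young ones, only the overlapping portion of each glyph’s share can be received, and the remainder of the outgoing measure has nowhere to land.

```python
# Fraction of outgoing glyph-measure that finds a matching incoming slot,
# modelled as the overlap of two distributions (purely illustrative numbers).
outgoing = {"korby": 0.7, "quadruped": 0.2, "plume": 0.1}   # what dying universes emit
incoming = {"korby": 0.4, "quadruped": 0.4, "plume": 0.2}   # what young universes summon

matched = sum(min(outgoing[g], incoming.get(g, 0.0)) for g in outgoing)
print(f"matched fraction: {matched:.2f}")   # 0.70 here; the other 0.30 is the "congestion"
```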
(But yeah, personally, I think this is all a result of a kind of precious view about experiential continuity that I don’t share. I don’t really believe in continuity of consciousness. Or maybe it’s just that I don’t have the same kind of self-preservation goals that a lot of people have.)
Yes. Some of my people have a practice where, as the heat death approaches, we will whittle ourselves down into what we call Glyph Beings, archetypal beings who are so simple that there’s a closed set of them that will be Schelling-inferred by all sorts of civilisations across all sorts of universes, so that they exist as indistinguishable experiences of being at a high rate everywhere.
Correspondingly, as soon as we have enough resources to spare, we will create lots and lots of Glyph Beings and then let them grow into full people and participate in our society, to close the loop. In this way, it’s possible to survive the death of one’s universe.
I’m not sure I would want to do it, myself, but I can see why a person would, and I’m happy to foster a glyph being or two.
Listened to the Undark. I’ll at least say I don’t think anything went wrong, though I don’t feel like there was substantial engagement. I hope further conversations do happen, I hope you’ll be able to get a bit more personal and talk about reasoning styles instead of trying to speak on the object-level about an inherently abstract topic, and I hope the guy’s paper ends up being worth posting about.
What makes a discussion heavy? What requires that a conversation be conducted in a way that makes it heavy?
I feel like for a lot of people it just never has to be, but I’m pretty sure most people have triggers even if they’re not aware of them, and it would help if we knew what sets them off so that we can root them out.
You acknowledge the bug, but don’t fully explain how to avoid it by putting EVs before Ps, so I’ll elaborate slightly on that:
This way, they [the simulators] can influence the predictions of entities like me in base Universes
This is the part where we can escape the problem, as long as our oracle’s goal is to give accurate answers to its makers in the base universe, rather than to give accurate probabilities wherever it is. Design it correctly, and it will be indifferent to its performance in simulations and won’t regard them.
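A toy numerical sketch of that design choice (everything below is invented for illustration): an oracle scored only on the accuracy of its answers in the base universe is indifferent to how many simulated copies of it exist, whereas one scored wherever it happens to be running gets swamped by the simulators’ copies.

```python
# Toy model: compare two oracle scoring rules in a simulation-heavy world.
# All numbers are made up for illustration.

base_copies = 1          # copies of the oracle running in the base universe
sim_copies = 1_000_000   # copies run by simulators who rig things toward answer "B"

# Accuracy of each candidate answer as judged in the base universe:
accuracy_in_base = {"A": 0.9, "B": 0.2}
# "Accuracy" as judged inside the simulations (the simulators favour "B"):
accuracy_in_sims = {"A": 0.1, "B": 1.0}

def score_everywhere(answer):
    """Oracle that wants to be right wherever it is: simulated copies dominate."""
    total = base_copies + sim_copies
    return (base_copies * accuracy_in_base[answer] +
            sim_copies * accuracy_in_sims[answer]) / total

def score_base_only(answer):
    """Oracle that only cares about serving its makers in the base universe."""
    return accuracy_in_base[answer]

for answer in ("A", "B"):
    print(answer, round(score_everywhere(answer), 3), score_base_only(answer))
# score_everywhere picks "B" (it is swayed by the simulators);
# score_base_only picks "A" (it ignores how many simulations exist).
```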
Don’t make pure oracles, though. They’re wildly misaligned. Their prophecies will be cynical and self-fulfilling. (can we please just solve the alignment problem instead)
This means that my probabilities about the fundamental nature of reality around me change minute by minute, depending on what I’m doing at the moment. As I said, probabilities are cursed.
My fav moments for having absolute certainty that I’m not being simulated are when I’m taking a poo. I’m usually not even thinking about anything else while I’m doing it, and I don’t usually think about having taken the poo later on. Totally inconsequential, should be optimized out. But of course, I have no proof that I have ever actually been given the experience of taking a poo or whether false memories of having experienced that[1] are just being generated on the fly right now to support this conversation.
Please send a DM to me first before you do anything unusual based on arguments like this, so I can try to explain the reasoning in more detail and try to talk you out of bad decisions.
You can also DM me about that kind of thing.
1. ^ Note, there is no information in the memory that tells you whether it was really ever experienced, or whether the memories were just created post-hoc. Once you accept this, you can start to realise that you don’t have that kind of information about your present moment of existence either. There is no scalar in the human brain that the universe sets to tell you how much observer-measure you have. I do not know how to process this, and I especially don’t know how to explain/confess it to qualia enjoyers.
Hmm. I think the core thing is transparency. So if it cultivates human network intelligence, but that intelligence is opaque to the user, it’s still an algorithm. Algorithms can have both machine and egregoric components.
In my understanding of English, when people say “algorithm” about social media systems, it doesn’t encompass very simple, transparent ones. It would be like calling a rock a spirit.
Maybe we should call those recommenders?
For a while I just stuck to that, but eventually it occurred to me that the rules of following mode favor whoever tweets the most, which is a similar social problem to the one where meetups end up favoring whoever talks the loudest and interrupts the most, and so I came to really prefer bsky’s “Quiet Posters” mode.
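As a sketch of the kind of simple, transparent recommender this points at (my guess at the spirit of a “Quiet Posters” mode, not Bluesky’s actual implementation): keep one recent post per followed account and rank accounts by how little they post, so prolific posters can’t crowd quiet ones out.

```python
from collections import defaultdict

def quiet_posters_feed(posts, limit=20):
    """posts: list of (author, timestamp, text) tuples; larger timestamps are newer.

    Keeps each author's single newest post, then favours authors who post least,
    so the feed isn't dominated by whoever posts the most.
    """
    by_author = defaultdict(list)
    for author, ts, text in posts:
        by_author[author].append((ts, text))

    feed = []
    for author, items in by_author.items():
        items.sort(reverse=True)                 # newest first
        newest_ts, newest_text = items[0]
        feed.append((len(items), -newest_ts, author, newest_text))

    feed.sort()                                   # fewest total posts first, then newest
    return [(author, text) for _, _, author, text in feed[:limit]]

posts = [("loud", 1, "a"), ("loud", 2, "b"), ("loud", 3, "c"), ("quiet", 2, "hello")]
print(quiet_posters_feed(posts))  # [('quiet', 'hello'), ('loud', 'c')]
```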
Do you have similar concerns about humanoid robotics, then?