I’m generally a fan of pursuing this sort of moral realism of the ideals, but I want to point out one very hazardous amoral hole in the world that I don’t think it will ever be able to bridge over for us, lest anyone assume otherwise and fall into the hole by being lax and building unaligned AGI on the assumption that it will be kinder than it actually will be. (I don’t say this lightly: confidently assuming kindness that we won’t get, as a result of overextended faith in moral realism, and thus taking on catastrophically bad alignment strategies, is a pattern I see shockingly often in abstract thinkers I have known. It’s a very real thing.)
There seems to be a rule that inevitable power differentials actually have to be allowed to play out.
It only seems to apply to inevitable power differentials. Interestingly, it doesn’t seem to apply to situations where power differentials emerge from happenstance (for instance, differentials favoring whichever tribe unknowingly took up residence on copper-rich geographies before anyone knew about smelting). In those situations, FDT agents might choose to essentially redistribute: to consummate an old insurance policy against ending up on the bad end of colonization, to swap land, to send metal tools, to generally treat the less fortunate tribes equitably, to share their power. They certainly will if their utility function gives diminishing returns to power, and wealth in humans often seems to work that way, maybe relating to gains from trade or something. (When the utility function gives increasing returns, on the other hand… well, let’s not talk about that.)
But the insurance policy can’t apply in every situation. Consider: it seems obviously wrong to extend moral equity to, for example, a hypothetical or fictional species that can’t possibly emerge naturally, which you’d then have to abiogenerate. And this seems to apply to the descendants of non-fictional extinct species too. Take a species that evolved to strongly select themselves for, say, over-exploiting their environment to irrecoverable degrees, and starved: even if they once existed, their descendants don’t exist now, and couldn’t have, so you don’t owe them anything now.
It’s obvious with chosen differentials (true neartermists, for instance, choose to continuously sell their power, because power over the future is less valuable to them than flourishing in the present). But I don’t really know how to draw the line as crisply as we need it, between accidental and inevitable differentials. I’ll keep thinking about it.
Hmm makes sense if you really don’t care about energy. But how much energy will they need, in the end, to reorganize all of that matter?
I don’t think there’s going to be a tradeoff between expansion and transcension for most agents within each civ, or most civs (let alone all agents in almost all civs). If transcension increases the value of any given patch of space by s^t, and expansion gets you more space at roughly t^3, then the two policies compare as non-expansion: s^t·c versus expansion: s^t·t^3 :/ there’s no contest. If it’s not value per region of space, if one quantity becomes negligible relative to the other, the value of expansion is still bigger than the cost of building one self-replicating expansion probe (which is even more negligible), so they do that.
So the EV of continuing spatial expansion is still positive. Unless you can argue that the countervailing value of leaving the stars fallow grows in proportion to the transcension in some way. It sorta looks that way with humans (some sort of moral term resembling diminishing gains on resources, and a love of history and its artifacts (fallow planets) that grows with population size?), but it could go either way.
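For concreteness, here’s a minimal numerical sketch of that comparison (mine, not part of the original argument; the s = 2 and c = 1 values are arbitrary illustrative assumptions):

```python
# Compare "transcend in place" against "transcend and expand", assuming
# transcension multiplies the value of any patch of space by s**t and
# expansion grows the amount of space held roughly as t**3.
s = 2  # assumed per-step value multiplier from transcension (arbitrary)
c = 1  # value of the single patch a non-expanding civ keeps (arbitrary)

for t in [1, 10, 20, 30]:
    non_expansion = (s ** t) * c         # transcend in place
    expansion = (s ** t) * (t ** 3)      # transcend AND hold ~t^3 more space
    print(t, expansion / non_expansion)  # ratio is t^3: expansion always wins
```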
In situations like that I’d say, rather, that you should process it with reduced energy, in correct proportion. I wouldn’t say you should completely deafen yourself to anyone (unless it’s literally a misaligned AIXI).
I think even this slackened phrasing is not applicable to the current situation, because the people I’m primarily listening to are mostly just ordinary navy staff who are pretty clearly not wired up to any grand disinformation apparatus about UAP.
and we are either completely left alone or have been put in a simulation, in which case occasional UFO sightings don’t seem like an optimal feature of the outcome.
Agreed. A way of using our matter (the earth) for something else, without killing us.
So I’ve been thinking about that. For any simulator, there are things they do and don’t care about capturing accurately in the simulation. I’d guess that the simulation has a lot to do with whether we hold to the reciprocal kind-colonization pacts that they’re committed to themselves. For that, it’s important that the “we” is preserved: we have to be allowed to develop without any major interventions, so that we self-actualize, so that the thing being tested is really us. There may be interventions they can make that wouldn’t interfere with the integrity of the test, that would actually enhance its accuracy, reduce the random noise of a perfectly natural history.
I don’t have an idea of what that would look like. It’s not obvious to me that it would look like UFOs. I can tell a story where it does, but it’s weak so far: to induce us to consider the possibility that our neighbors are already here, without confirming it (or even, before disconfirming it later on (“ah, turns out it was all a ridiculously implausible propulsion research program all along, nothing to see here”)). And maybe that leads us to… no. I shouldn’t go on, today. I just don’t understand what this noise-reduction means and how it should work. I need to think about that more.
Is there writing about that? Last time I thought deeply about reversible computing, it didn’t seem like it was going to be useful for anything we really care about.
I’ll put it this way: if you look at almost any subroutine in a real program, it consists of taking a large set of inputs and reducing them to a smaller output. In a reversible computer, iirc, the outputs have to be as big as the inputs, informationally (yeah, that sounds about right). So you have to be throwing out a whole lot of useless outputs to keep the info balanced; that’s what you have to do to maintain reversibility, but that’s not really different from producing entropy. I expect life and life-like patterns to have that quality, as computations. Life, by nature, is inextricable from time, and the most precise reduction of the forward motion of time is that it consists of the increase of entropy, or something like that.
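To make the “outputs have to be as big as the inputs” point concrete, here’s a small sketch of mine (not from the comment) contrasting an ordinary AND gate with the reversible Toffoli gate, which computes the same AND but has to carry the inputs along as extra output bits, the garbage you eventually have to dump:

```python
def irreversible_and(a: int, b: int) -> int:
    # 2 input bits -> 1 output bit: the inputs can't be recovered; info is lost.
    return a & b

def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    # 3 bits in -> 3 bits out: the third bit is flipped iff a and b are both 1.
    # With c = 0 the third output is AND(a, b), but a and b ride along as
    # extra outputs that a reversible machine must keep (or uncompute) later.
    return a, b, c ^ (a & b)

# Every distinct input maps to a distinct output, so the gate is invertible
# (it is in fact its own inverse), which is what reversibility means here.
outputs = {toffoli(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)}
assert len(outputs) == 8
```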
Even if stars only make up a small fraction of the matter in the universe, it’s still matter, and they’d still probably have something they’d prefer to do with it other than this. I’m not really sure what kind of value system (that’s also power-seeking enough to exert control over a broad chunk of the universe) could justify leaving it fallow.
I will politely decline to undergo epistemic learned helplessness as it seems transparently antithetical to the project of epistemic rationality
Even if it were true, how would they know it was a propulsion technology?
Uh, because there seemed to be a solid object (it showed up on a kind of radar that we don’t know how to spoof) that was moving around really fast, in line with the visual. As stated, I still think it might not be a propulsion technology, but the witnesses don’t tend to float any other possibility. I haven’t seen them asked about the plasma image theory.
I wouldn’t say I think that it’s an Alcubierre drive specifically; what I mean is I don’t know what else to liken it to, and it would seem to share a lot of qualities.
I agree that the videos are not really very interesting at all. The recordings (radar) of the most interesting parts of the encounter were not released. A pilot, Fravor, says they were confiscated (this would be consistent with the US plasma image tech). (I think the former AATIP (the previous UFO reporting program) lead, Luis Elizondo, sort of touches on the way they only released the crappy videos, though I don’t think he really explains it at all, in this conversation with skeptic Mick West.)
It’s squarely relevant to the post, but it is mostly irrelevant to Eliezer’s comment specifically, and I think the actual drives underlying the decision to make it a reply to Eliezer are probably not in good faith; like, you have to at least entertain the hypothesis that they pretty much realized it wasn’t relevant and they just wanted Eliezer’s attention, or they wanted the prominence of being a reply to his comment. Personally I hope they receive Eliezer’s attention, but piggybacking messes up the reply structure and makes it harder to navigate discussions, to make sense of the pragmatics, or to find what you’re looking for, which is pretty harmful. I don’t think we should have a lot of patience for that.
(Eliezer/that paragraph he was quoting was about the actions of large states, or of a large international alliance. The reply is pretty much entirely about why it’s impractical to hide your activities from your host state, which is all inapplicable to scenarios where you are/have a state.)
This is an inappropriate place to put this.
Compute is physically simpler than life. Where there is life, there is necessarily also compute. Where there is compute, there isn’t necessarily also life.
Good, and cheap, is the thing. If we didn’t have silicon computing, we would still have vacuum tubes, we’d still have computers. But as I understand it, vacuum tubes sucked, so I wouldn’t expect that machine learning would be moving so quickly at this point.
If that were the case, there’d be more measure in the next year than in the next second, but you don’t suddenly find yourself a year from now. (right?)
I think you’re imagining the decay running in the wrong direction. I suppose you could define it that way. It seems less natural.
But you can ask a similar question… should I expect to ‘find myself in the previous year’ in some sense? Well, I could. If there were some “I” hopping between every observer-moment in existence (this is a fairly common form of super-utilitarianism), it wouldn’t be perceptible: I wouldn’t remember ever having been elsewhere; our memories are all just properties of whatever vessel we currently occupy.
I’d phrase it more as… if you observe that you’re a human, there’s a prior on finding that you’re in the earliest year (or the earliest cosmological reproductive cycle) in which a lot of humans exist. You could be in a later year, but until you can confirm that with evidence, you consider it less likely.
But that has to trade off against the fact that the number of universes (and so the number of humans) keeps ballooning over time (or even outside of time), and I don’t really know how to navigate that; it could be that you should expect to be in the latest possible universe, because the measure increases from branching outweigh the measure losses from time discounting.
Private information is evil. (Though I’m still on the fence as to whether it’s a necessary evil to avoid world-sized preference falsification cascades.)
Clippy is not ideal, but better than humanity.
There’s a weird genre of paranoia where people worry that the thing we value will turn out to be something we disvalue. But I guess you mean it’s a case where the values of the average LWer differ sharply from the values of the globe, right? (I don’t see that, personally.)
I’m bullish on radical transparency at this point. Whoever is the most unrelentingly brash will seize the next moral aesthetics cycle.
Regarding moving beyond blame minimization, I think it’s worth mentioning my Venture Granters, a system for protecting sane risk-takers in public funding institutions: https://www.lesswrong.com/posts/NY9nfKQwejaghEExh/venture-granters-the-vcs-of-public-goods-incentivizing-good
Research that makes the case for AGI x-risk clearer
I ended up going into detail on this in the process of making an entry for the FLI’s aspirational worldbuilding contest. So it’ll be posted in full about a month from now. But for now, I’ll summarize:
We should prepare stuff in advance for identifying and directly manipulating the components of an AGI that engage in ruminative thought. This should be possible: there are certain structures of questions and answers that will reliably emerge (“what is the big blank blue thing at the top of the image?” “it’s probably the sky”), and such. We won’t know how to read or speak its mentalese at first, but we will be able to learn it by looking for known claims and going from there.
Once we have AGI, we should use this stuff to query the AGI’s own internal beliefs about whether certain catastrophic outcomes would come about, under the condition that it had been given internet access.
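A very rough sketch of what that kind of query could look like mechanically, using an ordinary linear probe. Everything here is hypothetical scaffolding of mine, not a method from the entry: get_activations stands in for whatever interpretability tooling would expose the system’s internal representation of a claim.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def get_activations(text: str) -> np.ndarray:
    # Hypothetical stand-in: in practice this would return the model's
    # internal representation of the claim. Here it's a deterministic
    # pseudo-random vector so the sketch runs end to end.
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    return rng.normal(size=64)

# Claims whose truth we already know, used to locate a "represented as true" direction.
known_claims = [
    ("the big blank blue thing at the top of the image is the sky", 1),
    ("the big blank blue thing at the top of the image is a whale", 0),
    ("water is wet", 1),
    ("fire is cold", 0),
]
X = np.stack([get_activations(text) for text, _ in known_claims])
y = np.array([label for _, label in known_claims])

probe = LogisticRegression(max_iter=1000).fit(X, y)  # the linear belief probe

# Then ask for the system's own expectation about a catastrophic conditional.
query = "if I were given internet access, a catastrophic outcome would follow"
p = probe.predict_proba(get_activations(query).reshape(1, -1))[0, 1]
print(f"probe-estimated internal belief: {p:.2f}")
```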
If the queries return true, then we have clear evidence of the presence of immense danger. We have a Demonstration of Cataclysmic Trajectory. This is going to be much more likely to get the world to take notice and react than the loads of abstract reasoning about fundamental patterns of rational agency, or whatever, that we’ve offered them so far. (Normal people don’t trust abstract reasoning, and they mostly shouldn’t! It’s tricksy!)
From there, national funding for a global collaboration for alignment, and a means to convince security-minded parts of the government to implement the pretty tough global security policies required, so that the alignment project will no longer need to solve the problem in 5 years, and can instead take, say, 30.
(And then we solve the symbol grounding problem, and then we figure out value learning, and then we learn how best to aggregate the learned values, and then we’ll have solved the alignment problem)
Rationalists should be deeply interested in the Princeton-Nimitz encounters regardless of whether it was confusion, aliens, or a secret human technology, because cases of confusion on this level teach us a lot about how epistemic networks operate, and if it were aliens or a secret human technology, that would be strategically significant.
So, since those were pretty much the only possibilities, I was deeply interested.
I eventually settled loosely into the theory that the tictacs were probably a test of a long-range plasma volumetric display decoy/spoofing thing. More from David Brin. I did get the impression that the higher-ups on these ships were consistently, sharply less curious about the UAPs than the rest of the crew: perhaps they’d been warned in advance. There are a few loose threads, though:
We don’t know of a way of spoofing those sorts of radars.
Obama would seem to be lying in saying, of them, “We don’t know exactly what they are”. He could just be lying by omission, though. It’s conceivable to me that when a president starts to realize it’s probably a secret US technology, they will generally pull back from their investigation, lose curiosity, and choose to stay as ignorant as possible, knowing that if they knew, they’d be kind of obligated to tell people, and that would just slightly weaken the US, and potentially increase the distance between military and public representatives, which wouldn’t be healthy.
Earlier presidents seemed more interested than Biden is in getting to the bottom of these things and telling the public, but it’s conceivable that they didn’t have the decoy thing working during Clinton’s term, so there weren’t any actual US UFO techs to report.
There are a couple of little details in the report that don’t line up with this theory (for instance, the tictacs having detailed parts on the bottom, or seeming to be clearly physical objects? (though note, Voorhis reported them having a glow to them at night (I’d guess that they were the glow, and that it was only non-obvious that they were glowing during the day because the sun was bright enough that they could be read as reflective white objects instead), and they were hot on the FLIR)). But I’d expect a certain number of details in any report of a mysterious phenomenon to be confabulations: whenever a person sees anything, they see it through their interpretation of what they think it is; they don’t just give you raw images; that’s not how human sensation or memory or language works. If you want to find a novel (more correct) interpretation of any phenomenon, you have to be prepared to disregard some of the details as confabulations that people made up and perpetuated as a result of seeing everything through the previous interpretation.
I initially agreed that aliens would not look like this. Then Robin Hanson wrote a series of rationalization stories about why an alien civilization might look like this, which has bamboozled me. (In short, his theory was: They’re an extremely centralized, conservative civilization who evolved recently, and nearby, relative to us, due to being siblings of the same panspermia event. They give their visiting parties only limited agency to execute the simplest possible plan that would gradually convince us to look up to them and become like them. (While still allowing us enough doubt and agency that our choice would be meaningful?))
The Princeton-Nimitz reports are unambiguously worth the oxygen it takes to contemplate them, given the consistency of the reports and the ramifications they would have even if it was “just” a human technology. So if you had the virtue of curiosity, you would contemplate it, and you would get led down the path that ends with the resolution that the “lie”, “mistake”, and “human technology” theories don’t really make deep sense either, and a rationalist does indeed have to start considering the other theory: that some aliens end up being much stranger than we would expect.
(But the path doesn’t really end there. It visits. And then, for me, the path ended roughly with: it was probably a test of a pretty novel, surprising, but ultimately probably geopolitically unexciting human technology.)