Game Practitioner http://aboutmako.makopool.com
mako yass
I think the question of P(God) is a lot more difficult to answer than the surveyors realize. We’re all cognizant of the possibility that the known universe is a simulation, a machine constructed by some intelligent entity in a higher-level universe. Some of us consider the probability of the Simulation scenario to be high, if not near-certain, for reasons I won’t go into here.
Many of us define “natural” as “happened in reality”, and thus define “supernatural” as “did not happen”. Those people may rightfully assign a literal 0 to P(God); that is a statement as sure as the axioms of the logic it’s formulated in. The rest of us, though, those of us who believe that all words should by virtue of their existence have a use, have to think a bit more. If “supernatural” means anything, I’d be surprised if anyone here holds a definition that does not render the Simulation scenario equivalent to the God scenario, as God was defined in the survey.
To me, “Supernatural”, if you’re going to use it, could only mean “so mysterious as to be beyond being reasoned about or modelled”. The work of the simulator’s hand would definitely qualify as such, and so too would the simulator itself. Lo, a supernatural creator. God is reality.
Incidentally, LW has a preferred local understanding of “supernatural,” which derives from this post.
Ah. Hardly incidental. I wish I’d known about that. I hold that the definition (if it belongs to anyone at all) belongs to those who self-identify as believers in the supernatural, and this form feels far more like what I’d expect to find in their heads: great lumbering atomic concepts that can’t be broken down to the stuff of ordinary reasoning.
Add The Matrix Lords to the list. If you take acausal cooperation seriously, your relationship with them is a complex one.
That’s true of most frequently referenced elements of human nature, if not all of them.
Even Love.
~The Homo Sapiens class has a trusted computing override that enables it to lock itself into a state of heightened agreeability towards a particular target unit. More to the point: it can signal this shift in modes in a way that is both recognizable to other units and which the implementation makes very difficult to forge. The Love feature then provides HS units on either side of a reciprocated Love signal a means of safely cooperating in extremely high-stakes PD scenarios without violating their superrationality-circumvention architecture.
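The cooperation claim above can be made concrete with a toy model: treat Love as an unforgeable commitment signal that gates cooperation in a one-shot Prisoner’s Dilemma. The payoff matrix and the signalling rule below are illustrative assumptions of mine, not anything specified in the comment.

```python
# Toy model: an unforgeable "love" signal lets two agents cooperate in a
# one-shot Prisoner's Dilemma they would otherwise defect in.
# Payoff values are conventional PD numbers, chosen for illustration.

PAYOFFS = {  # (my move, their move) -> my payoff
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def choose(my_signal, their_signal):
    """Cooperate only under a reciprocated, unforgeable commitment."""
    return 'C' if my_signal and their_signal else 'D'

def play(signal_a, signal_b):
    """Play one round; returns (payoff_a, payoff_b)."""
    a = choose(signal_a, signal_b)
    b = choose(signal_b, signal_a)
    return PAYOFFS[(a, b)], PAYOFFS[(b, a)]
```

Only mutual signalling reaches the (3, 3) cooperative payoff; any unreciprocated or absent signal collapses to mutual defection, which is what makes forging the signal pointless.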
Hmm. On reflection, one would hope that most effective designs for time-constrained intelligent (decentralized, replication-obsessed) agents would not override superrationality (“override”: is it reasonable to talk about superrationality as a natural consequence of intelligence?), and that, in that case, the love override may not occur.
Hard to say.
It is a wholly inadequate analogy. Player Characters are supposed to be the ones with the agency, right? But most Player Characters are confined to a low-level domain of expertise (not metaethics, communication, social organization, or economics, but scavenging and combat), and thus, to do any good in the world, must defer to someone with a more high-level worldview (someone they rightly trust well enough to tell them where humanity needs them). Either that, or they tend to undergo their campaigns in some twisted amusement ride under the thumb of a perverse god (Moloch, most likely), where straying from their specialization is simply not on offer.
In short: those who live life like a game, fulfilled and decisive, either must or at least should follow (or advise) some higher authority who knows how to fit their domain into the broader needs of the species.
The rest of us, those of us more inclined to insatiable curiosity and pensiveness, are not player characters. We are the DMs who designed the game to keep its Player Characters happy in doing good.
You have not explained why #1 is necessarily life-alteringly bad.
By “advice in the comments”, you mean new entries to the repositories, right? So you’re suggesting that we fragment the repository across a number of separate comment sections labeled by year, and that is a really awful way of organizing a global repository of timeless articles.
If you’re worried about incumbents taking disproportionate precedence in the list (as more salient posts tend to get more attention, more votes, and thus more salience), IIRC, Reddit has a comment ordering that’s designed to promote posts on merit rather than seniority. If that isn’t sufficient to address incumbent bias, then we should probably be talking about building a better one.
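The merit-based ordering I have in mind is, as I recall, the one behind Reddit’s “best” sort: rank comments by the lower bound of the Wilson score confidence interval on their upvote proportion, a pessimistic quality estimate that lets a young comment with a few good votes outrank an entrenched one with a mediocre ratio. A sketch (the function name and z value are my own choices):

```python
import math

def wilson_lower_bound(upvotes, downvotes, z=1.96):
    """Lower bound of the Wilson score interval for the true upvote
    proportion, at ~95% confidence (z = 1.96). Ranking by this value
    rewards quality rather than seniority: more votes at the same
    ratio tighten the interval and raise the bound."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - margin) / denom
```

Sorting comments by this score in descending order is the whole algorithm; seniority only matters insofar as older comments have had time to accumulate votes.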
Any being that does not at some point consider the possibility that it is inside a simulation, is not worth simulating.
/r/IAMA will also teach you not to trust an elaborate story just because you can’t think of any reason someone would lie about it. Turns out people still lie about it. Perhaps they have their reasons, which we’ll never know, or perhaps they’re just creative writing students looking to test their skills.
Either way, there are reasons proof of identity is now required for every AMA. A long, storied history of reasons.
Is the proliferation of a policy of not checking main a problem? Shouldn’t we do something about it? Something like posting extremely relevant articles to main?
If Dunbar’s limit is the issue, if you feel the community is too large to feel like a community, or if you feel that the community has become a granfalloon, not as united as it might like to think, there’s a fairly straightforward solution: controlled mitosis. Demarcate houses, build ourselves a sorting hat (possibly in the form of a quiz, although asking meaningful questions in these things is much harder than you might imagine), send people on to their respective common rooms, and tell them to make friends with everyone.
It would be deleterious to demarcate the houses with ideological boundaries. Better that they’re drawn along practical specializations to aid in developing rationalist outlooks in specific fields of praxis.
As much as importing themes from the HP universe sounds fun, I don’t think anything as fanciful as that would work for us. If the system works, it’s unlikely that the splits we end up with would seem silly.
I think it’d be interesting to make a system that tries to cluster people into small, somewhat arbitrary groups of 8-10 people, under the expectation that everyone will get to know everyone else and nobody will slip through the cracks. A group much larger than that can have little in the way of intimacy.
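As a baseline, the sorting hat could be as simple as a seeded random partition into rooms of roughly the target size. Everything below (names, the size heuristic) is a hypothetical sketch of mine, not a worked-out design:

```python
import random

def sort_into_rooms(people, room_size=9, seed=None):
    """Randomly partition a community into 'common rooms' of roughly
    room_size members (aiming for the 8-10 range). Round-robin
    assignment after a shuffle keeps room sizes within one of each
    other."""
    rng = random.Random(seed)
    shuffled = list(people)
    rng.shuffle(shuffled)
    n_rooms = max(1, round(len(shuffled) / room_size))
    rooms = [[] for _ in range(n_rooms)]
    for i, person in enumerate(shuffled):
        rooms[i % n_rooms].append(person)
    return rooms
```

A real system would presumably replace the shuffle with some similarity or complementarity clustering, but the partition-into-small-cells skeleton stays the same.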
I think a lot of this is a technological issue. Build the right system of sorting hats and virtual common rooms, and it’ll just happen. Until you do, it can’t happen.
Truly doing god’s work, insofar as we can infer what that would be.
I don’t think objections should be listed in the article. We have a much better medium for reading, browsing and forwarding living debates than a wiki. In the case of the second objection, I don’t believe that’s either a common objection or a compelling one. Superrationality does not mandate that we care about universes we can’t affect, which will never affect us. If you define “us” in such a way as to include our extrauniversal counterparts, you get something which resembles that, but by definition it is not what it resembles.
The link to A Kruel’s blog has 404ed. What did it say? My bet is on “something superficially respectable under the transhumanist aesthetic but irredeemably incorrect”, because I’ve never seen a Kruel post that wasn’t like that.
The interface design example is classic Rationalism versus Empiricism. The empiricist consults nature, iterating and testing until they have something that observably works; speed of work, not depth of thought, is what matters. The classical rationalist doesn’t bother, because they have so much faith in their models and deductive apparatus (in other words, a very, very good imagination, mental or virtual) that they believe they can tell whether something will work just by looking hard at concepts while occupying the simulated mindsets of imaginary users. While I recognize that understanding user needs and perceptions from a distance is hard, I maintain that the rationalist approach, on the right set of shoulders, is extremely valuable in interface design; there are local maxima you can’t get past with A/B testing and user interviews.
You need to be thoughtful as all hell to do something new without ruining it in ten different ways. IMO, infinite scrolling is a good example. The design community has collectively decided that the idea is fundamentally broken for all sorts of reasons, because none of them seem to be thoughtful enough to sit down, answer each of the criticisms, and see that every single one of them can be patched, instead of just looking at what has been done and generalizing from how those attempts went.
You cannot create truly new things without depth of thought, without a focused, accurate enough imagination to find the flaws before building one too many failed attempts and giving up.
[1] citation pending. I’ll probably push out my black swan infinite scroll implementation at some point in the first or second quarter.
The allocation of land is, as far as I’m aware, a bit of an unsolved problem in schemes like this, and it is assuredly something you need to solve. Others have pointed out that voting with one’s feet is not necessarily going to put pressure on a government to change. In fact, with territory size kept constant, many of the people in positions of power might welcome emigration for the increase in land availability. The enforcing body of national selection needs to take deliberate steps to ensure that dissatisfying nations actually go away eventually. By Sturgeon’s law, unless their borders are shrinking, pick any section of land at random and there will be a 90% chance that the government tasked with optimizing its use for the maximization of human flourishing is shit.
Any solution to the allocation of land will have to deal with constantly shifting borders.
Neighboring governments will have to find ways to agree to border movement.
Anyone on the border ends up facing a choice between moving to a neighboring country or losing their home to it. Since a border advancing over one’s home will usually go in lockstep with population decreases somewhere else in the nation, a government will often be able to set up some kind of exchange deal, though this will not be a complete solution. The land vacated by malcontented emigrants/exiles is rarely going to be as valuable as the land being taken from a contented citizen who liked their place in the nation well enough to stay.
If you allocate land in proportion to the number of people in each nation, you lock in a certain way of life, precluding potentially valuable experimentation in the feasibility of lifestyles in dense populations or in the joy of lifestyles in sparse settlements. Maybe nations that optimize the joy of a few are, in total, preferable to nations with denser, merely satisfied people? Is it the place of the stewards of national selection to say? Maybe. I’d guess the LW community has probably thought about that moral question quite a lot, did we ever turn up an answer?
It does seem like it would be easiest to just allocate each nation total_habitable_land*(nation_population/total_population)*desired_proportion_of_natural_reserves. Though that does make overpopulation pretty much impossible to deal with: reproduction booms become a problem for neighboring states and the world at large, but lead to expansion for the states doing them, all the while no state has the authority to do anything about it.
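For concreteness, here’s that formula as a function. One interpretive assumption on my part: I’m reading the last factor as the proportion of habitable land remaining for nations once natural reserves are carved out (the name as written could also mean the share reserved, in which case you’d use its complement).

```python
def allocate_land(populations, total_habitable_land,
                  proportion_left_after_reserves):
    """Proportional allocation from the comment's formula:
    total_habitable_land * (nation_pop / total_pop) * last_factor.
    The last factor is assumed to be the share of land left for
    nations after natural reserves are set aside."""
    total_population = sum(populations.values())
    return {
        nation: total_habitable_land
                * (pop / total_population)
                * proportion_left_after_reserves
        for nation, pop in populations.items()
    }
```

The over/underpopulation problem shows up immediately: a nation’s share is a pure function of its headcount, so a reproduction boom mechanically expands its allocation at everyone else’s expense.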
others will simply spew vitriol your way
I’m going to have to disagree here, that’s much more helpful to a person than wordless shunning. Like, regardless of how unhelpful it can be, it is a factor of infinity more helpful and more humane than just downvoting. As Eliezer puts it, apathy is sometimes worse than hate. At least someone who hates you cares enough to do something.
I actually think the setup we’ve got here, where you hemorrhage karma every time you engage a downvoted thread, is an obscenely terrible choice for a community of analyticals. A norm of rejecting arguments without feeling any need to explain yourself is much worse for us than a relatively weak time-sink (next to the average mobile game, comment trolls are nothing). At the least, the threshold (-5 karma) is too low.
Unless you’ve chosen a poor sample of the evidence you’re familiar with, your opinion is not going to stop anyone from following their fatuous curiosity, here. The historical cases you refer to seem a couple orders of magnitude more fraught with the spooks of subjective indignation than anything anyone in this community would propose. When an analytic philosopher looks at these things they don’t see decision procedures that should have worked in theory but failed, they don’t see decision procedures at all, they see disagreements in waiting.
I agree that any morally loaded criterion for deciding land reallocations is going to trip over the subjectivity of morality as we know it, especially in a system that’s explicitly designed to support the sovereignty of diverse groups. I believe we can at least come up with a negotiation procedure that returns immediate, unambiguous results that do a pretty okay job of cleaning up vacated territories.
I’ll call this one Simultaneous Haggle Reallocation.
Let’s say that in each term, each state must submit a preference ordering over the areas just outside its border, in neighboring states, and an ordering over the areas just inside its border. The outside list describes the places it will take if its population increases in proportion to its neighbors’; the inside list, the places it will lose if its population decreases, all in order of its desire to hold them. The top elements of the inside list will be the areas the state most wants to keep; the top of the outside list, the areas it most wants to take. If there is a mutually agreeable way forward, an area one state is happy to lose that its neighbor very much wants, or an area it won’t part with for cultural reasons that its neighbor doesn’t share, that is the trade that will be made.
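A minimal sketch of the trade-matching rule, under my own simplifying assumptions: each state’s candidate release is the tail of its inside (“keep”) list, and a trade fires when that area heads a neighbor’s outside (“want”) list. All names and the data shape are hypothetical.

```python
def find_trades(states):
    """states: name -> {'keep': areas held, most-wanted-to-keep first,
                        'want': neighboring areas, most-wanted first}.
    Returns (loser, gainer, area) triples where the area a state is
    most willing to release is exactly the area a neighbor most wants,
    i.e. the mutually agreeable trades."""
    trades = []
    for loser, loser_prefs in states.items():
        for gainer, gainer_prefs in states.items():
            if loser == gainer:
                continue
            if not loser_prefs['keep'] or not gainer_prefs['want']:
                continue
            release = loser_prefs['keep'][-1]  # least-valued held area
            if release == gainer_prefs['want'][0]:
                trades.append((loser, gainer, release))
    return trades
```

When no such pair exists, the mechanism has to fall back to something cruder, which is exactly the failure mode discussed next.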
Kind of unfortunate, though: above, I provided a formula that assumes an objective (or at least shared) measure of what constitutes habitable land, or, in a more sophisticated implementation, a measure of the value of the land per acre. The more the archipelago agrees on the relative value of land, the more often the states’ preference orderings will mirror each other. Much of the time, then, Simplistic Simultaneous Haggling as I’ve defined it would just revert to “reallocate at the borders at random (possibly with smoothing), since there’s clearly no mutually agreeable way to settle this”.
It would be fun to run some simulations of this and see what kind of games emerge.
Where does it presuppose that?
Please do! I’d definitely go.