In most formulations I’ve seen, the word “you” is used, implying that the reader is the agent being predicted. I’ve never seen them limited to trivial agents that a human-level intelligence can predict.
It’s an open question whether it’s even possible for an Omega to exist who can predict a truly complex intelligence, one which is itself modeling the universe, including itself.
Pascal’s mugging never was very compelling to me (unlike counterfactual weird-causality problems, which are interesting). It seems so trivial to assign a probability of follow-through that scales downward with the offered payoff. If I expect the mugger to have a chance of paying that’s less than 1/n, for any n they name, I don’t take the bet. If I bothered, I’d probably figure out where the logarithm goes that makes it not quite linear (it starts pretty small, and goes down to a limit of 0, staying well below 1/n at all times), but it’s not worth the bother.
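A minimal sketch of that intuition, with a made-up follow-through curve (the 1/n² shape is purely illustrative, not anything from the thought experiment):

```python
# Illustrative sketch of "follow-through probability falls faster than the payoff grows".
# The curve p(n) = 1/n**2 is an arbitrary assumption, chosen only because it stays
# below 1/n for n > 1; any curve with that property gives the same verdict.

def follow_through_probability(n: float) -> float:
    """Assumed chance the mugger actually pays out an offer of size n."""
    return 1.0 / n ** 2

def expected_value(n: float, cost: float = 1.0) -> float:
    """EV of taking the bet: payoff times probability of payment, minus the mugger's asking price."""
    return n * follow_through_probability(n) - cost

for offer in (10, 1_000, 10**6, 10**9):
    print(offer, expected_value(offer))
# The EV shrinks toward -cost as the offer grows, so no named payoff makes the bet attractive.
```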
Point 2 is very related to point 3. Nuclear plants have catastrophic failure modes. They’re not likely, and are mitigable to a great degree, but if something goes wrong ENOUGH, it’s very very bad. Which leads to a reasonable preference that they be built away from where I want to be. And a reasonable perception that if they build one near you, it’s because you’re low-status and expendable (or at least don’t have the power to get the project moved away).
For normal use, chess clocks (time per game, allocated however the player likes) are probably the best answer. In poker, it’s unofficial, but common for players to get antsy if someone’s taking a long time, and for players to call “time, please” verbally when they’re facing a thoughtful decision. Very occasionally, the floor personnel will actually put a clock on the player and give them 2 minutes or they fold.
For this exercise, part of the purpose is teaching the players what it’s like to have shorter or longer periods for thinking, outside of their control. This develops the habits of acting when it’s obvious and thinking/planning when necessary. So the lack of control is intentional.
I’d be a bad god. I’d probably encode some mix of kindness and responsibility, and likely a more static enjoyment of what is than a striving for change. I presume they’d never get out of hunter/gatherer mode.
And now I’m wondering exactly what my limits are. I don’t think “a theory of morality” is something that stands alone in people. I translated that in my mind to a set of personality traits and behaviors that would automatically be enforced somehow and not evolve over time (in individuals or across generations). But if you mean more constrained cognitive moral theories that most of ’em don’t actually follow very well, I’m not sure what I’d choose. Note that none of this applies to real humans, nor perhaps any agents in this universe.
Well, the Shapley value still meets your criteria, since those are only removals of constraints, not additions of new ones. If you don’t care about linearity/additivity, what DO you care about that the Shapley calculations don’t include?
Separately, can you explain why you don’t care about linearity? Where do the unaccounted-for differences come from when two sub-games fail to add up to the total game?
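For concreteness, here’s a small sketch of the additivity property in question; the three-player games below are made-up examples, not anything from your post:

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value: average marginal contribution over all orderings of the players."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition.add(p)
    return {p: t / len(orderings) for p, t in totals.items()}

# Two made-up sub-games over the same players.
players = ("A", "B", "C")
v1 = lambda s: 10 if {"A", "B"} <= s else 0   # A and B together create 10
v2 = lambda s: 6 * len(s)                     # each member adds 6
v_sum = lambda s: v1(s) + v2(s)               # the combined game

# Linearity/additivity: the Shapley value of the combined game equals, player by
# player, the sum of the Shapley values of the two sub-games.
print(shapley(players, v1))
print(shapley(players, v2))
print(shapley(players, v_sum))
```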
Over the course of the universe, the best decision theory is a consensus/multiple-evaluation theory. Evaluate which part of the universe you’re in and how likely it is that you’re in a causally-unusual scenario, and use the DT which gives the best outcome.
How a predictor works when your meta-DT gives different answers based on whether you’ve been predicted, I don’t know. Like a lot of adversarial(-ish) situations, the side with the most predictive power wins.
C,C is second-best: you prefer D,C, and Nash says D,D is all you should expect. C,C is definitely better than C,D or D,D, so in the special case of symmetrical decisions, it’s winning. It bugs me as much as you that this part gets glossed over so often.
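Spelled out with the conventional textbook payoffs (the specific numbers are the standard ones, chosen only to make the ordering explicit):

```python
# Conventional prisoner's dilemma payoffs: (my move, their move) -> (my payoff, their payoff).
payoffs = {
    ("C", "C"): (3, 3),  # mutual cooperation: second-best for me
    ("C", "D"): (0, 5),  # I cooperate, they defect: my worst outcome
    ("D", "C"): (5, 0),  # I defect, they cooperate: my best outcome
    ("D", "D"): (1, 1),  # mutual defection: the Nash equilibrium
}

# For me: D,C (5) > C,C (3) > D,D (1) > C,D (0), which is exactly the ordering above.
mine = {moves: both[0] for moves, both in payoffs.items()}
print(sorted(mine, key=mine.get, reverse=True))
```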
Counterfactual Mugging is a win to pay off, in a universe where that sort of thing happens. You really do want to be correctly predicted to pay off, and enjoy the $10K in those cases where the coin goes your way.
I suspect you’re reacting to the actual beliefs (disbelief in your example), rather than the word usage. In common parlance, “skeptical” means “assign low probability”, and that usage is completely normal and understandable.
The ability to dismiss expertise you don’t like is built into humans, not a feature of the word “skeptical”. You could easily replace “I am skeptical” with “I don’t believe” or “I don’t think it’s likely” or just “it’s not really true”.
This is an important point. Containers (VMs, governance panels, other methods of limiting effect on “the world”) are very different from simulations (where the perception IS “the world”).
It’s very hard to imagine a training method or utility-function-generator which results in agents that care more about hypothetical outside-of-perceivable-reality than about the feedback loops which created them. You can imagine agents with this kind of utility function (care about the “outer” reality only, without actual knowledge or evidence that it exists or how many layers there are), but they’re probably hopelessly incoherent.
Conditional utility may be sane—“maximize paperclips in the outermost reality I can perceive and/or influence” is sensible, but doesn’t answer the question of how much to create paperclips now, vs creating paperclip-friendly conditions over a long time period vs looking to discover outer realities and influence them to prefer more paperclips.
I don’t understand why this example gives different answers. There’s no causality difference, only a knowledge difference. I think we’d need some numbers about Paul’s pre-decision estimate that he’s a psychopath, and the probability that someone who decides to press the button is a psychopath. That is, his prior and posterior beliefs about his own psychopathy.
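A toy version of the bookkeeping I mean, with entirely made-up numbers just to show what the prior and posterior would look like:

```python
# Hypothetical figures for the Bayesian update described above; none of these numbers
# come from the original problem, they only illustrate prior vs. posterior.
prior_psychopath = 0.01           # Paul's pre-decision estimate that he is a psychopath
p_press_given_psychopath = 0.95   # assumed chance a psychopath decides to press
p_press_given_not = 0.10          # assumed chance a non-psychopath decides to press

p_press = (prior_psychopath * p_press_given_psychopath
           + (1 - prior_psychopath) * p_press_given_not)

# Posterior probability Paul is a psychopath, given that he decides to press.
posterior = prior_psychopath * p_press_given_psychopath / p_press
print(round(posterior, 3))  # ~0.088 with these made-up inputs
```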
I’m not much of a CDT apologist—it seems obviously wrong in so many ways. But I’m surprised anew that CDT conflicts with conservation of expected evidence (if some piece of data is expected, it’s already part of your prior and shouldn’t cause an update).
Famously, the two hardest problems in computer science are cache invalidation and picking names for things.
I’m curious what’s actually doing the caching here. Most modern servers and CDNs are fairly sophisticated about what components of the URL go into the cache keys, and know that tracking IDs should be ignored.
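A minimal sketch of the kind of normalization I mean; the parameter list is just the usual tracking IDs, and the function is illustrative rather than any particular CDN’s API:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative cache-key normalization: strip common tracking parameters so that
# otherwise-identical URLs share one cache entry. The parameter set is an example,
# not any specific server's configuration.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "utm_term",
                   "utm_content", "fbclid", "gclid"}

def cache_key(url: str) -> str:
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(sorted(kept)), ""))

# Both of these map to the same key, so they hit the same cached object.
print(cache_key("https://example.com/post?id=42&utm_source=twitter"))
print(cache_key("https://example.com/post?id=42"))
```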
For a lot of posts, the value is pretty evenly distributed between the post and the comments. For frontpage-worthy ones, it’s probably weighted more to posts, granted. I fully agree that “reign of terror” is not sufficient reason to keep something off frontpage.
I was reacting more to the very detailed rules that don’t (to me) match my intuitions of good commenting on LW, and the declaration of perma-bans with fairly small provocation. A lot will depend on implementation—how many comments lc allows, and how many commenters get banned.
Mostly, I really hope LW doesn’t become a publishing medium rather than a discussion space.
I have no clue whether any of my previous comments on your posts will qualify me for perma-ban, but if so, please ban me now, to save the trouble of future annoyance, since I have no intention of changing anything. I am generally respectful, but I don’t expect to fully understand these rules, let alone follow them.
I have no authority over this, but I’d hope the mods choose not to frontpage anything that has a particularly odd and restrictive comment policy, or a surprisingly-large ban list.
I’m not sure I understand the question. “normal” succession is succession, not branching and independent experience. So our intuitions about identity are applicable, and the cessation of the succession is death.
With branching, it depends on what it is that defines “mortality” to you. If you die, but another you lives, does that count? I say that each other you is a different agent, so that’s not immortality. I also don’t think cloning or in-universe brain copies are simple immortality, because they’re different people (even if they have the same history and some of the same memories).
“subjective” is itself subjective. There are entities experiencing all the things. If you believe that other beings can have qualia, these virtual-copies-of-you have qualia. Whether they’re the “same” entity is dependent on your ideas about identity.
I don’t know anyone who claims that it’ll be a linear or unified experience. Without continuity and communication across instances, I don’t think of it as personal immortality in the simple sense, any more than I think about children or great works as immortality.
Woody Allen had it right: “I don’t want to achieve immortality through my work; I want to achieve immortality through not dying. I don’t want to live on in the hearts of my countrymen; I want to live on in my apartment.”
Uncaring and harmful life extension, yes. Not actually adversarial, where the purpose is the suffering. Still, horrific, even if I don’t take it as evidence that shifts my likelihood of AGI torture scenarios.
I don’t actually know the stats, but this also seems less common than it used to be. When I was younger, I knew a few old people in care facilities that I couldn’t imagine would be their choice, but in the last 10 years, I’ve had more relatives and acquaintances die, and very few of them were kept going beyond what I’d expect was their preference. I’ve participated in a few very direct discussions, and in all cases, the expressed wishes were honored (once after a bit of heated debate, including the sentiment “I’m not sure I can allow that”, but in the end it was allowed).
Well, don’t rid yourself of beliefs that actually work. You’re arguing against having beliefs for the wrong reasons, and against having beliefs that won’t update when circumstances or knowledge changes. I fully agree with both of those motivations, but the advice should be “find better grounding for your beliefs”, rather than “unconditionally dispose of those beliefs.”
It’s possible that Bitcoin will be the only survivor of the massive coin and scam expansion that’s inevitable as a new thing becomes popular. The root belief that starting new coins hurts everyone except the founders/scammers who get in early could be correct. And even people who believe this for the wrong reasons (they’ve heard it so often that they repeat it without thinking through the mechanisms) are correct in their behavior. Those people can improve their outcomes with some research and modeling that adds nuance and exceptions to that creed, but not by throwing it out entirely and putting lots of money into every new hotness that comes along.
I give those scenarios a much smaller probability weight than simple extinction or irrelevance, small enough that the expected utility contribution, while negative, is not much of a factor compared to the big 0 of extinction (IMO, biggest probability weight) and the big positive of galactic flourishing (low probability, but not as low as the torture/hell outcomes).
The reverse-causality scenarios (cf. Roko’s Basilisk) rely on some pretty tenuous commitment and repeatability assumptions—I think it’s VERY unlikely that resurrection for torture purposes is worth the resources to any goal-driven agent.
Even today, there are very few cases of an individual truly unambiguously wanting death and not being able to achieve it (perhaps not as trivially, quickly, or free-of-consequence-for-survivors as hoped, but not fully prevented). It’s pretty much horror- or action-movie territory where adversarial life extension happens.
Though I should be clear—I would call it positive if today’s environment were steady-state. If you feel otherwise, you may have different evaluations of possible outcomes (including whether “flourishing” that includes struggle and individual unhappiness is net positive).
This is a great question. The problem with passive financial investing (where you direct money, but are not involved in any operational decisions) is that it’s inherently a bet FOR civilizational adequacy—it only takes a partial collapse (or corruption or not-collapse-but-stupid-equilibrium-change) to keep you from getting paid.
I know nothing about the coal business, but I can imagine they’re already so leveraged that a slight drop in prices will kill them. I can imagine regulatory risks that shut them down (or nationalize their profits by environmental or tax mechanisms). I can imagine that many of the public companies are operators rather than owners of the underlying mineral rights, and most of the long-term value is locked up in private companies and trusts.
It’s also possible that this is a market failure, and ESG and other irrational capital allocation has made this a great opportunity. I don’t know how I’d get to the truth of it (or more formally, how I’d update from my EMH-on-average-but-highly-variant-per-company prior for this industry), without a LOT of effort and some amount of insider knowledge.