I often go the other way when discussing this topic—humans are as natural as anything else. Parking lots are natural things, arranged by natural animals (humans). Butylated Hydroxytoluene is absolutely natural—there’s no way to make the underlying atoms without nature, and the arrangement of them follows every natural law.
Everything real is natural—nature is simply “what is”.
Of course, I like this because I recognize it’s a discussion about words, with arbitrary meanings that each of us gets to use however we want, and I enjoy pointing that out more than I enjoy trying to get people to conform to my preferred definitions.
A lot depends on the specifics of the scenario (for both AI and human-upload cases). I don’t know anyone who thinks that there’s anything important (for survival) that humans do which can’t theoretically be done by an electro-mechanical device.
So in theory, upload/AGI would be about as self-sustaining as biological entities (which is to say: rather fragile, and without enough track record at scales that stress the ecosystem to know whether we are).
Presumably, the robots are a little more rational than humans in terms of how they maintain and replenish their resources, and how they ration themselves (aka: each other) to stay within bounds of current sustainability. So, even more unknown, but plausibly more sustainable than biological humans.
Wow. We have extremely different beliefs on this front. IMO, almost nothing is retroactively funded, and even high-status prizes are a tiny percentage of anything.
Any workplace will tell you that of course they want to reward good work even after the fact, but no workplace that I know of DOES retroactively reward good work if the employee is no longer employed there. Most of the rhetoric about it is just signaling, in pursuit of retention and future productivity.
“people will have the wrong decision-theory in the future sounds to me about as mistaken as saying that lots of people will disbelieve the theory of evolution in the future”
It seems likely that they’ll accept evolution, and still not feel constrained by it, and pursue more directed and efficient anti-entropy measures. It also seems extremely likely that they’ll use a decision theory that actually works to place them in the universe they want, which may or may not include compassion for imaginary or past or otherwise causally-unreachable things.
(edit to clarify the decision-theory comment)
“saying that people will have the wrong decision-theory in the future sounds to me about as mistaken as saying that lots of people will disbelieve the theory of evolution in the future.”
It’s not clear that any likely decision theory, let alone a non-wrong one, requires fully-acausal beliefs or actions. Many of them do include a more complete causality diagram than CDT does, and many acknowledge that the point-of-decision is often quite different from the apparent one. But they’re all basically consequentialist, in that they believe that actions can influence future states of the universe.
There are a lot of fully-unknown possibilities. For me, I generalize most “today’s fundamentals don’t apply” scenarios into “my current actions won’t have predictable/optimizable impact beyond the discontinuity”, so I don’t think about specifics or optimization within them—I do think a bit about how to make them less likely or more tolerable, but I can’t really quantify that.
Which leaves the cases where the fundamentals DO apply, and those are ONLY short- and medium-term optimizations. Nothing I plan for in 100 years is going to happen, so I want to take care of myself and my family in the coming decades in cases where things don’t collapse or go too weird. Current income > expenses, and a reasonable investment strategy (10- and 30-year target-date funds, unless you know better, which you don’t) cover the most probability-weight for the next few decades, contingent on any current systems continuing that long.
Your suggestion of social capital (not sure political capital is all that durable, but maybe?) is a very good one as well—having friends, especially friends in different situations (another country, perhaps) is extremely good. First, it’s fun and rewarding immediately. Second, it’s a source of illegible support if things go crazy, but not extinction-crazy.
Hmm. I’m not sure what you’re disgusted by. If the complaint is that humans (all/most?) aren’t pure, and have motives that seem gross and base, you’re probably right, but also probably not well-served by being disgusted—finding beauty in imperfection or just curiosity about what makes us tick might be better. That said, I’m not one to yuck your … yuck.
If the complaint is that utility theory is itself gross because it allows it, I don’t agree. It’s still a useful model; it’s just that reality clashes with your preferences.
Why are you adding the word “primitive” to your descriptions? Utility maximization should encompass ALL desires and drives, including the beautiful and holy. You can even include altruism—if you seek others’ satisfaction (or at least expressions thereof—you don’t have direct access to their experiences), that’s perfectly valid.
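As a toy illustration (entirely my own sketch, with made-up names and weights, not anything from the post):

```python
# Toy illustration: a utility function with an altruism term over others'
# *expressed* satisfaction, since reports are all we can actually observe.

def utility(own_satisfaction: float,
            others_reported_satisfaction: list[float],
            altruism_weight: float = 0.5) -> float:
    """Own term plus a weighted sum of others' reported satisfaction."""
    return own_satisfaction + altruism_weight * sum(others_reported_satisfaction)

# An agent maximizing this is still an ordinary utility maximizer, even
# though part of what it "wants" is other people's (expressed) well-being.
print(utility(1.0, [0.8, 0.3]))   # -> 1.55
```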
Nice that some newer devices make it even easier (fall detection on smartwatches, for instance). Remember, it’s actually pretty easy, though, if you just put it to music: https://www.youtube.com/watch?v=HWc3WY3fuZU
I’d expect that most specifics about those topics, and their relative priority to other things you’re currently seeking and making tradeoffs against, have changed and will change significantly.
The amount of generalization that would make a value unchanging also makes it useless for prediction or decision-making.
Having worked on large-scale non-safety-critical systems (think massive enterprise and infrastructure-support systems at large cloud providers) for a long time, one of the biggest lessons I’ve learned is the shape of the cost-to-reliability curve.
After about 3 9s, each increment of an -ity (availability, data durability, security, etc.) is far more expensive than the improvement is worth (and the cost curve is already exponential). This cost is not just financial: it’s a cost in features (don’t add stuff that isn’t simple enough to prove correct), in agility (you can’t add things quickly; everything requires more specification and implementation proof than you think), and in operations (you have to watch it more closely, react to non-harmful anomalies, etc.).
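For a sense of scale of the nines themselves, the downtime arithmetic looks roughly like this (the downtime math is exact; any cost figures you’d attach to each step are illustrative, not real numbers):

```python
# Each extra nine cuts the allowed annual downtime by 10x, which is part
# of why the marginal cost of the next nine climbs so steeply.

MINUTES_PER_YEAR = 365 * 24 * 60

for nines in range(2, 6):
    availability = 1 - 10 ** (-nines)      # 0.99, 0.999, 0.9999, 0.99999
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability:.5f}): "
          f"~{downtime_minutes:,.1f} minutes/year of allowed downtime")
```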
I suspect Moloch will prevent any serious slowdown-for-safety desires. Anyone truly serious about being safe will get outcompeted and be made irrelevant. To that analogy, once the knowledge existed to create the bomb, it was inevitable that SOMEONE would risk igniting the atmosphere, so it probably should be us, now, rather than delaying 5-10 years so it can be Russia (or now, China).
Hmm. I wonder what it’d take to create a no-UI, API-only, read-only mirror of LW data. For most uses, a delay of a few minutes would cause no harm, and it could be scaled for this use independently of the rest of the site. If the load is significant, it could be subscription-only—require auth and rate-limit based on a monthly fee (small, one hopes, to pay for the storage, bandwidth, and API compute).
It would need a first-sync (and resync/anti-entropy) mechanism, but could just poll allRecentComments to stay mostly up-to-date, turning this into a single caller to the LW systems, rather than multiple.
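A minimal sketch of what the polling side might look like. Only the allRecentComments name comes from the idea above; the URL, parameters, response shape, and storage here are placeholders I’m assuming, not the actual LW API:

```python
# Rough sketch of the polling loop for a read-only mirror.
import time
import requests

MIRROR_DB: dict[str, dict] = {}   # stand-in for whatever store the mirror uses
POLL_INTERVAL_SECONDS = 300       # a few minutes of staleness is fine for most uses

def poll_recent_comments(cursor: str | None) -> str | None:
    """Fetch comments newer than `cursor` and upsert them into the mirror."""
    resp = requests.get(
        "https://lw-mirror.example/allRecentComments",   # placeholder URL
        params={"after": cursor} if cursor else {},
        timeout=30,
    )
    resp.raise_for_status()
    comments = resp.json()            # assumed: a list of comment objects with an "id"
    for c in comments:
        MIRROR_DB[c["id"]] = c        # idempotent upsert, so re-polling is harmless
    return comments[-1]["id"] if comments else cursor

if __name__ == "__main__":
    cursor = None   # a real mirror also needs a separate first-sync/backfill pass
    while True:
        cursor = poll_recent_comments(cursor)
        time.sleep(POLL_INTERVAL_SECONDS)
```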
“Some values don’t change.” Citation needed. I can’t think of anything that could be reasonably classified as a “value” that is unchanging in humans. And I don’t know of any other entities to which “values” can yet be applied.
“I’m a goal-seeking system.” Even less clear. Actually, I don’t know you, so maybe it’s true for you. It’s absolutely not true for me. I’m an illegible, variable, meaning-seeking (along with other, less socially-acceptable-to-admit -seeking) thing.
In real-world entities, the model of terminal values and goal-seeking is highly suspect.
No. If you replace “just” with “partially model-able as”, then yes.
There are lots of things we could do, but don’t. Generally, the risk/cost is non-zero, even if small, and the recognizable value (that which can be captured by, or benefits, the decision-maker) is less than that.
I’d probably pay a little bit to see this in the skies while I’m safely on the ground, and even to be in one after the first 10,000 have gone by. But I wouldn’t pay enough to make up for the lawsuits and loss of revenue from people who don’t like the idea.
reasons I downvoted:
“addicted” is a sensationalist headline, and is not addressed anywhere in the post. Just misleading.
“gambling” is a value-laden word, and you’re switching between “risk-contingent-behavior” and “risk-seeking-for-its-own-sake”.
no clear thesis for me to engage with, either to update my model or to disagree with. Just a bunch of words that may be technically correct, but don’t add up to anything.
Fun exploration, though I don’t believe the underlying assumptions at all. The biggest disconnect I see is the belief that the current mean individual wealth can be made to retain that fraction of total wealth over any significant time period, including massive changes in number of wealth-holders, and in what “wealth” can even be measured in.
There is no long-term passive wealth mechanism. It always requires quite a bit of attention and management, and then gets transferred to the managers rather than the nominal owners. Or, often, to the revolutionaries or vendors who are able to capture it.
This is problematic, EVEN IF the concept of “ownership” can be applied to galaxies and human-comprehensible owning entities.
I don’t think markets are likely to correlate very strongly to this. Whether prediction markets or a stock/commodity that has a bit of correlation to what you care about, the fundamental problem is “if the economy changes by enough, the units of measure for the market (money!) change”. Which means that payoff risk overwhelms prediction risk. You can be spot-on correct about timelines, and STILL not get paid. So why participate in that prediction at all?
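A toy example with made-up numbers:

```python
# "Right about the timeline, still not paid": a correct bet that pays out
# in money whose purchasing power has collapsed delivers almost nothing
# in real terms.
stake = 100                        # dollars bet today
odds_multiplier = 5                # payout multiple if the prediction resolves your way
nominal_payout = stake * odds_multiplier            # $500 nominal

purchasing_power_remaining = 0.05  # the economy changed enough that $1 buys 5% of what it did
real_payout = nominal_payout * purchasing_power_remaining

print(f"nominal: ${nominal_payout}, real: ${real_payout:.0f} in today's terms")
# nominal: $500, real: $25 in today's terms -> payoff risk swamps prediction skill
```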
Yup, things that are mostly out of your control and aren’t truly immediate are easy to forget about in your daily life. I’d argue that this compartmentalization is actually a really useful skill, to prevent worry and depression that doesn’t lead to actions which improve your life satisfaction.
The ability/habit of thinking about distant/big topics in a time-boxed way, considering what, if anything, to do about them, and then going back to your normal activities is, for most people, a very effective strategy.
Interesting take, but I’m having trouble accepting it, as I don’t think “reality”, “mathematics”, and “theorem” as used here match the common definitions. If you don’t like the results of a theorem, yes, examine the axioms, and yes, identify where you’re misinterpreting the results. But you still have to believe the underlying syllogism “if X and Y, then Z” that the theorem proves. You can only notice that Z is suspicious, so you need to be really sure about X and Y.
I mostly agree with your resistance steps, but recognize that this isn’t resisting the math, it’s resisting humans who are trying to bamboozle you by incorrectly presenting the math.
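In symbols, this is just the contrapositive: if you’re confident Z is false, the theorem itself only tells you that at least one of X, Y must be false.

```latex
(X \land Y) \Rightarrow Z
\qquad\Longleftrightarrow\qquad
\lnot Z \Rightarrow (\lnot X \lor \lnot Y)
```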
This doesn’t solve the problem of motivation to lie about (or change) one’s utility function to move the group equilibrium, does it?
You’re speculating on a topic on which we have no way to collect evidence. We can’t measure qualia or experience—we have only self-reported information about identity, and none of it includes copying.
It’s unclear whether you think there is an instantaneous experience, or if all experience is over time (reminder: we have no measurements that would provide evidence here). It seems obvious to me that R1 and R2 at that point in time share all memories and experiences-in-progress up to and including the point of copy. And for some number of milliseconds afterward, there won’t be time for new inputs or environmental changes to have any impact, so they remain identical.
Of course, they begin to diverge as the different environments come into play. However, they diverge about as much from each other as each does from their shared past. You’re not the same person you were 10 minutes ago, and they’re not the same person as each other.