Your argument is sound. To me, it's curious that development economists almost never mention the temperature × productivity relation—except for J. Sachs (who mixes it with other geographical factors) and Nordhaus (who got a Nobel Prize for reasoning about it).
Ramiro P.
My opinion (“epistemic status”): dunno.
I remember an issue of The Economist from 2013 about it. There's some debate among economists about the absence of productivity improvements despite the buzz over AI and ICT; Erik Brynjolfsson argues that it takes some time for pervasive, economy-wide technologies to have an impact (e.g., electricity). However, the main point of Thiel & Weinstein is that we haven't found new breakthroughs that are easy to profit from.
But it reminds me of the setting of Cixin Liu's The Dark Forest, where:
humankind stalled because Physics breakthroughs were prevented by the Sophon Barrier—even so, they built a utopian society thanks to cheap energy from fusion power.
I am wondering about the link between the notion of distance (in the first post), extremes on a utility scale, and the big deal. That's me in 15′.
I thought he was being ambiguous on purpose, so as to maximize donations.
So far, LW is still online. That means one of the following:
a) nobody used their launch codes, and you can trust 125 nice & smart individuals not to take unilateralist action—so we can avoid armageddon if we just have coordinated communities with the right people;
b) nobody used their launch codes because these 125 are very like-minded people (selection bias): there's no immediate incentive to blow it up (except for some offers of counterfactual donations), but some incentive to avoid it (honor!… hope? proving EDT, UDT...?). This doesn't model the problem of MAD, and it surely doesn't model Petrov's dilemma—he went against express orders to minimize the chance of nuclear war, thus risking his career (and possibly his life);
c) it's a hoax. That's what I would do; I wouldn't risk a day of LW just to prove our honor (sorry, I grew up in a tough neighborhood and have trouble trusting others).
My point is: I think (b) and (c) are way more likely than (a), so if I had the launch codes, I'd use them and take the risk of ostracism. I think it would yield higher expected utility (rough sketch below); as I said, I wouldn't risk a day of LW to prove our honor, but I would do it to prove you shouldn't play Petrov lightly.
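To make that comparison concrete, here's a minimal, purely illustrative sketch. All the probabilities and utilities are numbers I just made up; only the ordering of the hypotheses matters for my argument.

```python
# Illustrative expected-utility comparison for pressing vs. not pressing.
# All numbers are hypothetical assumptions, chosen only to show the structure.

p_a = 0.1   # (a) genuine coordination test: pressing destroys real trust value
p_b = 0.5   # (b) like-minded selection / weak incentives: little is proven either way
p_c = 0.4   # (c) hoax: pressing costs nothing, not pressing proves nothing

# Utility of each action under each hypothesis (arbitrary scale):
# pressing loses a day of LW if (a) is true, but teaches the lesson
# "don't play Petrov lightly" under (b) and (c).
u_press     = {"a": -10, "b": 2, "c": 2}
u_not_press = {"a": 1,   "b": 0, "c": 0}

eu_press = p_a * u_press["a"] + p_b * u_press["b"] + p_c * u_press["c"]
eu_not   = p_a * u_not_press["a"] + p_b * u_not_press["b"] + p_c * u_not_press["c"]

print(f"EU(press) = {eu_press:.2f}, EU(don't press) = {eu_not:.2f}")
# With these (debatable) numbers, pressing comes out ahead; the conclusion
# flips if you give (a) much more probability or much more weight.
```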
Please, correct me if I’m wrong.
P.S.: (d) this allows anyone to claim to have launch codes and mug others into counterfactual donations—which is brilliant.
I find LeCun’s insistence on the analogy with legal systems particularly interesting, because it reminds me more of Russell’s proposal of “uncertain objectives” than of the “maximize the objective function” paradigm. At least in liberal societies, we don’t have a definite set of principles and values that people would agree to follow—instead, we aim at principles that guarantee an environment where any reasonable person can reasonably optimize for something like their own comprehensive doctrine.
However, the remarkable disanalogy is that, even if social practices change and clever agents adapt faster than law can evolve (as Goodhart remarks), the gap is not as wide as it is with the pace of technological change.
I’ve seen some things about how dirty cellphones are—and how they can worsen interpersonal disease transmission. I wonder if there’s any advice on how to keep them clean (and how useful that would be).
We should take into account the welfare of others, too. Besides protecting me from disease, washing my hands prevents me from transmitting it to someone else. It’s pretty much analogous to vaccines.
If you grab your mobile with your dirty hands, then wash them, and then use your device again, you’ve just recontaminated your hands; and if you never clean the phone’s surface (how do we do that effectively?), it’ll accumulate pathogens. This seems to be a serious problem in hospitals.
(I’m not sure if I follow your reasoning; it apparently implies that, if you never shake hands with someone else, you never have to worry about washing them. Of course, it does reduce the potential for transmission.)
Does anyone have any idea / info on what proportion of infected cases are getting COVID-19 inside hospitals? This seems to have been a real issue for previous coronaviruses.
I’d say there might be a stark difference between countries / regions in this area. Italian health workers seem to have taken a heavy blow. Also, 79 deaths in Brazil (total: 200) came from a single hospital chain / health insurer, which focuses on aging customers (so, yeah, maybe it’s just selection bias?).
(Epistemic status: low, but I didn’t find any research on that after 30 min, so maybe the hypothesis deserves a bit more attention?)
I’ve seen news about this study, but no preprint. It’d be really helpful if we could get it.
Kialo is totally underrated.
Strict liability in tort law seems to be a pretty obvious example, doesn’t it? I mean, I guess a lot of corporate law can be seen as “vicariously holding a group accountable”.
LW is quoted (in a kind of flattering way) in Simon DeDeo’s awesome piece in the latest Nautil.us issue. For spoiler lovers:
[...] Rationality is my ticket out. The only reason I can trust you is that you seem rational enough to talk to. But now you’re telling me that rationality is just a layer on top of the System—it’s just as irrational as the people I’m trying to escape. I don’t know which is worse: being duped by someone else’s priors, or being a biological machine.
Teacher: Don’t go too far. You’re a smart kid—you can iterate faster than most. You can match patterns better. Evolution set you up well. You’ll get better at predicting the consequences of your actions, and better at adapting your environment to your will. Rationality is systematized winning.
Ian: It’s not winning I’m worried about. It’s my mind. Maybe it’s silly, maybe it’s a fetish, but I want to know the truth. It’s the principle of the thing. Wanting to know the truth got me this far, but now the only option you’ve given me is believing in something I can’t see. If I know it at all, it can’t be through rational, scientific calculation. There’s some kind of extra-rational process I have to engage in—but what’s beyond the edge of reason?
Teacher: Many things. Dreams, intuition, transcendence, love, ascending the ladder, repetition and the leap of faith, philosophy itself …
Ian: … delusion, fairy tales, fascism!
Teacher: Childhood’s end.
My concern, which I interpret as being TAG’s point (in different words), is that your example of water vs. XYZ is immediately traceable (at least for anyone who knows the philosophical discussion) to Putnam’s Twin Earth thought experiment. The way you express your point suggests you disregard that experiment—which is surprising for someone acquainted with it, because Putnam (at least when he wrote the paper) would likely agree that a substance with the same basic chemical properties as water would be water. He actually aims to provide an argument for semantic externalism—i.e., the idea that the meaning of “water” (or of other natural-kind terms) is H2O, its chemical nature, and not the apparent properties commonly used as criteria to identify it (that it’s a tasteless liquid...). He is thus pushing against a conventionalist view of semantics (and of philosophy of language); the point is not about physics or ontology.
A tentative dialogue with a Friendly-boxed-super-AGI on brain uploads
Possibly. I said this AGI is “safer and more aligned”, implying that it is a matter of degree – while I think most people would regard these properties as discrete: either you are aligned or you are not. But then I can just replace it with “more likely to be regarded as safe, friendly and aligned”, and the argument remains the same. Moreover, my standard of comparison was Celest-IA, who convinces people to do brain uploading by creating a “race to the bottom” scenario (i.e., as more and more people move into the simulation, human extinction becomes more likely – until there’s nobody left to object to turning the Solar System into computronium), and adapts their simulated minds so they enjoy being ponies; my AGI is way “weaker” than that.
I still think it’s not inappropriate to call my AGI “Friendly”, since its goals are defined by a consistent social welfare function; and it’s at least tempting to call it “safe”, as it is law-abiding and does obey explicit commands. Plus, it strictly maximizes the utility of the agents it interacts with according to their own utility functions, inferred from their brain simulations—i.e., it doesn’t even require general interpersonal comparisons of utility. I admit I did add a touch of a perverse sense of humor (e.g., the story of the neighbors), but that’s pretty much irrelevant to the overall argument.
But I guess arguing over semantics is beside the point, right? I was targeting people who think one can “solve alignment” without “solving value”. Thus, I concede that, after reading the story, you and I can agree that the AGI is not aligned—and so could B, in hindsight; but it’s not clear to me how this AGI could have been aligned in the first place. I believe the interesting discussion to have here is why it ends up displaying unaligned behaviour.
I suspect the problem is that B has (temporally and modally) inconsistent preferences, such that, after the brain upload, the AI can consistently disregard the desire of original-B-in-the-present (even though it still obeys original-B’s explicit commands), because it conflicts with simulated-B’s preferences (which weigh more, since sim-B can produce more utility with fewer resources) and past-B’s preferences (who freely opted for the brain upload). As I mentioned above, one way to deflect my critique is to bite the bullet: as a friend of mine replied, one can just say that they would not want to survive in the real world after a brain upload – they can consistently say that it’d be a waste of resources. Another way to avoid the specific scenario in my story would be by avoiding brain simulation, or by not regarding simulations as equivalent to oneself, or, finally, by somehow becoming robust against evidential blackmail.
I don’t think that is an original point, and I now see I was sort of inspired by things I read from debates on coherent extrapolated volition long ago. But I think people still underestimate the idea that value is a hard problem: no one has a complete and consistent system of preferences and beliefs (except the Stoic Sages, who “are more rare than the Phoenix”), and it’s hard to see how we could extrapolate from the way we usually cope with that (e.g., through social norms and satisficing behavior) to AI alignment—as superintelligences can do way worse than Dutch books.
Thanks, I believe you are right. I really regret how much time and resources are wasted arguing over the extension / reference of a word.
I’d like to remark, though, that I was just trying to explain what I see as problematic in FiO. I wouldn’t only say that its conclusion is suboptimal (I believe it is bad, and many people would agree); I also think that, given what Celestia can do, they got lucky (though “lucky” is not the adequate word when it comes to narratives) that it didn’t end up in worse ways.
As I point out in a reply to shminux, I think it’s hard to see how an AI can maximize B’s preferences in an aligned way if B’s preferences and beliefs are inconsistent (temporally or modally). If B actually regards sim-B as another self, then its sacrifice is required; I believe that people who bite this bullet will tend to agree that FiO ends in a good way, even though they dislike “the process”.
Well, I’m kinda sure climbers wouldn’t like to brain-upload to a utopia simulation, so maybe there’s some connection between this and AI alignment.
(Curiously, I just read today in Will MacAskill’s WWOTF that he used to climb buildings in Glasgow in his teens, until he was almost cut open by glass...)
There’s a cool name for this donor’s action: blindspotting (yeah, it’s spelled like that), after a Roy Sorensen book from 1988.