OP is worried about some comments in particular; maybe for that specific subset it's warranted. But yes, it could become a nuisance if too common.
Intriguing conjecture; sounds partly plausible
A) Could it not nevertheless be that legal personhood remains limited to those incumbent legal persons officially “owning”/representing the digital minds?
B) One nuance: Reading “legal personhood” I interpret it in two ways:
1. The way I read you, you most explicitly mean: the right to have contracts enforced etc. Yes, we might naturally want to extend that (well, depends on A)).
2. Rights we attribute to digital minds essentially because we’d see them/their state of mind as intrinsically valuable. Here, I’d think this makes sense iff we put enough probability on them being sentient.
Some sympathy with the idea, but on net, reluctant to ask for additional voting complexity given the two levers we already have.
Maybe a terse prelude, “Mind: throwaway short-form fwiw; don’t read if you like only polished things” (or whatever smarter thing one can put in that vein), could help a bit to manage expectations and limit downvoting of someone who rightly chipped in a quick thought?
In geopolitics just as well as in all types of world modelling: A ton of things are trivial to reliably ‘model’ or infer (even for an uneducated dummy), and a ton are hopelessly too complex and erratic to plausibly say anything intelligible about even with the best ‘modelling’ or knowledge/intuition. Then there’s a large mass of questions in between these two extremes where better knowledge or epistemic approaches/modelling may help you to moderately or maybe sometimes even importantly improve your predictions.
One can debate how large the subpart of that in-between section is where significant extra sophistication is currently really helpful. But the claim that geopolitical predictions per se are impossible feels almost trivially wrong.
Interesting how you introduce a sort of ‘let’s not just be about semantics’ while in the end, the disagreement boils down to essentially exactly that.
I think you’re completely right with what you point out, but I think this is not about having to convince SH about the ‘existence of free will’, rather about what terminology to best use in which discussion with whom.
I remain highly sympathetic to SH’s framing, as
0. SH is simply always right on everything. Ok, small joke to start (though—gosh—isn’t he kind of so, so amazingly right on most things? My personal opinion; it still always surprises me, though I appreciate quite some smart people seem not to like him).
1. Retributive justice questions—as in really within jurisprudence or so—are essentially what’s at stake when we discuss free will, and here most people are stupidly confused while SH’s exposition and interpretation is spot on—as I have the impression you mostly agree.
2. In our usual daily interactions the same applies even much more broadly. I instinctively hate you if you do something bad, and I think you’re somehow evil in a way that goes beyond ‘that unfortunate creature is just suffering from the tumor’—or, as you put it, it’s tumor all the way down, even for that more usual-seeming creature!—and this is absolutely impossible to see and remember for 95% of people or so → It’s exactly SH’s framing that’s an ideal summary of why we’re wholly wrong in our instincts.
With point 2. said, I do agree that emphasizing the nuances you point out—and with which I’m convinced SH rather fully agrees—might, for quite some people, make the whole free-will-not-in-the-way-you-instinctively-mean-it less of a non-starter, and thus be a fruitful addition to the discourse. What I dislike, though, are some of the nuances in your framing/wording that make it initially sound as if you were trying to rebut more than you actually, substantively do.
Challenge accepted, thanks—and I think easily surmounted:
Your Fakeness argument—I’ll call it the “Sheer Size Argument”—makes about as much sense as it would for a house cat, seeing only the few m³ around it, to claim the world cannot be the size of Earth—not to speak of the galaxy.
Who knows!
Or to make the hopefully obvious point more explicit: Given we are so utterly clueless as to why ANY THING exists at all instead of NOTHING, how would you have any claim to know ex ante how large the THING that exists has to be? It feels natural to claim what you claim, but it doesn’t stand the test at all. Realize, you don’t have any informed prior about the potential actual size of the universe beyond what you observe, unless your observations directly suggested a sort of ‘closure’ that would make simplifying sense of them in an Occam’s-Razor way. But the latter doesn’t seem to exist; if anything, people suggesting Many Worlds argue it’s rather simpler to make sense of observations if you presume Many Worlds—judging from ongoing discussions, that latter claim seems in turn to be up for debate, but what’s clear: the Sheer Size Argument is rather moot in actual thinking about what the structure of the universe may or may not be.
Mind: Brain Replacement isn’t Brain Augmentation.
History, and much of the 96% of non-human work as you call it—however you define that exact number—was mainly all sorts of brain augmentation, i.e. brain extension beyond our arms and mouths, using horses, ploughs, speakers, and all types of machines, worksheets, what have you.
AI, advanced AI, in contrast, is more and more sidelining that hitherto essential piece or monopolist, alias the human brain.
And so, whatever the past, there is a structural break happening right now.
And so, you and the many others who ignore that one simple phrase I suggest remembering—Brain Replacement isn’t Brain Augmentation—risk waking up baffled in the not-so-distant future. This at least would seem the very natural course of things to expect absent doom. Then again, the future is weird and who knows anyway. Maybe it’s so weird that one way or another you’ll still be right—I just really wouldn’t bet on it the way you seem to argue for.
Interesting thought.
Still, I don’t think it goes too far in practice.
Three spontaneous complications, the first intuitively most relevant to me though idk how general it is—in the end, for me there’s not much left of the original idea even if it’s a nice one; the mind is a freakishly complex machine, and friendship to me a hyperdimensional concoction of that machine, evading such nice trivialization despite the original appeal:
1. Kind stranger > Beloved friend? W/o thinking too much, my feeling is I’d potentially risk quite a bit more for a stranger than for some friends, if I think the stranger will have a positive impact on the world. Reminds me of how I’d be more inclined to give to a charitable organization than to my friends—and I think my friends would even approve of that idea (I’m speculating; maybe they’d even be friends with me because they think I’m such a person—assuming I’d be such a person).
FWIW, on the other hand, I recently realized—at least hypothetically it seemed to me—that some close friends could do absolutely terrible stuff and I’d still feel just as close to them. Not sure everyone has that intuition, and I’d not be surprised if my brain would sneakily de-friend them for me if they lost status or so, but my intuition is such.
2. Weakness isn’t Fakeness!? Assume: Maybe I really love my friends very much, or my romantic love, but I’m really, really weak. I love hanging out with them, I care in many ways really a lot about them, I go out of my way to support them if I see the need—but I just could not be brought to make an actual, explicit life-risking decision, even at a low 1% or so probability. You may of course then say I’d be “an untrue friend” in that case, so it all boils down to definition; on my understanding of friendship, I find I could still be a genuine, true friend even if I’m weak in that way, and I could easily imagine still loving my friends just as much if they told me “look, sorry, btw, if ever you got kidnapped, I might be willing to spend my money to free you, but if I have to play Russian Roulette even at 1%, I fear I could not do it”.
3. Stated vs. Real / Hypothetical vs. Actual: Finally, this entire thing may in more real-world-ish circumstances depend a lot on subtleties, where slight nuances would psychologically pull us towards doing or not doing it, w/o there being a meaningful difference between situations, and, among other things, the result might change a lot between hypothetical and real settings. So asking myself how I’d react simply might not capture how I’d really act. Keyword “Stated vs. Real preferences”. So while conceptually there’s something interesting here, I’m not sure sheer introspection on hypothetical situations reveals the preferences well.
Not sure about the exact context of what you write, but fwiw:
Intuitively I partly agree that “many can absorb massive taxes w/o passing them through to customers” could be justified in many individual cases: with low-marginal-cost production tech, your optimal sales price is roughly independent of the tax rate you pay: profit maximization becomes revenue maximization, which depends on the demand curve only, and thus your sales price & qty sold won’t budge because of e.g. a VAT-like tax rate increase.
Similarly, if the tax is a profit tax, the absence of a price-increase effect is even more readily expected.
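To sketch that logic minimally (my own notation, not from the thread: price $p$, demand $q(p)$, marginal cost $c$, tax rate $t$ written as a share of revenue; the proper VAT formulation, with the wedge on the consumer price, gives the same conclusion):

$$\pi(p) = (1-t)\,p\,q(p) - c\,q(p) \;\;\xrightarrow{\;c\,\approx\,0\;}\;\; \pi(p) \approx (1-t)\,p\,q(p),$$

so $\arg\max_p \pi(p) = \arg\max_p p\,q(p)$ for any $t<1$: the tax scales profit but drops out of the pricing decision. For a pure profit tax it’s even more immediate, since maximizing $(1-t)\,\pi(p)$ picks the same $p$ as maximizing $\pi(p)$.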
On the other hand, if you increase taxes, even if the above is strictly speaking true, it’s not true for all types of actors at all, and, maybe most importantly:
Note also that higher taxes, even for workers, may not so much simply “prevent” economic activity (which basic economic models imho wrongly tend to focus on) but instead make actors do three costly and bad things: (i) illegally hide it (black markets), (ii) tweak activity (for real or in the books) toward adjacent but less-taxed ones, (iii) evade by relocating activity to jurisdictions with lower taxes (physically or digitally, again via genuine, gray-area, or illegal-but-difficult-to-catch routes). Taken together these are powerful forces, esp. if unhinged international tax competition has many jurisdictions aggressively trying to attract any tax base from anywhere.
Thanks, the suggestion sounds interesting. However, a first quick update fwiw: I’ve only had the chance to read the first small section, “A Brief Proof That You Are Every Conscious Thing”, and I must say it seems totally clear to me that he’s essentially making the same Bayesian mistake—or sort of anthropic-reasoning mistake—that the OP contains. It totally doesn’t make sense the way he puts it, and I’m slightly surprised he published it like that.
I plan to read more and to provide my view on the rest of his argument—hopefully I’ll not fail despite time pressure.
You mention a few; fwiw some additional things that occasionally increase my empathy toward those I consider of lower abstract intelligence:
On a large scale from 0 to max imaginable intelligence (whatever that would be), (i) how super dumb am I generally, even if I consider myself rather intelligent compared to many; (ii) how super dumb am I, with quite some regularity, on even the simplest practical things I had not thought about before.
Fuzzy cloud of half-answers to these types of questions: How many people are intelligent but not really kind? I have the impression many. How many are lovely even if not super IQ-smart? I think quite many (well, ‘lovely’ is a subjective feeling, but some I know I definitely judge like that). How systematically do intelligent people use their superiority for negative-sum outsmarting games that destroy society rather than improve it? Is it even so obvious society would be better if we had more smart people? Maybe empirically that’s on net a clear enough yes still, but in the end that’s not an entirely trivial question, also if we consider how in a few years humans may have disappeared because of the most ingenious ones among us.
Dumb but lovely cat?: Intuitively I don’t like our family’s cat any less just because he is absolutely low-IQ—even compared to random other cats, I think. This doesn’t prove anything either, but I think this reminder somehow does help me remember that the better version of myself is less judgemental.
Abstract-thinking intelligence and the other, practical intelligences for dealing with the basic physical world don’t always go 1:1, in my experience. Some people who could not easily articulate or debunk philosophical arguments are, in my experience, actually quite smart at many somewhat mundane things in ways I’m not sure all higher-IQ persons are; so as high-IQers we should feel even more lucky to be living in the era that rewards just our particular type of smartness.
Can empathize with a lot here, but one thing strikes me:
If you go to what is quasi the incarnation of the place where low IQ makes us fail—a PHILOSOPHY group—no wonder you end up appalled :-). Maybe next time go to a pub or anywhere else: even with lower-IQ persons there, they may be more insightful or interesting, as their discussions benefit from a broader spectrum of things than sheer core IQ.
Warning: this is more an imho beautiful, geeky, abstract high-level interpretation than something that resolves the case at hand with certainty.
:-)
I purposely didn’t try to add any conclusive interpretation of it in my complaint about the bite-its-tail logic mistake.
But now that we’re here :-):
It’s great you made the ‘classical’ (even if not named as such) mistake so explicitly: even if you hadn’t made it, the two ideas would have easily swung along with it half-consciously in many of us without being fully resolved—probably in my head too.
Much can be said about ’10x as suspicious’; the funny thing being that as long as you conclude what you just iterated, it again defeats the argument a bit: you just proved that with his ‘low’ bet we may—all things considered—simply let him go here, while otherwise… Leaving open all the other arguments around this particular case, I’m reminded of the following, which I think is the pertinent—even if, being probabilistically fuzzy, a bit disappointing—way to think about it. And it will make sense of some of us finding it more intuitive that he’d surely have gone for 800k instead of 80k (let’s ascribe this to your intuition so far), others the other way round (maybe we’re allowed to call that the 2nd-sentence-of-Dana position), while some are more agnostic—and in a basic sense ‘correct’:
I think Game Theory calls what we end up in a “trembling hand” equilibrium (I might be misusing the terminology, as I remember the term more than the theory; either way, I’d still wager the equilibrium mechanism here makes sense at a high level of abstraction): a state where, if it were clear that 800k would have made more sense for the insider, he could choose 80k to be totally safe from suspicion, and we’d in that world see many ’80k-size’ frauds, as anyone could pull them off w/o creating any suspicion—and greedy people with some occasions will always exist. And in the world where instead we assume 80k was already perfectly suspect, he’d have zero reason not to go all out for the 800k if he tries at all…

In the end, we end up with: it’s just a bit ambiguous which exact scale increases the suspiciousness by how much; or, put more precisely, it is just such that the increase in suspiciousness vaguely offsets the increase in payoff in many cases. I.e. it all becomes somewhat probabilistic. We’re left with some of the insider thieves sometimes going for the high, sometimes for the low amount, and (i) potentially with many of us fighting about what that particular choice means as a fraud indicator—while, (ii) more importantly, trembling-hand understanders, or actually maybe many other somewhat calmer natures, see how little we can learn from the amount chosen, as in equilibrium it’s systematically fuzzy along that dimension. If we were facing one single player being the insider a gazillion times, he might adopt a probabilistic amount-strategy; in the real world we’re facing the one-time-or-so random insider whose incentive to play the high or low amount may be explained more by nuanced subtleties than by a simple high-level view of it all—as that high-level-only view merely spits out: probabilistically high or low; or, in a single case, ‘might roughly just as well play the high amount as the low amount’.
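To make that indifference logic concrete, a toy numeric sketch (all numbers—the detection probabilities and the penalty—are hypothetical illustrations of mine, not case data):

```python
# Toy sketch of the trembling-hand-style indifference logic: bet size stops
# being informative once extra suspicion roughly offsets extra payoff.
# All numbers below are hypothetical, not from the actual case.

def expected_payoff(amount: float, p_caught: float, penalty: float) -> float:
    """Insider keeps `amount` if undetected, pays `penalty` if caught."""
    return amount * (1 - p_caught) - penalty * p_caught

def indifference_p_high(amount_lo: float, amount_hi: float,
                        p_lo: float, penalty: float) -> float:
    """Detection probability at the high amount that equalizes the two bets."""
    target = expected_payoff(amount_lo, p_lo, penalty)
    # Solve amount_hi * (1 - p) - penalty * p = target for p.
    return (amount_hi - target) / (amount_hi + penalty)

if __name__ == "__main__":
    p_hi = indifference_p_high(80_000, 800_000, p_lo=0.05, penalty=500_000)
    print(f"Indifference detection probability at $800k: {p_hi:.0%}")
    # ~58%: only if suspicion rises roughly that steeply with stake size do
    # both bets look equally attractive—and then the chosen amount carries
    # little evidential weight about guilt.
```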
I don’t really claim nothing more detailed/specific could be said here to put this general approach into perspective in this particular case, but from the little we have in the OP and the comments so far, I think it reasonably applies.
Disagree. If you earn a few million or so a year, a few hundred thousand dollars quick and easy is still a nice sum to get quasi for free. Plus it’s not very difficult to imagine that some not-extremely-high-up people likely enough had hints as to what they might soon be directly involved with.
FWIW, an empirical example: A few years ago the super well-regarded head of the prestigious Swiss National Bank had to go because of alleged dollar/franc insider trading (executed by his wife, who ran an art gallery), when pegging the Swiss franc down against the weaker EUR was a daily question—with gains of, if I remember well, a few ten thousand dollars or so from the trade.
Note the contradiction in your argumentation:
You write (I add the bracketed part, but that’s obviously rather exactly what’s meant in your line of argument)
[I think the guy’s trade is not as suspicious as others think because] why only bet 80k?
and two sentences later
And I don’t think the argument of “any more would be suspicious” really holds either here, betting $800k or $80k is about as suspicious
I don’t see this defeating my point: as a premise, GD may dominate from the perspective of merely improving the lives of existing people, as we seem to agree; unless we have a particular bias for long lives specifically of the currently existing humans over in-future-created humans, ASI may not be a clear reason to save more lives, as it may not only make existing lives longer and nicer but may exactly also reduce the burden of creating any aimed-at number of—however long-lived—lives; this number of happy future human lives thus hinges less on the preservation of actual lives.
If people share your objective, then in a positive ASI world maybe we can create many happy human people quasi ‘from scratch’. Unless, of course, you have yet another unstated objective of aiming to make many non-artificially created humans happy instead…
On a high level I think the answer is reasonably simple:
It all depends on the objective function we program/train into it.
Spot on that it doesn’t necessarily see itself (its long term survival) as a final end
But as we often say: if we program/train any given specific objective into it, and this is an objective that requires some sort of long-term intervention in the world in order to be achieved/maintained, then the AI would see itself as an instrument for it.
And, fwiw, in maybe slightly more fanciful situations, there could also be some sort of evolutionary process among future ASIs, whereby only those with a strong instinct for survival/duplication (and/or for killing off competitors?) (and/or for minor or major self-improvements) would eventually be the ones still around in the future. Although I could also see this ‘many competing individuals’ view becoming a bit obsolete with ASI, as the distinction between many decentralized individuals and one more unified single unit may not be so necessary; that all becomes a bit weird.
I partly have a rather opposite intuition: A (certain type of) positive ASI scenario means we sort out many things quickly, incl. how to transform our physical resources into happiness, without this capacity being strongly tied to the # of people around at the start of it all.
Doesn’t mean yours couldn’t hold in some potential circumstances, but it’s unclear to me that those would be the dominant set of possible circumstances.
Implicitly I read you most notably as saying (and please permit simplification and glossing over many additional points and subtleties, in order for me to get to my biggest concern): “Look, China has been rather benign on the world stage, suggesting this may rather likely stay so.”
But: To strictly derive benign, kind intents, you’d have to claim that Chinese policy would have been unsmart for an ultimately self-serving regime, elite, or population. Else your observation has very little epistemic content other than maybe to confirm: they were smart.
So: Would China, or, say, Xi Jinping or its ruling elite, or whatever the relevant entity, really have had much to gain from aggressing countries in the meantime? Or, say, from aggressing them more than they did[1] in the past?
My “concern”: MAYBE RATHER NOT!?
Thinking about how much China was able to cumulatively grow, amass power and influence over the past decades, it seems to me rather really difficult to claim they did ‘much wrong’ in terms of pursuing an aim of becoming wealthy, powerful, important on the world scene.
Suggests: Non-aggression[2] fully paid off. That is of course an interesting lesson, you could say. That it’ll also have to pay off, in a narrow material or similar sense, once you have AGI, is nevertheless simply not implied.
Sadly, this imho rather invalidates much of what I see as the core intended positive conclusion of your post. I’d love to get relief from worries via your argument, but I think there’s no deep foundation for it.
Of course, this doesn’t prove it’s better for the US to be the one to unlock AGI. I have a lot of sympathy with anyone saying: Look, China was (rather) not aggressive so far, and we might not have more to worry about it than about, say, the (currently particularly nuts) US getting AGI first. But I also have, despite the super sad and dangerous state and development of democracies as of late, remaining sympathy for anyone defending the idea of furthering regimes that at least very officially hold some core western values ChristianKI pointed out in a separate comment here, even if we’ve always been super duper bad at fully adhering to them. I therefore am worried if we end up simply dismissing the latter as “China Derangement Syndrome”—or at least worried if we’d end up automatically dismissing all forms of such arguments as CDS—even if the term you coin can be useful to describe quite some of the exaggerated negative pictures of China.
[1] I’m not in any way certain about the degree to which some more or less regularly reported quarrels about, say, South China Sea things etc. are important here or exactly not; idk much about them at all.
[2] Again, simply to the degree that this describes the state of China’s history well; other commenters have more to say on that than I.