The automated reviewer is more prudish! Sonnet happily made me Less Schlong, but APPARENTLY some parts of our natural human anatomy are unwelcome here
tslarm
You’re talking down to this person (far too much, IMO; just make your arguments or don’t, and to the extent that you’re unwilling to engage, don’t fill the gap with status games), while also failing to demonstrate that you’ve taken critiques of free market capitalism at all seriously.
OK. Say the average founder can capture 10% of the value he creates. That’s a preposterously high amount, but we’ll use it to make the math easier. This means that once he creates $100M of value there is no reason for him to use his various gifts to create more. Any additional value he creates will get him a reputation of obscenity, and he’ll be ritually stripped of his portion of it by a spiteful populace. Why would anyone do that? I would make sure to keep my contributions to humanity very small and local, only benefiting myself and my community at most, so that I wouldn’t be in danger of helping too many people and becoming obscene.
The idea that wealth earned in a capitalist economy reliably represents ‘value created’, in the sense we actually care about, is one of the main points of contention here! It’s something to be argued for, not taken as an axiom and used to dismiss their concerns as if they’re stupid.
While it is interesting, I don’t think I understand the connection between this illusion and illusionism.
I can identify the shape of the card, and I can see the marks on it, but I cannot tell what color they are; they don’t seem one way or the other. What is the quale associated with the card? If the quale is red or black, then that conflicts with the way the card seems to me. If the quale does not have a defined color, that conflicts with the way my peripheral vision seems to be in full color. In either case, it seems I am deeply wrong, not just about what my eyes can physically process, but about the phenomenal contents of my experience.
You say that you can’t tell what the colour is, and it doesn’t seem to be one colour or the other. So it seems clear to me that you’re having neither a ‘red quale’ nor a ‘black quale’. You say that this conflicts with your impression that your peripheral vision is in full colour, and I guess I just don’t see the problem here.
If you were able to accurately identify the colour without consciously seeing it, then that would be a weird, blindsight-like phenomenon, but not a threat to the existence of qualia. You say that you’re not able to accurately identify the colour at this point, so the situation is weird in a different way, but still not a threat to the existence of qualia—only to the coherence of your meta-qualia (specifically your feeling that your peripheral vision is in full colour) with your visual qualia (which don’t have a colour attached to the card). I can see how that would threaten a very specific version of qualia realism, but not how it would threaten qualia realism in general.
Maybe an issue at your end?
It looks normal for me (10xed screenshot from the original, not from your quote):
edit: that’s from Chrome, but it’s normal for me in Firefox too.
The OP’s suggestion is an Armageddon variant, though, so you’re guaranteed a non-draw result after 1 game. Alternating games (even Armageddon games) could leave the players tied indefinitely.
Thanks for clarifying! I’m still pretty sceptical about those options, because it was already public knowledge that the DoW had set Anthropic a deadline of Friday evening to comply or face the consequences. And the substance of the dispute was already publicly known. But I do take your point that this public statement was a choice.
Given the line the DoW has taken, what non-escalatory response was available to Anthropic other than total capitulation?
Yes, I get it, I’m very ignorant. (If you needed to get that off your chest, you could perhaps have said it directly in one sentence, rather than spending 10000 words patiently implying it.) But you’re still handwaving the interesting parts.
Obviously “I am in the first 10% of people” is a prediction; I already agreed to rephrase it as “I will eventually turn out to have been in the first 10% of people”. I’m not trying to deduce anything from the fact that it ‘sounds implausible’, and I’m not trying to bring any information back in time from the moment it turns out to be true or false in my case. I’m noting that it will definitely turn out to be false from the perspective of 90% of people who ever live, and asking why *this* fact is obviously irrelevant to the credence I should give it.
The answer is not “bayesianism, obviously”. Bostrom, even back when he was writing about this stuff, was not a heathen frequentist, and he wasn’t as stupid as me. (I’m pretty sure he’d even heard of causality.)
I’m a little puzzled why I’m having to point this out
You’re having to point it out because you kept emphatically insisting on the opposite! But now that you’ve clarified that obviously we can and do have evidence about future events that are not fully predictable, I don’t understand how this strand of your argument holds together. It was presented as support for this claim:
Statements like “by definition, “I am in the first 10% of people” is false for most people” are incompatible with Bayesianism: you just broke one of its fundamental assumptions: causality. What you meant was “By definition, “He was in the first 10% of people” will, once we’re extinct, turn out to have been false for most people.” — I hope that careful distinction makes it entirely clear why the Doomsday Argument is nonsense?
You haven’t explained why that temporal distinction is so crucial, and why this rephrasing doesn’t serve the same purpose as the original statement in the doomsday argument:
“By definition, “I will eventually turn out to have been in the first 10% of people” will eventually turn out to have been false for most people”
As far as I’m concerned, “I will eventually turn out to have been in the first 10% of people” is obviously what “I am in the first 10% of people” meant in the first place. So what’s the important difference here?
(All claims about the future are claims about what will eventually turn out to be the case, and arguably all are also claims about what will eventually turn out to have been the case, i.e. that present conditions were such as to lead to the later outcomes. I feel like maybe there’s an important disagreement, or misunderstanding on my part, adjacent to this, but I can’t pin it down based on what you’ve written.)
One thing I should check, since we got tripped up once on absolutes: are you saying the doomsday argument is simply invalid and has literally no bearing on your probabilities? Or are you saying it has non-zero but negligible force?
(I didn’t downvote you, by the way; although we’re evidently both finding this a bit frustrating, I appreciate your sincere engagement throughout this discussion! No pressure to keep responding, though, if you feel it’s no longer worthwhile.)
I still feel like you’re focusing mainly on refuting things I haven’t said and don’t think, but, in any case, this is just obviously untrue:
Now, apply all the relevant evidence we have accumulated so far to these priors, using Bayes rule. Which is: none whatsoever.
we have no evidence, and we know for a fact that the evidence doesn’t yet exist so we can’t just go find it
I’d prefer to stick to the actual range of possible futures, rather than artificially limiting it to two extreme cases, but regardless—are you really saying nothing we know, and nothing we might conceivably discover, could update us in one direction or the other? That if, tomorrow, you learn that a rogue ASI has already begun construction of a carbon-fibre paperclip factory and has declared its intention to convert every human into paperclips by 2031, this is irrelevant because information can’t flow backwards in time?
First, I sympathise! And I don’t think your sadness is a mistake; I grew up with quite fuzzy religious beliefs and moved away from them gradually, but if I had been a true believer I think I would have felt a great sense of loss. And I’ve certainly felt (and sometimes continue to feel) a range of negative existential emotions that could have been comforted or completely obviated by religious belief. But I don’t think your new worldview has the ramifications you’re implying it does.
Why should anything matter if we’re just arbitrary self-reinforcing arrangements of chemicals?
We’re not “just” that, unless you’re taking for granted that “arbitrary self-reinforcing arrangements of chemicals” have the capacity for joy, suffering, love, creativity, kindness, and so on and so on. And if you are taking that for granted, well, the question almost answers itself! If someone’s suffering and you comfort them, or if you have a family and you love them, why on earth shouldn’t that matter?
It matters to those other people, and I’m guessing that, at least at some level, it also still matters to you. I think you can choose to embrace that, rather than talking yourself out of it just because there’s probably no divine third party who also cares.
I’m afraid I don’t get this at all; I still have no idea why the second paragraph is relevant or why you think I’m building into my priors the assumption that I have access to information that I don’t know and couldn’t predict. I think it’s completely normal to consider predictions about the future to have truth values regardless of whether the eventual outcome could be calculated with certainty now. I think you’re saying that a prediction about the future only has a truth value if the person making it (or someone else who lives at the same time as them?) could, at least in theory, determine that truth value now. If so, and if that’s crucial to the point you’re making, then that’s the part I need you to explain/defend.
But if I assume you’re right and change the statement to “By definition, “I will eventually turn out to have been in the first 10% of people” will eventually turn out to have been false for most people”, what changes and why does this render the argument nonsense?
edit: one thing that could point to an important disagreement is the phrase “or even estimating it”. If you mean we are in a state of complete ignorance with respect to the eventual truth or falsehood of statements about the eventual total number of people, i.e. we currently have no relevant information, I think that’s obviously false. I can argue the point if needed, but first I want to check if that is what you mean.
We don’t need to deny that there’s a meaningful first-person perspective, only that any particular first-person perspective is special (in this case, special in that it’s the ‘true’ continuation of the original). When a perfect copy is made, two meaningful first-person perspectives exist, they both see themselves as continuations of the original, and neither is more right or wrong than the other in any deep sense.
From my perspective, unless there is something akin to a soul or disembodied consciousness, there’s simply no fact of the matter here beyond the more granular facts. I don’t think “the same consciousness” means anything more than the prosaic ways in which we might define it, e.g. with reference to overlapping chains of experience and memory. We instinctively care about it, but I think that’s fairly easy to explain as a byproduct of our self/other distinction and self-preservation instinct and so on.
When you ask “would this be the same conscious experience”, do you have a clear idea of what “the same conscious experience” means, and you’re wondering whether the world works in such a way that the concept would hold in these hypotheticals? Or is it a concept that feels important, but which you can’t pin down, and your goal here is to analyse it?
I think that’s a reasonable interpretation of the actual serious content of the post, and my understanding of Eliezer’s position basically matches yours. But the post starts like this:
tl;dr: It’s obvious at this point that humanity isn’t going to solve the alignment problem, or even try very hard, or even go out with much of a fight. Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with slightly more dignity.
And it sticks with the ‘death with dignity’ framing, talking about doubling our chances of survival from 0% to 0%, striving to earn ‘dignity points’ to take to our graves, and so on. It’s also explicitly presented as a MIRI thing, not just a personal Eliezer thing.
Underneath all this, of course, he is talking about doing actually useful things to increase our survival odds (albeit from ‘negligible’ to ‘still basically negligible’). But both the surface-level framing and the actual emotional content are drenched in despair and fatalism.
It’s probably clear that I think the ‘death with dignity’ framing is bad and unhelpful, but obviously I can’t be sure that Eliezer’s post did more harm than good. You and Harlan evidently hate being cast as inevitable-doomers, though, and if this was an unfair and harmful move on the part of Amodei, I think communications like the Death With Dignity post are partly to blame for making it a viable one.
Harlan 0:17:13
A really repeated theme is the inevitability thing. It’s pretty frustrating to hear, as someone who’s spending effort trying to help with this stuff in some kind of way that we can, and for someone to characterize your camp as thinking doom is inevitable. If I thought it was inevitable, I would just be relaxing. I wouldn’t bother doing anything about it. There’s some sense in which if it was inevitable, that would be worse, but it would also mean that we didn’t really have to do anything about it.

Liron 0:17:42
Just to repeat your point in case viewers don’t get the connection: Dario is saying that doomerism is so unproductive because the Yudkowskis of the world — he doesn’t explicitly name Yudkowsky, but he’s basically saying our type — we think that we’re so doomed that we’re just fear-mongering, and it’s pointless.

Eliezer played directly into this with his Death With Dignity “joke”. The past is past, but if you guys haven’t yet said openly and plainly “fuck that, it was the product of personal exhaustion and being too cute by half, and whatever level of irony it was on it doesn’t represent us”, maybe that would be worth doing.
(“It was just an April Fool’s joke” wouldn’t count, because the post obviously had an element of “ha ha only serious”. By design, the serious meaning was impossible to pin down, but to pretend the whole thing was simple first-level irony would be insulting.)
But you are implicitly assuming that you already know this process is in fact going to continue. So it’s rather as if you asked Fred, and he told you yeah, there’s always a big rush at the end of the day, few people get here as early as you.
I didn’t mean to imply certainty, just uncertain expectation based on observation. Maybe I asked Fred, or the other customers, but I didn’t receive any information about ‘the end of the day’—only confirmation of the trend so far.
(I’m not trying to be difficult for the sake of it, by the way! I just want to think these things through carefully and genuinely understand what you’re saying, which requires pedantry sometimes.)
edit in response to your edit:
But if you know for a fact that all the customers are only 10 minutes old (including you) so decided to come here less than 10 minutes ago, then the only reasonable assumption is that there’s a very fast population explosion going on, and you have absolutely no idea how much longer this is going to last, or how soon Fred will run out of chili and close the shop. In that situation, your predictability into the future is just short, and you just don’t know what’s going to happen after that — and clearly neither does Fred, so you can’t just ask him.
I think I’m not quite understanding the distinction here. Why is there an important difference between “this trend is based on mechanisms of which I’m ignorant, such as the other customers’ work hours or their expectations about chili quality over time” and “this trend is based on different mechanisms of which I’m also ignorant, i.e. birth rates and chili inventory”?
But that’s not how I’m thinking of it in the first place—I’m not positing any random selection process. I just don’t see an immediately obvious flaw here:
by definition, “I am in the first 10% of people” is false for most people
so I should expect it to be false for me, absent sufficient evidence against
And I still don’t quite understand your response to this formulation of the argument. I think you’re saying ‘people who have ever lived and will ever live’ is obviously the wrong reference class, but your arguments mostly target beliefs that I don’t hold (and that I don’t think I am implicitly assuming).
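For what it’s worth, the Bayesian version of the argument I have in mind can be sketched numerically. Everything specific below — the two hypotheses, the 50/50 prior, and the rough birth-rank figure — is an illustrative assumption, not something either of us has endorsed; the point is only that conditioning on one’s birth rank shifts weight toward smaller totals:

```python
# Toy doomsday update: put a prior over the total number of people N,
# observe your birth rank r, and apply Bayes' rule.
# P(rank | N) is taken to be uniform: 1/N if rank <= N, else 0.

def doomsday_posterior(hypotheses, prior, rank):
    """Return P(N | rank) for each hypothesised total N."""
    likelihoods = [(1.0 / n if rank <= n else 0.0) for n in hypotheses]
    unnorm = [lik * p for lik, p in zip(likelihoods, prior)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two illustrative hypotheses: humanity totals 200 billion vs 200 trillion people.
hyps = [2e11, 2e14]
prior = [0.5, 0.5]
rank = 1e11  # very roughly, the number of humans born so far

post = doomsday_posterior(hyps, prior, rank)
print(post)  # the 'small total' hypothesis dominates: roughly [0.999, 0.001]
```

Whether this formal move is legitimate (in particular, whether the uniform likelihood over birth ranks smuggles in the ‘random selection’ you object to) is exactly what’s at issue, but at least it makes explicit where the disagreement would have to bite.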
I’m not necessarily endorsing this (haven’t re-read it yet and don’t remember fully agreeing with it), but it immediately came to mind: https://www.astralcodexten.com/p/give-up-seventy-percent-of-the-way. It’s specifically about the process by which words become slurs, but it directly addresses the question of when to go along with the change and when not to.