Just wanted to say I signed up for a trial on the strength of this pitch, so well done! It sounds like something that could be really useful for me.
frankybegs
“it is a straightforwardly observable fact that, for many people, their shoulder advisors occasionally offer thoughts and insights that the people literally would not have thought of, otherwise.”
How can this be observable, let alone straightforwardly?
Apologies, I just read your reply to Joseph C.
I would like to request the information, your reservations notwithstanding. I am happy to sign a liability waiver, or anything of that nature that would make you feel comfortable. I am also happy to share as much data as it is feasible to collect, and believe I could recruit at least some controls. As I mention above, I don’t think I’ll be able to implement the intervention in its entirety, given practical and resource constraints, but given your stated interest in a ‘1000 ships’ approach this seems like it could be a positive for you.
I certainly don’t “see what I’m doing”, because I wasn’t trying to do anything other than explain why your engagement with STMT seemed combative and unfairly accusatory. It did, and it does, reading it later. I hope/suspect that with the advantage of the same temporal remove, you will also see exactly why I and many others thought so.
I don’t know enough to have a valuable opinion on the wider argument, but this sentence:
“EY is a smart guy and I’m sure he could contribute to accelerating AI if he wanted to, but I don’t think him withholding information from us does anything to delay AI.”
seems straightforwardly self-contradictory.
No.
Well, this isn’t helpful! I was genuinely trying to understand what the point of the quoted statement is. In the context, it seemed like that was the most reasonable interpretation. If it isn’t, then it’d be more productive to explain what you did mean.
I’m sorry that you feel misrepresented. For me, continuing to argue (in response to criticism or otherwise) that there is something wrong with STMT not taking the bet, and that their stated reason is insufficient, and making what read to me like implicit accusations of dishonesty, seems a lot like ‘making combative noise’. It’s quite an imprecise charge, though, and perhaps unhelpful of me to make.
Anyway, I certainly don’t want to be making combative noise, and policing your tone isn’t really adding anything to the (important) object-level discussion, so I’ll beat a retreat.
I just want to say I don’t think that was unclear at all. It’s fair to expect people to know the wider meaning of the word ‘alien’.
I agree on the latter example, which is a particularly unhelpful one to use unless strictly necessary, and not really analogous here anyway.
But on the lock example, what is the substantive difference? His justification seems to be ‘it was easy to do, so there’s nothing wrong with doing it’. In fact, the only difference I detect makes the doxxing look much worse, because he’s saying ‘it was easy for me to do, so there’s nothing wrong with me doing it on behalf of the world’.
So while it’s also heat-adding, on reflection I can’t think of any real world example that fits better: wouldn’t the same justification apply to the people who hack celebrities for their private photos and publicise them? Both could argue:
> It was easy for me (with my specialist journalist/hacker skills) to access this intended-to-be-private information, so I see no problem with sharing it with the world, despite the strong, clearly expressed preference of its subject that I not do so.
That seems to generalize to “no-one is allowed to make any claim whatsoever without consuming all of the information in the world”.
I would say that it generalises to ‘one shouldn’t make a confident proclamation of near-certainty without consuming what seems to be very relevant information to the truth of the claim’. Which I would agree with.
I think what is missing here is that this debate has been cited repeatedly in rationalist spaces, by people who were already quite engaged with the topic, familiar with the evidence, and in possession of carefully-formed views, as having been extremely valuable and informative, and having shifted their position significantly. I think it’s reasonable to expect someone to consume that information before claiming near-certainty on the question.
> Reasonable norms of good debate suggest relevant counterarguments should be proportional in length and readability to the original argument, which in this case is Roko’s compact nine-minute post.
This seems entirely *un*reasonable to me. Some arguments simply can’t be properly made that concisely, and this principle seems to bias us towards finding snappy, simplistic explanations rather than true ones.
Someone else mentioned ‘The Pyramid and the Garden’, but I’m reminded of the sort-of-related argument about Atlantis in https://slatestarcodex.com/2016/11/16/you-are-still-crying-wolf/ : sometimes, boring reality and ‘it’s actually just a series of coincidences’ require a lot more explaining than a neat little conspiracy theory. Not to tar the lab leak hypothesis by calling it a conspiracy theory (while of course it literally is one, it doesn’t deserve to be demeaned by the term’s modern connotation of zany insane-person nonsense), but it is easy to see why ‘these facts seem unlikely to be a coincidence’ might be easier to argue concisely than its rebuttal, and a norm where the person who can state their argument more persuasively in shortform wins doesn’t seem like one that’s going to promote optimal truth-seeking.
I don’t think someone should need to pay you thousands of dollars to engage with full arguments for and against a proposition before you claim near-certainty about it. It’s just sort of a prerequisite for having that kind of confidence in your belief, or having it be taken seriously. Perhaps particularly when you’re not only disagreeing with the expert consensus, but calling that expertise into question because they disagree with you.
You specify a style for citation! By your own logic, this should be of academic-level rigour, surely? Pleading ‘oh it isn’t supposed to be convincing’ is the exact same motte and bailey that Matt Walker is doing with his pop-sci that he self-cites.
This is an amazing bit of work, and one of the main reasons I come to LW is to find interesting, well-supported arguments that make me revise or at least question what I believe about important stuff. This does that, and I want to send it to everyone I know. But it’s hard to do so when you undermine your credibility at points (in basically the ways that All-American Breakfast has outlined).
People are going to be motivated to preserve their dearly held beliefs about sleep, and you give them unnecessary ammo to dismiss you as an internet crazy.
I sort of went the other way from most people, in that while I came in thinking blackmail should be illegal (which I think is true of almost everyone who hasn’t really considered it in depth), I immediately was sympathetic to Robin’s argument.
But actually, by the end, I was more firmly convinced of the desirability of illegality. Zvi’s point about incentives is the most important consideration, I think: the prohibition of the most powerful material incentive to obtain and release information will make the average information release much likelier to be morally motivated, which in turn makes it more likely to be the kind of information release we want. Robin’s main contention, that it’s a strange, arbitrarily one-sided sort of a rule, seems comparatively unimportant if the rule produces better outcomes.
The problem with “lab-leak is unlikely, look at this 17-hour debate” is that the argument is too short, not too long.
It isn’t an argument, it’s a citation.
I don’t think a 17-hour debate is “inaccessible” to someone who is invested in this issue and is making extremely strong, potentially seriously libellous claims without having investigated some of the central arguments on the question at hand.
A foundational text in some academic field might take 17 hours to read, but you would still expect someone to have read it before making wild a priori claims that radically contradicted that field’s expert consensus. I don’t think you’d take that person seriously at all if they hadn’t, and would in fact consider it very irresponsible (and frankly idiotic) for them to even make the claims until they had.
That’s not to say that this debate should be treated as foundational to the study of this question, exactly, but… well, as I said elsewhere:
> This debate has been cited repeatedly in rationalist spaces, by people who were already quite engaged with the topic, familiar with the evidence, and in possession of carefully-formed views, as having been extremely valuable and informative, and having shifted their position significantly.
I think that makes familiarising yourself with those arguments (whether from the debate or another equivalent-or-better source) a prerequisite for making the kind of strong, confident claims Roko is making. At the moment, he’s making those claims without the information necessary to be anywhere near as confident as he is.
Your post appears to, by repeatedly emphasising the distance in the context of arguing that a zoonotic origin is unlikely.
It does remove the flaw, because it’s a thought experiment. It doesn’t have to be plausible. It merely tests our evaluative judgements and intuitions.
I think you need to read more of the writings here re: scepticism of one’s own beliefs.
One specific thing that I’d definitely have challenged is the ‘I don’t think the New Yorker article was very fair to my point of view’. What point of view, specifically, and how was it unfair? Again, very much easier from the comfort of my office than in live conversation with him, but I would have loved to see you pin him down on this.
Sorry if I’ve missed something about this elsewhere, but is it possible to explain what it involves to people who aren’t going to properly do it?
I don’t have 4+ hours a day to spare at the moment, nor $10k, but I’d love to know what the intervention involves so I can adopt as much of it as is feasible (given it sounds like a multi-pronged intervention). Unless there’s reason to think it only works as an all-or-nothing? Even just the supplements on their own sound like they might be worth trying, otherwise.
Isn’t the fact that it’s the largest wet market in central China relevant here? Surely that greatly increases the chance of it travelling to Wuhan specifically in a zoonotic origin scenario, because animals are brought there from all around.