FYI I am on that list and fine with it—curi and I discussed this post a bit here: https://www.youtube.com/watch?v=MxVzxS8uMto
I think you’re wrong on multiple counts. Will reply more in a few hours.
I went through the maths in OP and it seems to check out. I think the core inconsistency is that Solomonoff Induction implies $\mathrm{len}(X \land Y) = \mathrm{len}(X) + \mathrm{len}(Y)$, which is obviously wrong. I’m going to redo the maths below (breaking it down step-by-step more). curi has $\mathrm{len}(X \land Y) = 2 \cdot \mathrm{len}(X)$, which is the same inconsistency given his substitution. I’m not sure we can make that substitution, but I also don’t think we need to.
Let $X$ and $Y$ be independent hypotheses for Solomonoff induction.

According to the prior, the non-normalized probability of $X$ (and similarly for $Y$) is:

$$P(X) = 2^{-\mathrm{len}(X)} \tag{1}$$

Since $X$ and $Y$ are independent, what is the probability of $X \land Y$?

$$P(X \land Y) = P(X) \cdot P(Y) = 2^{-(\mathrm{len}(X) + \mathrm{len}(Y))} \tag{2}$$

However, by Equation (1) we have:

$$P(X \land Y) = 2^{-\mathrm{len}(X \land Y)} \tag{3}$$

thus

$$\mathrm{len}(X \land Y) = \mathrm{len}(X) + \mathrm{len}(Y) \tag{4}$$

This must hold for any and all $X$ and $Y$.

curi considers the case where $X$ and $Y$ are the same length. Starting with Equation (4), we get:

$$\mathrm{len}(X \land Y) = 2 \cdot \mathrm{len}(X) \tag{5}$$
but (6)
and (7)
so: (8)
curi has slightly different logic and argues which I think is reasonable. His argument means we get . I don’t think those steps are necessary but they are worth mentioning as a difference. I think Equation (8) is enough.
I was curious about what happens when $\mathrm{len}(X) \neq \mathrm{len}(Y)$. Let’s assume the following: (9)
so, from Equation (2): (10)
by Equation (3) and Equation (10): (11)
but Equation (9) says --- this contradicts Equation (11).
So there’s an inconsistency regardless of whether $\mathrm{len}(X) = \mathrm{len}(Y)$ or not.
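To make the arithmetic in Equations (1) to (4) concrete, here’s a minimal numeric sketch (my own illustration, not from curi’s post; the bitstrings are made up), treating each hypothesis as a bitstring program with the unnormalised 2^-len prior:

```python
# Toy illustration of Equations (1)-(4): treat each hypothesis as a program
# (a bitstring) and give it the unnormalised prior 2^-(program length).

def prior(program: str) -> float:
    """Unnormalised Solomonoff-style prior: 2^-len(program)."""
    return 2.0 ** -len(program)

x = "10110"  # hypothetical program for hypothesis X (5 bits)
y = "001"    # hypothetical program for hypothesis Y (3 bits)

# Equation (2): independence gives P(X and Y) = P(X) * P(Y).
p_by_independence = prior(x) * prior(y)

# Equation (3): applying the length prior directly to the conjunction. The two
# agree only if len(X and Y) is exactly len(X) + len(Y), i.e. Equation (4).
p_by_length = 2.0 ** -(len(x) + len(y))

assert p_by_independence == p_by_length  # both are 2^-8
```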
lsusr said:
(1) Curi was warned at least once.
I’m reasonably sure the slack comments refer to events from 3 years ago, not anything in the last few months. I’ll check, though.
There are some other comments about recent discussion in that thread, like this: https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction?commentId=38FzXA6g54ZKs3HQY
gjm said:
I had not looked, at that point; I took “mirrored” to mean taking copies of whole discussions, which would imply copying other people’s writing en masse. I have looked, now. I agree that what you’ve put there so far is probably OK both legally and morally.
My apologies for being a bit twitchy on this point; I should maybe explain for the benefit of other readers that the last time curi came to LW, he did take a whole pile of discussion from the LW slack and copy it en masse to the publicly-visible internet, which is one reason why I thought it plausible he might have done the same this time.
I don’t think there is a case for (1). Unless gjm is a mod and there are things I don’t know?
lsusr said:
(2) Curi is being banned for wasting time with long, unproductive conversations. An appeals process would produce another long, unproductive conversation.
habryka explicitly mentions curi changing his LW commenting policy to be ‘less demanding’. I can see the motivation for expediency, but the mods don’t have to speedrun it. I think it’s bad there wasn’t any communication beforehand.
lsusr said:
(3) Specific quotes are unnecessary. It blindingly obvious from a glance through curi’s profile and even curi’s response you linked to that curi is damaging to productive dialogue on Less Wrong.
I don’t think that’s the case. His net karma has increased, and judging him for content on his blog—not his content on LW—does not establish whether he was ‘damaging to productive dialogue on Less Wrong’.
His posts on Less Wrong have been contributions; for example, www.lesswrong.com/posts/tKcdTsMFkYjnFEQJo/can-social-dynamics-explain-conjunction-fallacy-experimental is a direct response to one of EY’s posts and it was net-upvoted. He followed that up with two more net-upvoted posts:
This is not the track record of someone wanting to waste time. I know there are disagreements between LW and curi / FI. If that’s the main point of contention, and that’s why he’s being banned, then so be it. But he doesn’t deserve to be mistreated and have baseless accusations thrown at him.
lsusr said:
The strongest claim against curi is “a history of threats against people who engage with him [curi]”. I was able to confirm this via a quickly glance through curi’s past behavior on this site. In this comment threatens to escalate a dialogue by mirroring it off of this website. By the standards of collaborative online dialogue, this constitutes a threat against someone who engaged with him.
We have substantial disagreements about what constitutes a threat, in that case. I think a threat needs to involve something like danger, or violence, or something like that. It’s not a ‘threat’ to copy public discussion under fair use for criticism and commentary.
I googled the definition, and these are the two results (for define:threat):
a statement of an intention to inflict pain, injury, damage, or other hostile action on someone in retribution for something done or not done.
a person or thing likely to cause damage or danger.
Neither of these apply.
I’m not sure about other cases, but in this case curi wasn’t warned. If you’re interested, he and I discuss the ban in the first 30 mins of this stream
FYI and FWIW curi has updated the post to remove emails and reword the opening paragraph.
http://curi.us/2215-fallible-ideas-post-mortems and http://curi.us/2215-fallible-ideas-post-mortems#18059
testing \latex \LaTeX
does anyone know how to label equations and reference them?
@max-kaye u/max-kaye https://www.lesswrong.com/users/max-kaye
This is commentary I started making as I was reading the first quote. I think some bits of the post are a bit vague or confusing, but I think I get what you mean by anthropic measure, so it’s okay in service of that. I don’t think equating anthropic measure to mass makes sense, though; counterexamples seem trivial.
> The two instances can make the decision together on equal footing, taking on exactly the same amount of risk, each- having memories of being on the right side of the mirror many times before, and no memories of being on the wrong- tacitly feeling that they will go on to live a long and happy life.
feels a bit like quantum suicide.
note: having no memories of being on the wrong side does not make this any more pleasant an experience to go through, nor does it provide any reassurance against being the replica (presuming that’s the one which is killed).
> As is custom, the loser speaks first.
naming the characters Paper and Scissors is a neat idea.
> Paper wonders, what does it feel like to be… more? If there were two of you, rather than just one, wouldn’t that mean something? What if there were another, but it were different?… so that-
>
> [...]
>
> Scissors: “What would it feel like? To be… More?… What if there were two of you, and one of me? Would you know?”
isn’t paper thinking in 2nd person but then scissors in 1st? so paper is thinking about 2 ppl but scissors about 3 ppl?
> It was true. The build that plays host to the replica (provisionally named “Wisp-Complete”), unlike the original’s own build, is effectively three brains interleaved
Wait, does this now mean there’s 4 ppl? 3 in the replica and 1 in the non-replica?
> Each instance has now realised that the replica- its brain being physically more massive- has a higher expected anthropic measure than the original.
Um okay, wouldn’t they have maybe thought about this after 15 years of training and decades of practice in the field?
> It is no longer rational for a selfish agent in the position of either Paper nor Scissors to consent to the execution of the replica, because it is more likely than not, from either agent’s perspective, that they are the replica.
I’m not sure this follows in our universe (presuming it is rational when it’s like 1:1 instead of 3:1 or whatever). like I think it might take different rules of rationality or epistemology or something.
> Our consenters have had many many decades to come to terms with these sorts of situations.
Why are Paper and Scissors so hesitant then?
> That gives any randomly selected agent that has observed that it is in the mirror chamber a 3⁄4 majority probability of being the replica, rather than being the original.
I don’t think we’ve established sufficiently that the 3 minds 1 brain thing are actually 3 minds. I don’t think they qualify for that, yet.
> But aren’t our consenters perfectly willing to take on a hefty risk death in service of progress? No. Most Consenters aren’t. Selling one’s mind and right to life in exchange for capital would be illegal.
Why would it be a hefty risk? Isn’t it 0% chance of death? (the replicant is always the one killed)
> In a normal mirror chamber setup, when the original enters the mirror chamber, they are confident that it is the original who will walk out again. They are taking on no personal risk. None is expected, and none is required.
Okay we might be getting some answers soon.
> The obvious ways of defecting from protocol- an abdication of the responsibility of the consenter, a refusal to self-murder, an attempt to replicate without a replication license- are taken as nothing less than Carcony.
Holy shit this society is dystopic.
> It would be punished with the deaths of both copies and any ancestors of less than 10 years of divergence or equivalent.
O.O
> But if, somehow, the original were killed? What if neither instance of the Consenter signed for their replica’s execution, and the replica were left alive. That would not be considered Carcony. It would not even be considered theft- because a brain always belongs to its mind.
I’m definitely unclear on the process for deciding; wouldn’t like only one guillotine be set up and both parties affixed in place? (Moreover, why wouldn’t the replica just be a brain and not in a body, so no guillotine, and just fed visual inputs along with the mirror-simulation in the actual room—sounds feasible)
> What if neither instance of the Consenter signed for their replica’s execution
Wouldn’t this be an abdication of responsibility as mentioned in the prev paragraph?
> So, do you see now? Do you see how Consenter Nai Paper-Chell-Glass-Stratton was motivated by a simple alignment of the payoff matrices?
Presumably to run away with other-nai-x3-in-a-jar-stratton?
> Paper: “You wouldn’t do that to me. Look… if you’re the original… And I do myself, and I’m the replica. I wont actually be dead, because if you destroy a representation of your brain case in a simulation that doesn’t actually destroy you in real life. I might not even desync. Whatever happens, I’ll still be alive after the penetration so I’ll know I’m the replica, but you might not know. It might look like I’m really dead. And you’ll have no incentive to follow through and do yourself at that point.”
> Scissors: “I still don’t see it.”
So both parties sign for the destruction of the replica, but only the legit Nai’s signing will actually trigger the death of the replica. The replica Nai’s signing will only SIMULATE the death of a simulated replica Nai (the “real” Nai being untouched) - though if this happened wouldn’t they ‘desync’ - like not be able to communicate? (presuming I understand your meaning of desync)
> Paper: ”… If you’re the replica, it doesn’t matter whether you do yourself, you’ll still get saved either way, but you’re incented not to do yourself because having a simulated spike stuck through your simulated head will probably be pretty uncomfortable. But also, if you’re the original, you’re sort of doomed either way, you’re just incented to run off and attempt Carcony, but there’s no way the replica would survive you doing that, and you probably wouldn’t either, you wouldn’t do that to me. Would you?”
I don’t follow the “original” reasoning; if you’re the original and you do yourself the spike goes through the replica’s head, no? So how do you do Carcony at that point?
> The test build is an order of magnitude hardier than Nai’s older Cloud-Sheet. As such, the testing armature is equipped to apply enough pressure to pierce the Cloud-Sheet’s shielding, and so it was made possible for the instances to conspire to commit to the legal murder of Consenter Nai Scissors Bridger Glass Stratton.
So piercing the shielding of the old brain (cloud-sheet) is important b/c the various Nai’s (ambiguous: all 4 or just 3 of them) are conspiring to murder normal-Nai and they need to pierce the cloud-sheet for that. But aren’t most new brains they test hardier than the one Nai is using? So isn’t it normal that the testing-spike could pierce her old brain?
> A few things happened in the wake of Consenter Paper Stratton’s act of praxis.
omit “act of”, sorta redundant.
> but most consenter-adjacent philosophers took the position that it was ridiculous to expect this to change the equations, that a cell with thrice the mass should be estimated to have about thrice the anthropic measure, no different.
This does not seem consistent with the universe. If that was the case then it would have been an issue going smaller and smaller to begin with, right?
Also, 3x lattices makes sense for error correction (like EC RAM), but not 3x mass.
> The consenter union banned the use of mirror chambers in any case where the reasonable scoring of the anthropic measure of the test build was higher than the reasonable scoring of a consenter’s existing build.
this presents a problem for testing better brains; curious if it’s going to be addressed.
I just noticed “Consenter Nai Paper-Chell-Glass-Stratton”—the ‘paper’ refers to the rock-paper-scissors earlier (confirmed with a later Nai reference). She’s only done this 4 times now? (this being replication or the mirror chamber)
earlier “The rational decision for a selfish agent instead becomes...” is implying the rational decision is to execute the original—presumably this is an option the consenter always has? like they get to choose which one is killed? Why would that be an option? Why not just have a single button that, when they both press it, kills the replica; no choice in the matter.
> Scissors: “I still don’t see it.”
Scissors is slower so scissors dies?
> Paper wonders, what does it feel like to be… more? If there were two of you, rather than just one, wouldn’t that mean something? What if there were another, but it were different?… so that-
I thought this was Paper thinking not wondering aloud. In that light
> Scissors: “What would it feel like? To be… More?… What if there were two of you, and one of me? Would you know?”
looks like partial mind reading or something, like super mental powers (which shouldn’t be a property of running a brain 3x over but I’m trying to find out why they concluded Scissors was the original)
> Each instance has now realised that the replica- its brain being physically more massive- has a higher expected anthropic measure than the original.
At this point in the story isn’t the idea that it has a higher anthropic measure b/c it’s 3 brains interleaved, not 1? while the parenthetical bit (“its brain … massive”) isn’t a reason? (Also, the mass thing that comes in later; what if they made 3 brains interleaved with the total mass of one older brain?)
Anyway, I suspect answering these issues won’t be necessary to get an idea of anthropic measure.
(continuing on)
> Anthropic measure really was the thing that caused consenter originals to kill themselves.
I don’t think this is rational FYI
> And if that wasn’t true of our garden, we would look out along the multiverse hierarchy and we would know how we were reflected infinitely, in all variations.
> [...]
> It became about relative quantities.
You can’t take relative quantities of infinities or subsets of infinities (it’s all 100% or 0%, essentially). You can have *measures*, though. David Deutsch’s Beginning of Infinity goes into some detail about this—both generally and wrt many worlds and the multiverse.
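As a concrete example of the kind of thing a measure buys you (my own illustration, not from BoI): the even numbers and the naturals are both infinite, so naive counting tells you nothing about their relative sizes, but the natural density of the evens is still a perfectly meaningful 1/2.

```python
# Natural density: the limiting fraction of numbers up to N with some property.
# Both the evens and the naturals are infinite sets, yet the relative measure
# of the evens within the naturals is 1/2.

def density_of_evens(n: int) -> float:
    return sum(1 for k in range(1, n + 1) if k % 2 == 0) / n

for n in (10, 1_000, 100_000):
    print(n, density_of_evens(n))  # 0.5 each time (n is even here)
```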
In what way are the epistemologies actually in conflict?
Well, they disagree on how to judge ideas, and why ideas are okay to treat as ‘true’ or not.
There are practical consequences to this disagreement; some of the best CR thinkers claim MIRI are making mistakes that are detrimental to the future of humanity+AGI, for **epistemic** reasons no less.
My impression is that it is more just a case of two groups of people who maybe don’t understand each other well enough, rather than a case of substantiative disagreement between the useful theories that they have, regardless of what DD thinks it is.
I have a sense of something like this, too, both in the way LW and CR “read” each other, and in the more practical sense of agreement in the outcome of many applications.
I do still think there is a substantive disagreement, though. I also think DD is one of the best thinkers wrt CR and broadly endorse ~everything in BoI (there are a few caveats, a typo and improvements to how-to-vary, at least; I’ll mention if more come up. The yes/no stuff I mentioned in another post is an example of one of these caveats). I mention endorsing BoI b/c if you wanted to quote something from BoI it’s highly likely I wouldn’t have an issue with it (so is a good source of things for critical discussion).
Bayes does not disagree with true things, nor does it disagree with useful rules of thumb.
CR agrees here, though there is a good explanation of “rules of thumb” in BoI that covers how, when, and why rules of thumb can be dangerous and/or wrong.
Whatever it is you have, I think it will be conceivable from bayesian epistemological primitives, and conceiving it in those primitives will give you a clearer idea of what it really is.
This might be a good way to try to find disagreements between BE (Bayesian Epistemology) and CR in more detail. It also tests my understanding of CR (and maybe a bit of BE too).
I’ve given some details on the sorts of principles in CR in my replies^1, if you’d like to try this do you have any ideas on where to go next? I’m happy to provide more detail with some prompting about the things you take issue with or you think need more explanation / answering criticisms.
[1]: or, at least my sub-school of thought; some of the things I’ve said are actually controversial within CR, but I’m not sure they’ll be significant.
\usepackage{cleveref}
Cool, thanks. I think I was missing \usepackage{cleveref}. I actually wrote the post in latex (the post for which I asked this question), but the lesswrong docs on using latex are lacking. For example, they don’t tell you they support importing stuff and don’t list what is supported.
\Cref{eq:1} is an amazing new discovery; before Max Kaye, no one grasped the perfect and utter truth of \cref{eq:1}.
I use crefs in the .tex file linked above. I suppose I should have been more specific and asked “does anyone know how to label equations and reference them on lesswrong?” instead.
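For anyone else who hits this, here’s a minimal standalone example of the labelling/referencing I was after in standard LaTeX (whether LessWrong’s renderer supports all of it is a separate question):

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{cleveref} % load cleveref after amsmath

\begin{document}

\begin{equation}\label{eq:pythagoras}
  a^2 + b^2 = c^2
\end{equation}

% \cref produces e.g. "eq. (1)"; \Cref produces "Equation (1)" for sentence starts.
As \cref{eq:pythagoras} shows, the sides are related.
\Cref{eq:pythagoras} is the one we want.

\end{document}
```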
> CR says that truth is objective
I’d say bayesian epistemology’s stance is that there is one completely perfect way of understanding reality, but that it’s perhaps provably unattainable to finite things.
You cannot disprove this, because you have not attained it. You do not have an explanation of quantum gravity or globally correlated variation in the decay rate of neutrinos over time or any number of other historical or physical mysteries.
[...]
It’s good to believe that there’s an objective truth and try to move towards it, but you also need to make peace with the fact that you will almost certainly never arrive at it
Yes, knowledge creation is an unending, iterative process. It could only end if we come to the big objective truth, but that can’t happen (the argument for why is in BoI—the beginning of infinity).
We sometimes talk about aumann’s agreement theorem, the claim that any two bayesians who, roughly speaking, talk for long enough, will eventually come to agree about everything.
I think this is true of any two *rational* people with sufficient knowledge, and it’s rationality, not Bayesianism specifically, that’s important. If two partially *irrational* bayesians talk, then there’s no reason to think they’d reach agreement on ~everything.
There is a subtle case with regards to creative thought, though: take two people who agree on ~everything. One of them has an idea, they now don’t agree on ~everything (but can get back to that state by talking more).
WRT “sufficient knowledge”: the two ppl need methods of discussing which are rational, and rational ways to resolve disagreements and impasse chains. They also need attitudes about solving problems: namely, that any problem they run into in the discussion is able to be solved, and that one or both of them can come up with ways to deal with *any* problem when it arises.
> taken to logical conclusions it means roughly that all our theories are wrong in an absolute sense
Which means “wrong” is no longer a meaningful word. Do you think you can operate without having a word like “wrong”? Do you think you can operate without that concept?
If it were meaningless I wouldn’t have had to add “in an absolute sense”. Just because an explanation is wrong in an *absolute* sense (i.e. it doesn’t perfectly match reality) does not mean it’s not *useful*. Fallibilism generally says it’s okay to believe things that are false (which, in that absolute sense, all explanations are); however, there are conditions on when that’s okay, like: there are no known unanswered criticisms and no alternatives.
Since BoI there has been more work on this problem and the reasoning around when to call something “true” (practically speaking) has improved—I think. Particularly:
- Knowledge exists relative to *problems*
- Whether knowledge applies or is correct or not can be evaluated rationally because we have *goals* (sometimes these goals are not specific enough, and there are generic ways of making your goals arbitrarily specific)
- Roughly: true things are explanations/ideas which solve your problem, have no known unanswered criticism (i.e. are not refuted), and no alternatives which have no known unanswered criticisms
- Something is wrong if the conjecture that it solves the problem is refuted (and that refutation is unanswered)
- Note: a criticism of an idea is itself an idea, so it can be criticised (i.e. the first criticism is refuted by a second criticism). This can be recursive and potentially go on forever (tho we know ways to make sure they don’t).
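A toy sketch of that recursion (my own illustration, not anything from BoI or yes/no philosophy verbatim): an idea counts as refuted exactly when it has at least one criticism that is itself non-refuted, and criticisms are ideas too, so the same check applies to them.

```python
# Toy model of the refuted/non-refuted recursion. Criticisms are ideas too, so
# whether a criticism "counts" depends on whether it has been answered (i.e.
# refuted) in turn. Assumes a finite, acyclic criticism graph, which real
# discussions aren't guaranteed to give you -- illustration only.

criticisms = {
    "idea A": ["criticism 1", "criticism 2"],
    "criticism 1": ["answer to criticism 1"],  # criticism 1 has been answered
    "criticism 2": [],                         # criticism 2 stands unanswered
    "answer to criticism 1": [],
}

def refuted(idea: str) -> bool:
    """An idea is refuted iff at least one criticism of it is non-refuted."""
    return any(not refuted(c) for c in criticisms.get(idea, []))

print(refuted("idea A"))  # True: criticism 2 is unanswered, so idea A is refuted
```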
I think DD sometimes plays inflamatory word games, defining things poorly on purpose.
I think he’s in a tough spot to try and explain complex, subtle relationships in epistemology using a language where the words and grammar have been developed, in part, to be compatible with previous, incorrect epistemologies.
I don’t think he defines things poorly (at least typically), and I think he would acknowledge an incomplete/fuzzy definition if he provided one. (Note: one counterexample is enough to refute this claim I’m making.)
> there are rational ways to choose *exactly one* explanation (or zero if none hold up)
If you point a gun at a bayesian’s head and force them to make a bet against a single outcome, they’re always supposed to be able to. It’s just, there are often reasons not to.
I think you misunderstand me.
let’s say you wanted a pet, we need to make a conjecture about what to buy you that will make you happy (hopefully without developing regret later). the possible set of pets to start with are all the things that anyone has ever called a pet.
with something like this there will be lots of other goals, background goals, which we need to satisfy but don’t normally list. An example is that the pet doesn’t kill you, so we remove snakes, elephants, and other things that might hurt you. there are other background goals like life of the pet or ongoing cost; adopting you a cat with operable cancer isn’t a good solution.
there are maybe other practical goals too, like it should be an animal (no pet rocks), should be fluffy (so no fish, etc), shouldn’t cost more than $100, and yearly cost is under $1000 (excluding medical but you get health insurance for that).
maybe we do this sort of refinement a bit more and get a list like: cat, dog, rabbit, mouse
you might be *happy* with any of them, but can you be *more happy* with one than any other; is there a *best* pet? **note: this is not an optimisation problem** b/c we’re not turning every solution into a single unit (e.g. your ‘happiness index’); we’re providing *decisive reasons* for why an option should or shouldn’t be included. We’ve also been using this term “happy” but it’s more than just that, it’s got other important things in there—the important thing, though, is that it’s your *preference* and it matches that (i.e. each of the goals we introduce are in fact goals of yours; put another way: the conditions we introduce correspond directly and accurately to a goal)
this is the sort of case where there’s no gun to anyone’s head, but we can continue to refine down to a list of exactly **one** option (or zero). let’s say you wanted an animal you could easily play with → then rabbit, mouse are excluded, so we have options: cat, dog. If you’d prefer an animal that wasn’t a predator—both cat, dog excluded and we get to zero (so we need to come up with new options or remove a goal). If instead you wanted a pet that you could easily train to use a litter tray, well we can exclude a dog so you’re down to one. Let’s say the litter tray is the condition you imposed.
What happens if I remember ferrets can be pets and I suggest that? well now we need a *new* goal to find which of the cat or ferret you’d prefer.
Note: for most things we don’t go to this level of detail b/c we don’t need to; like if you have multiple apps to choose from that satisfy all your goals you can just choose one. If you find out a reason it’s not good, then you’ve added a new goal (if you weren’t originally mistaken, that is) and can go back to the list of other options.
Note 2: The method and framework I’ve just used wrt the pet problem is something called yes/no philosophy and has been developed by Elliot Temple over the past ~10+ years. Here are some links:
Argument · Yes or No Philosophy, Curiosity – Rejecting Gradations of Certainty, Curiosity – Critical Rationalism Epistemology Explanations, Curiosity – Critical Preferences and Strong Arguments, Curiosity – Rationally Resolving Conflicts of Ideas, Curiosity – Explaining Popper on Fallible Scientific Knowledge, Curiosity – Yes or No Philosophy Discussion with Andrew Crawshaw
Note 3: During the link-finding exercise I found this: “All ideas are either true or false and should be judged as refuted or non-refuted and not given any other status – see yes no philosophy.” (credit: Alan Forrester) I think this is a good way to look at it; *technically and epistemically speaking:* true/false is not a judgement we can make, but refuted/non-refuted *is*. we use refuted/non-refuted as a proxy for false/true when making decisions, because (as fallible beings) we cannot do any better than that.
I’m curious about how a bayesian would tackle that problem. Do you just stop somewhere and say “the cat has a higher probability so we’ll go with that”? Do you introduce goals like I did to eliminate options? Is the elimination of those options equivalent to something like: reducing the probability of those options being true to near-zero? (or absolute zero?) Can a bayesian use this method to eliminate options without doing probability stuff? If a bayesian *can*, what if I conjecture that it’s possible to *always* do it for *all* problems? If that’s the case there would be a way to decisively reach a single answer—so no need for probability. (There’s always the edge case that there was a mistake somewhere, but I don’t think there’s a meaningful answer to problems like “P(a mistake in a particular chain of reasoning)” or “P(the impact of a mistake is that the solution we came to changes)”. Note: those P(__) statements are within a well-defined context, like an exact and particular chain of reasoning/explanation.)
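Here’s a minimal sketch of the elimination process from the pet example above (the goals and options are just the ones I used; the point is that each goal is decisive rather than a weight in an optimisation):

```python
# Sketch of the yes/no-style elimination from the pet example: each goal is a
# decisive criterion, and any option failing a goal is excluded outright.
# Ending with exactly one option gives an answer; ending with zero means we
# need new options or need to reconsider a goal.

options = {"cat", "dog", "rabbit", "mouse"}

goals = [
    ("easy to play with", lambda pet: pet in {"cat", "dog"}),
    ("easily trained to use a litter tray", lambda pet: pet in {"cat", "rabbit"}),
]

for name, satisfies in goals:
    options = {pet for pet in options if satisfies(pet)}
    print(f"after goal '{name}': {sorted(options)}")
# after goal 'easy to play with': ['cat', 'dog']
# after goal 'easily trained to use a litter tray': ['cat']
```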
Why believe anything!
So we can make decisions.
The great thing is that you don’t need to have beliefs to methodically do your best to optimize expected utility
Yes you do—you need a theory of expected utility; how to measure it, predict it, manipulate it, etc. You also need a theory of how to use things (b/c my expected utility of amazing tech I don’t know how to use is 0). You need to believe these theories are true, otherwise you have no way to calculate a meaningful value for expected utility!
You can operate well amid uncertainty
Yes, I additionally claim we can operate **decisively**.
In conclusion I don’t see many substantial epistemological differences.
It matters more for big things, like SENS and MIRI. Both are working on things other than key problems; there is no good reason to think they’ll make significant progress b/c there are other more foundational problems.
I agree practically a lot of decisions come out the same.
I’ve heard that DD makes some pretty risible claims about the prerequisites to creative intelligence (roughly, that values must, from an engineering feasibility perspective, be learned, that it would be in some way hard to make AGI that wouldn’t need to be “raised” into a value system by a big open society, that a thing with ‘wrong values’ couldn’t participate in an open society [and open societies will be stronger] and so wont pose a major threat), but it’s not obvious to me how those claims bear at on bayesian epistemology.
I don’t know why they would be risible—nobody has a good reason why his ideas are wrong, to my knowledge. They refute a lot of the fear-mongering that happens about AGI. They provide reasons for why a paperclip machine isn’t going to turn all matter into paperclips. They’re important because they refute big parts of theories from thinkers like Bostrom. That’s important because time, money, and effort are being spent in the course of taking Bostrom’s theories seriously, even though we have good reasons to think they’re not true. That could be time, money, and effort spent on more important problems, like figuring out how creativity works. Solving that problem is what would actually lead to the creation of an AGI.
Calling unanswered criticisms *risible* seems irrational to me. Sure unexpected answers could be funny the first time you hear them (though this just sounds like ppl being mean, not like it was the punchline to some untold joke) but if someone makes a serious point and you dismiss it because you think it’s silly, then you’re either irrational or you have a good, robust reason it’s not true.
[...] and a real process of making one AGI would tend to take a long time and involve a lot of human intervention?
He doesn’t claim this at all. From memory the full argument is in Ch7 of BoI (though has dependencies on some/all of the content in the first 6 chapters, and some subtleties are elaborated on later in the book). He expressly deals with the case where an AGI can run like 20,000x faster than a human (i.e. arbitrarily fast). He also doesn’t presume it needs to be raised like a human child or take the same resources/attention/etc.
Have you read much of BoI?
But maybe there could be something reasonably describable as a bayesian method. But I don’t work with enough with non-bayesian philosophers to be able to immediately know how we are different, well enough to narrow in on it.
I don’t know how you’d describe Bayesianism atm but I’ll list some things I think are important context or major differences. I might put some things in quotes as a way to be casual but LMK if any part is not specific enough or ambiguous or whatever.
both CR and Bayesianism answer Qs about knowledge and judging knowledge; they’re incompatible b/c they make incompatible claims about the world but overlap.
- CR says that truth is objective
- explanations are the foundation of knowledge, and it’s from explanations that we gain predictive power
- no knowledge is derived from the past; that’s an illusion b/c we’re already using pre-existing explanations as foundations
- new knowledge can be created to explain things about the past we didn’t understand, but that’s new knowledge in the same way the original explanation was once new knowledge
- e.g. the axial tilt theory of seasons; no amount of past experience helped understand what’s *really* happening, someone had to make a conjecture in terms of geometry (and maybe Newtonian physics too)
- when we have two explanations for a single phenomenon they’re either the same, both wrong, or one is “right”
- “right” is different from “true”—this is where fallibilism comes in (note: I don’t think you can talk about CR without talking about fallibilism; broadly they’re synonyms)
- taken to logical conclusions it means roughly that all our theories are wrong in an absolute sense and we’ll discover more and better explanations about the universe to explain it
- this includes ~everything: anything we want to understand requires an explanation: quantum physics, knowledge creation, computer science, AGI, how minds work (which is actually the same general problem as AGI) - including human minds, economics, why people choose particular ice-cream flavors
- DD suggests in *the beginning of infinity* that we should rename scientific theories scientific “misconceptions” because that’s more accurate
- anyone can be mistaken about anything
- there are rational ways to choose *exactly one* explanation (or zero if none hold up)
- if we have a reason that some explanation is false, then there is no amount of “support” which makes it less likely to be false (this is what is meant by ‘criticism’). no objectively true thing has an objectively true reason that it’s false.
- so we should believe only those things for which there are no unanswered criticisms
- this is why some CR ppl are insistent on finishing and concluding discussions—if two people disagree then one must have knowledge of why the other is wrong, or they’re both wrong (or both don’t know enough, etc)
- to refuse to finish a discussion is either denying the counterparty the opportunity to correct an error (which was evidently important enough to start the discussion about) - this is anti-knowledge and irrational - *or* it’s to deny that you have an error (or that the error can be corrected), which is also anti-knowledge and irrational.
- there are maybe things to discuss about practicality, but even if there are good reasons to drop conversations for practical purposes sometimes, it doesn’t explain why it happens so much.
that was less focused on differences/incompatibilities than I had in mind originally but hopefully it gives you some ideas.
Is the bayesian method… trying always to understand things on the math/decision theory level? Confidently; deutsch is not doing that.
Unless it’s maths/decision theory related, that’s right. CR/Fallibilism is more about reasoning; like an internal-contradiction means an idea is wrong; there’s 0 probability it’s correct. Maybe someone alters the idea so it doesn’t have a contradiction which means it needs to be judged again.
His understanding of AGI is utterly anthropomorphic
I don’t think that’s the case. I think his understanding/theories of AGI don’t have anything to do with humans (besides that we’d create one—excluding aliens showing up or whatever). There’s a separate explanation for why AGI isn’t going to arise randomly e.g. out of a genetic ML algorithm.
If that argument doesn’t make sense to you, well that might mean that we’ve just identified something that bayesian/decision theoretic reasoning can do, that can’t be done without it.
Well, we don’t agree about fish, but whether it makes sense or not depends on your meaning. If you mean that I understand your reasoning, I think I do. If you mean that I think the reasoning is okay, maybe from your principles but I don’t think it’s *right*. Like I think there are issues with it such that the explanation and conclusion shouldn’t be used.
ps: I realize that’s a lot of text to dump all at once, sorry about that. Maybe it’s a good idea to focus on one thing?
[...] critrats [...] let themselves wholely believe probably wrong theories in the expectation that this will add up to a productive intellectual ecosystem
As someone who thinks you’d think they’re a ‘critrat’, this feels wrong to me. I can’t speak for other CR ppl, ofc, and some CR ppl aren’t good at it (like any epistemology), but for me I don’t think what you describe would add up to “a productive intellectual ecosystem”.
I think it’s fairly clear from this that he doesn’t have solomonoff induction internalized, he doesn’t know how many of his objection to bayesian metaphysics it answers.
I suspect, for DD, it’s not about *how many* but *all*. If I come up with 10 reasons Bayesianism is wrong (so 10 criticisms), and 9 of those get answered adequately, the 1 that’s still left is as bad as the 10; *any* unanswered criticism is a reason not to believe an idea. So convincing DD (or any decent Popperian) that an idea is wrong can’t rely on incomplete rebuttals; an idea needs to be *uncriticised* (answered criticisms don’t count here, though those answers could be criticised; that entire chain can be long and all of it needs to be resolved). There are also ideas answering questions like “what happens when you get to an ‘I don’t know’ point?” or “what happens with two competing ideas, both of which are uncriticised?”
Clarifying point: some ideas (like MWI, string theory, etc) are very difficult to criticise by showing a contradiction with evidence, but the fact 2 competing ideas exist means they’re either compatible in a way we don’t realise or they offer some criticisms of each other, even if we can’t easily judge the quality of those criticisms at the time.
Note: I’m not a Bayesian; DD’s book *The Beginning of Infinity* convinced me that Popper’s foundation for epistemology (including the ideas built on top of it / improved it) was better in a decisive way.
Evidence to the contrary, please?
here
Before October 2014, copyright law permitted use of a work for the purpose of criticism and review, but it did not allow quotation for other more general purposes. Now, however, the law allows the use of quotation more broadly. So, there are two exceptions to be aware of, one specifically for criticism and review and a more general exception for quotation. Both exceptions apply to all types of copyright material, such as books, music, films, etc.
https://www.copyrightuser.org/understand/exceptions/quotation/ - first link on google. there are more details about conditions there, and particularly what you’d have to show in order to prove infringement. Good luck ¯\_(ツ)_/¯
Quoting is a copyright violation in every jurisdiction I know of, if it’s done en masse.
“en masse” is vague.
Wow, you know about a lot of different legal frameworks. How does copyright violation work in Tuvalu and Mauritius? I’ve always wondered.
-- general comments --
It’s trivial to see that your idea of quoting is incomplete because most instances of quoting you see aren’t copyright violations (like news, youtube commentary, academic papers, whatever).
However, you obviously care about copyright violations deeply, so I suggest you get in touch with google too; they are worse offenders.
Since you care about *COPYRIGHT INFRINGEMENT* and not *BEING CRITICISED* surely this blatant infringement of your copyright is a much larger priority. The probability of someone seeing material which is infringing your copyright is orders of magnitude larger on google than on a small random website.
---
Edit/update/mini-post-mortem: I made this post because of an emotional reaction to the post above it by @gjm, which I shouldn’t have done. Some points were fine, but I was sarcastic (“Wow, you …”) and treated @gjm’s ideas unfairly, e.g. by using language like “trivial” to make his ideas sound less reasonable than they might be (TBH IANAL so really it’s dishonest of me to act with such certainty). Those statements were socially calibrated (to some degree) to try and either upset/annoy gjm or impact stuff around social status. Since I’d woken up recently (like less than 30min before posting) and was emotional I should have known better than to post those bits (maybe I should have avoided posting at all). There’s also the last paragraph, “Since you care about …” part, which at best is an uncharitable interpretation and at worst is putting words in gjm’s mouth (which isn’t okay).
For those reasons I’d like to apologise to gjm for those parts. I feel it’d be dishonest to remove them so I’m adding this update instead.
Yeah, almost everyone who we ban who has any real content on the site is warned. It didn’t feel necessary for curi, because he has already received so much feedback about his activity on the site over the years (from many users as well as mods), and I saw very little probability of things changing because of a warning.
I think you’re denying him an important chance to do error correction via that decision. (This is a particularly important concept in CR/FI)
curi evidently wanted to change some things about his behaviour, otherwise he wouldn’t have updated his commenting policy. How do you know he wouldn’t have updated it more if you’d warned him? That’s exactly the type of criticism we (CR/FI) think is useful.
That sort of update is exactly the type of thing that would be reasonable to expect next time he came back (considering that he was away for 2 weeks when the ban was announced). He didn’t want to be banned, and he didn’t want to have shitty discussions, either. (I don’t know those things for certain, but I have high confidence.)
What probability would you assign to him continuing just as before if you said something like “If you keep continuing what you’re doing, I will ban you. It’s for these reasons.” Ideally, you could add “Here they are in the rules/faq/whatever”.
Practically, the chance of him changing is lower now because there isn’t any point if he’s never given any chances. So in some ways you were exactly right to think there’s low probability of him changing, it’s just that it was due to your actions. Actions which don’t need to be permanent, might I add.
The above post explicitely says that the ban isn’t a personal judgement of curi. It’s rather a question of whether it’s good or not to have curi around on LessWrong and that’s where LW standards matter.
Isn’t it even worse then b/c no action was necessary?
But more to the point, isn’t the determination X person is not good to have around a personal judgement? It doesn’t apply to everyone else.
I think what habryka meant was that he wasn’t making a personal judgement.
Arguably, if there is something truly wrong with the list, I should have an issue with it.
This is non-obvious. It seems like you are extrapolating from yourself to everyone else. In my model, how much you would mind being on such a list is largely determent by how much social anxiety you generally feel. I would very much mind being on that list, even if I felt like it was justified.
I think this is fair, and additionally I maybe shouldn’t have used the word “truly”; it’s a very laden word. I do think that, on the balance of probabilities, my case does reduce the likelihood of something being foundationally wrong with it, though. (Note: I’ve said this in, what I think, is a LW friendly way. I’d say it differently on FI.)
One thing I do think, though, is that people’s social anxiety does not make things in general right or wrong, but can be decisive wrt thinking about a single action.
Another thing to point out is anonymous participation in FI is okay, it’s reasonably easy to use an anonymous/pseudonymous email to start with. curi’s blog/forum hybrid also allows for anonymous posting. FI is very pro-free-speech.
Knowing the existence of the list (again, even if it were justified) would also make me uneasy to talk to curi.
I think that’s okay, curi isn’t trying to attract everyone as an audience, and FI isn’t designed to be a forum which makes people feel comfortable, as such. It has different goals from e.g. LW or a philosophy subreddit.
I think we’d agree that norms at FI aren’t typical and aren’t for everyone. It’s a place where anyone can post, but that doesn’t mean that everyone should, sorta thing.
If it’s possible to use decision theory in a deterministic universe, then MWI doesnt make things worse except by removing refraining. However, the role of decision theory in a deterministic universe is pretty unclear, since you can’t freely decide to use it to make a better decision than the one you would have made anyway.
[...]
Deterministic physics excluded free choice. Physics doesn’t.
MWI is deterministic over the multiverse, not per-universe.
A combination where both are fine or equally predicted fails to be a hypothesis.
Why? If I have two independent actions—flipping a coin and rolling a 6-sided die (d6) - am I not able to combine “the coin lands heads 50% of the time” and “the die lands even (i.e. 2, 4, or 6) 50% of the time”?
If you have partial predictions of X1XX0X and XX11XX you can “or” them into X1110X.
This is (very close to) a binary “or”, I roughly agree with you.
But if you try to combine 01000 and 00010 the result will not be 01010 but something like 0X0X0.
This is sort of like a binary “and”. Have the rules changed? And what are they now?
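For reference, here’s how I’m reading the two combination rules in your examples (my sketch; correct me if this isn’t what you mean): the first fills in positions where only one side makes a prediction, the second keeps a prediction only where two full predictions agree.

```python
# Two different ways of combining predictions over bit positions, where "X"
# means "no prediction at this position". This is my reading of the examples
# "X1XX0X or XX11XX -> X1110X" and "01000 with 00010 -> 0X0X0".

def merge_partial(a: str, b: str) -> str:
    """Take whichever side makes a prediction (assumes the sides never conflict)."""
    return "".join(y if x == "X" else x for x, y in zip(a, b))

def keep_agreement(a: str, b: str) -> str:
    """Keep a prediction only where both sides agree; otherwise predict nothing."""
    return "".join(x if x == y else "X" for x, y in zip(a, b))

print(merge_partial("X1XX0X", "XX11XX"))  # -> X1110X
print(keep_agreement("01000", "00010"))   # -> 0X0X0
```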
I wanted to reply to this because I don’t think it’s right to judge curi the way you have. Periergo I don’t have an issue w/. (it’s a sockpuppet acct anyway)
I think your decision should not go unquestioned/uncriticized, which is why I’m posting. I also think you should reconsider curi’s ban under a sort of appeals process.
Also, the LW moderation process is evidently transparent enough for me to make this criticism, and that is notable and good. I am grateful for that.
You are judging curi and FI (Fallible Ideas) via your standards (LW standards), not FI’s standards. I think this is problematic.
I’d like to note I am on that list. (like 1⁄2 way down) I am also a public figure in Australia, having founded a federal political party based on epistemic principles with nearly 9k members. I am okay with being on that list. Arguably, if there is something truly wrong with the list, I should have an issue with it. I knew about being on that list earlier this year, before I returned to FI. Being on the list was not a factor in my decision.
There is nothing immoral or malicious about curi.us/2215. I can understand why you would find it distasteful, but that’s not a decisive reason to ban someone or condemn their actions.
A few hours ago, curi and I discussed elements about the ban and curi.us/2215 on his stream. I recommend watching a few minutes starting at 5:50 and at 19:00, for transparency you might also be interested in 23:40 → 24:00. (you can watch on 2x speed, should be fine)
Particularly, I discuss my presence on curi.us/2215 at 5:50
You say:
There are 33 by my count (including me). The list spans a decade, and is there for a particular purpose, and it is not to publicly shame people into returning, or to be mean for the sake of it. I’d like to point out some quotes from the first paragraph of curi.us/2215:
Notably, you don’t end up on the list if you are active. Also, although it’s not explicitly mentioned in the top paragraph; a crucial thing is that those on the list have left and avoided discussion about it. Discussion is much more important in FI than most philosophy forums—it’s how we learn from each other, make sure we understand, offer criticism and assist with error correction. You’re not under any obligation to discuss something, but if you have criticisms and refuse to share them: you’re preventing error correction; and if you leave to evade criticism then you’re not living by your values and philosophy.
The people listed on curi.us/2215 have participated in a public philosophy forum for which there are established norms that are not typical and are different from LW. FI views the act of truth-seeking differently. While our (LW/FI) schools of thought disagree on epistemology, both schools have norms that are related to their epistemic ideas. Ours look different.
It is unfair to punish someone for an act done outside of your jurisdiction under different established norms. If curi were putting LW people on his list, or publishing off-topic stuff at LW, sure, take moderation action. None of those things happened. In fact, the main reason you’ve provided for even knowing about that list is via the sockpuppet you banned.
Sockpuppet accounts are not used to make the lives of their victims easier. By banning curi along with Periergo you have facilitated a (minor) victory for Periergo. This is not right.
THIS IS A SERIOUS ALLEGATION! PLEASE PROVIDE QUOTES
curi prefers to discuss in public so they should be easy to find and verify. I have never known curi to threaten people. He may criticise them, but he does not threaten them.
Notably, curi has consistently and loudly opposed violence and the initiation of force, if people ask him to leave them alone (provided they haven’t e.g. committed a crime against him), he respects that.
This is not a reason to ban him, or anyone. Being disliked is not a reason for punishment.
“a history of threats against people who engage with him” has not been established or substantiated.
I believe he is. As far as I can tell he’s gone to great personal expense and trouble to keep FI alive for no other reason than that his sense of morality demands it. (That might be over simplifying things, but I think the essence is the same. I think he believes it is the right thing to do, and it is a necessary thing to do)
He has gained karma since returning to LW briefly. I think you should retract the part about him having negative karma b/c it misrepresents the situation. He could have made a new account and he would have positive karma now. That means your judgement is based on past behaviour that was already punished.
This is double jeopardy. (Edit: after some discussion on FI it looks like this isn’t double jeopardy, just double punishment. Double jeopardy specifically refers to being on trial for the same offense twice, not being punished twice.) Moreover, curi is being punished for being honest and transparent. If he had registered a new account and hidden his identity, would you have banned him only based on his actions this past 1-2 months? If you can say yes, then fine, but I don’t think your argument holds in this case; the only part that is verifiable is based on your disapproval of his discussion methods. Disagreeing with him is fine. I think a proportionate response would be a warning.
As it stands no warning was given, and no attempt to learn his plans was made. I think doing that would be proportionate and appropriate. A ban is not.
It is significant that curi is not able to discuss this ban himself. I am voluntarily doing this, of my own accord. He was not able to defend himself or provide explanation.
This is especially problematic as you specifically say you think he was improving compared with his conduct several years ago.
This alone is not enough. A warning is proportionate.
Unpopularity is no reason for a ban
How is this different to pre-crime?
I think, given he had deliberately changed his modus operandi weeks ago and has not posted in 13 days, this is unfair and overly judgmental.
You go on to say:
What could curi have done differently which would have tipped the scales? If there is no acceptable thing he could have done, why was action not taken weeks ago when he was active?
I believe it is fundamentally unjust to delay action in this fashion without talking with him first. curi has an incredibly long track record of discussion, he is very open to it. He is not someone who avoids taking responsibility for things; quite the opposite. If you had engaged him, I am confident he would have discussed things with you.
It makes sense that you want to cultivate the best rational forums you can. I think that is a good goal. However, again, there were other, less extreme and more proportionate actions that could have been taken first, especially seeing as curi had changed his LW discussion policy and was inactive at the time of the ban.
We presumably disagree on the meaning of ‘high standards’, but I don’t think that’s particularly relevant here.
There were many alternative actions you could have taken. For example, a 1-month ban. Restricting curi to only posting on his own shortform. Warning him of the circumstances and consequences under conditions, etc.
I’m glad you’ve mentioned this, but LW is not a court of law and you are not bound to those standards (and no punishment here is comparable to the punishment a court might distribute). I think there are other good reasons for reconsidering curi’s ban.
I think there is a critical point to be made here: you could have taken no action at this time and put a mod-notification for activity on his account. If he were to return and do something you deemed unacceptable, you could swiftly warn him. If he did it again, then a short-term ban. Instead, this is a sledge-sized banhammer used when other options were available. It is a decision that is now publicly on LW and indicates that LW is possibly intolerant of things other than irrationality. I don’t think this is reflective of LW, and I think it reflects poorly on the moderation policies here. I don’t think it needs to be that way, though.
I think a conditional unbanning (i.e. 1 warning, with the next action being a swift short ban) is an appropriate action for the moderation team to make, and I implore you to reconsider your decision.
If you think this is not appropriate, then I request you explain why 2 years is an appropriate length of time, and why Periergo and curi should have identical ban lengths.
The alternative to passivity does not need to be so heavy-handed.
I’d also like to note that curi has published a post on his blog regarding this ban; I read it after drafting this reply: http://curi.us/2381-less-wrong-banned-me