Scientist by training, coder by previous session, philosopher by inclination, musician against public demand.
Team Piepgrass: “Worried that typical commenters at LW care way less than I expected about good epistemic practice. Hoping I’m wrong.”
The issue here is not to be addressed by exegesis of Korzybski.
If the issue is what I think, what could be better?
I am not clear what the issue even is: whether the categories are made by Man or by the World.
In traditional philosophy, there’s a three-way distinction between nominalism, conceptualism and realism. Those are (at least) three different theories intended to explain three sets of issues: the existence of similarities, differences and kinds in the world, the territory; the way concept formation does and should work in humans; and issues to do with truth and meaning, relating the map and the territory.
But conceptualism comes in two varieties. So Gaul is divided into four parts.
On the one hand, there is the theory that correct concepts “carve nature at the joints” or “identify clusters in thingspace”, the theory of Aristotle and Ayn Rand. On the other hand is the “cookie cutter” theory, the idea that the categories are made by (and for) man, Kant’s “Copernican revolution”.
In the first approach, the world/territory is the determining factor, and the mind/map can do no better than reflect it accurately. In the second approach, the mind makes its own contribution.
Which is not to say that it’s all map, or that the mind is entirely in the driving seat. The idea that there is no territory implies solipsism (other people only exist in the territory, which doesn’t exist) and magic (changing the map changes the territory, or at least, future observations). Even if concepts are human constructions, the territory still has a role, which is determining the truth and validity of concepts. Even if the “horse” concept is a human construct, it is more real than the “unicorn” concept, because horses can be observed. In cookie cutter terms, the territory supplies the dough, the map supplies the shape.
So Kantianism isn’t a completely idealistic or all-in-the-map philosophy...in Kant’s own terminology, it’s empirical realism as well as transcendental idealism. It’s not as idealistic as Hegel’s system, for instance. Similarly, Aristotelianism isn’t as realistic as Platonism—Plato holds that there aren’t just mind-independent concepts, but that they dwell in their own independent realm.
So, although the conceptualisms are different from each other, they are both somewhere in the middle.
Liron:
I mean, Bayes and Popper, they’re not like night and day, right?
There are some stark differences.
#The Popperian claim that positive justification is impossible.
#Induction doesn’t exist (or at least, doesn’t matter in science).
#Popper was prepared to consider the existence of propensities (objective probabilities), whereas Bayesians, particularly those who follow Jaynes, believe in determinism and subjective probability.
#Popperian refutation is all or nothing, whereas Bayesian negative information is gradual.
#In Popperism, there can be more than one front-running or most favoured theory, even after the falsified ones have been eliminated, since there aren’t quantifiable degrees of confirmation.
#Explanation
For Popper and Deutsch, theories need to be explanatory, not just predictive. Bayesian confirmation and disconfirmation only target prediction directly—if they are achieving explanation or ontological correspondence, that would be the result of a convenient coincidence.
#Conjectures
For Popperians, the construction of good theoretical conjectures is as important as testing them. Bayesians seem quite uninterested in where hypotheses come from.
#Simplicity
For Deutschians, being hard-to-vary is the preferred principle of parsimony. For Yudkowskians, it’s computational complexity (see the note after this list).
#Error correction
For Popperians, it’s something you actually do.
Popperians like to put forward hypotheses that are easy to refute. Bayesians approve theoretically of “updating”, but dislike objections and criticisms in practice.
#(Long term) prediction is basically impossible.
More Deutsch than Popper—DD believed in the unpredictability of the growth of knowledge: the creation of knowledge is so unpredictable and radical that long-term predictions cannot be made. This is often summarised as “prediction is impossible”. Of course, Bayesians are all about prediction—but the predictive power of Bayes tends only to be demonstrated in toy models, where the ontology isn’t changing under your feet. Their AI predictions are explicitly intuition-based.
#Optimism versus Doom.
Deutsch is highly optimistic that continuing knowledge creation will change the world for the better (a kind of moral realism is a component of this). Yudkowsky thinks advanced AI is our last invention and will kill us all.
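For reference, the Yudkowskian notion of simplicity in the #Simplicity item above is usually formalized as the Solomonoff/universal prior, which weights each hypothesis by the length of the shortest program that computes it (this is standard Solomonoff induction, stated here only for context):

$$P(h) \propto 2^{-K(h)}$$

where $K(h)$ is the Kolmogorov complexity of hypothesis $h$, so hypotheses that are simpler to compute get exponentially more prior weight.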
Which is that Popperianism bottoms out in common sense.
Falsification and fallibilism are quite intuitive to scientists … on the other hand, both ideas took some time to arrive...they weren’t obvious to Aristotle or Bacon.
The non-existence of induction is not common sense.
“So I, I don’t really have such a thing as changing my mind because the state of my mind is always, it’s a playing field of different hypotheses, right? I always have a group of hypotheses and there’s never one that it’s like, oh this is my mind on this one. Every time I make a prediction, I actually have all the different hypotheses weigh in, weighted by their probability, and they all make the prediction together.”
What’s the difference? Is it that updating is spectral, while changing your mind is binary?
I mean, Solomonoff induction does grow its knowledge and grow its predictive confidence, right?
It starts off with omniscience, in the sense of containing every possible hypothesis, and then gets whittled down.
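A minimal sketch of that picture, assuming a toy hypothesis class of coin biases (Solomonoff induction proper is uncomputable, so this only illustrates the “all hypotheses weigh in, weighted by probability” mechanism and the whittling-down):

```python
# Toy Bayesian reasoner: many hypotheses, never a single "mind".
hypotheses = [i / 10 for i in range(11)]                   # candidate coin biases 0.0 .. 1.0
posterior = {h: 1 / len(hypotheses) for h in hypotheses}   # uniform prior

def update(post, heads):
    """Bayes' rule: reweight each hypothesis by the likelihood of the datum."""
    unnorm = {h: p * (h if heads else 1 - h) for h, p in post.items()}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

def predict_heads(post):
    """Every hypothesis weighs in, weighted by its posterior probability."""
    return sum(h * p for h, p in post.items())

for flip in [True, True, False, True]:    # data whittles the field down
    posterior = update(posterior, flip)   # biases 0.0 and 1.0 are eliminated outright

print(predict_heads(posterior))           # one blended prediction from the whole field
```

Updating here is spectral (the weights shift continuously); nothing in the loop ever “changes its mind” discretely.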
…one of the good Bayesian critiques of frequentism that I like. I totally agree with you that the world is deterministic, non-stochastic, and randomness doesn’t actually occur in nature. I agree.
Determinism is not a fact.
Liron Shapira: There’s just no epistemic value to treating the universe as ontologically, fundamentally non-deterministic, and the strongest example I’ve seen of that is in quantum theory, like the idea that a quantum collapse is ontologically fundamental.
There’s always epistemological value in believing the truth. If the universe is not deterministic, a rationalist should want to believe so.
What I’m saying is probability is not the best tool to reason about the future precisely because the future is chaotic and unpredictable, right?
Depends on whether it’s near or far.
philosophy is interesting, gives us some useful terminology, and makes it clearer why people may disagree about morality — but at the end of the day, it doesn’t actually answer any moral questions for us
That doesn’t mean something else works.
it’s just stamp-collecting alternative answers,
It’s something a bit better than that, and a lot worse than finding the One True Answer instantly.
and the best one can do is agree to disagree. Philosophers do talk about moral intuition,
Reliance on intuition exists because there’s no other way of pinning down what is normatively right and wrong. Good and evil are not empirically observable properties. (There’s a kind of logical rather than empirical approach to moral realism).
but they are very aware that different philosophers often interpret this differently, and even disagree on whether it could have any actual truth content.
Yep. Again, that doesn’t mean there’s a simple shortcut.
However, there is another rational approach to morality: evolutionary ethics, a subfield of evolutionary psychology.
That’s philosophy as well as science:
“Evolutionary ethics tries to bridge the gap between philosophy and the natural sciences”—IEP.
This describes why, on evolutionary grounds, one would expect species of social animals to evolve certain moral heuristics, such as a sense of fairness about interactions between members of the same group, or an incest taboo, or friendship. So it gives us a rational way to discuss why humans have moral intuitions, and even to predict what those are likely to be.
But not a way to tell if any of that is really true…a way that solves normative ethics, not just descriptive ethics. In philosophy, it’s called the “open question” argument.
Descriptive ethics is the easy problem. But if you want to figure out what is actually ethical, not just what ethical-style behaviour humans have, you need to solve normative ethics; and if you want to solve normative ethics, you need to know what truth is, so you need philosophy. Of course, the idea that there is some truth to ethics beyond what humans believe is moral realism … and moral realism is philosophy, and arguing against it is engaging with philosophy. So “evolutionary ethics is ethics” is a partly philosophical claim.
This gives us a rational, scientifically-derived answer to some moral questions
Yes, some.
if we predict that humans will have evolved moral intuitions that give a clear and consistent answer to a moral question across cultures (and that answer isn’t a maladaptive error), then it actually has an answer (for humans). For example, having sex with your opposite-sex sibling actually is wrong, because it causes inbreeding which surfaces deleterious recessives. It’s maladaptive behavior, for everyone involved.
According to standard evolutionary ethics, killing all the men and impregnating all the women in a neighbouring tribe is morally right...Also, Genghis Khan was the most virtuous of men.
So I want something well-designed for its purpose, and that won’t lead to outcomes that offend the instinctive moral and aesthetic sensibilities that natural selection has seen fit to endow me with, as a member of a social species (things like a sense of fairness, and a discomfort with bloodshed).
A brief glance at human history shows that those things are far from universal. Fairness about all three of race, social status and gender barely goes back a century. It’s possible that fairness emerges from something like game theory...but, again, that’s a somewhat different theory to pure EE.
That’s basically a different theory. It’s utilitarianism with evolutionary fitness plugged in as the utility function.
Evolutionary ethics provides a neat answer to what philosophers call the “ought-from-is” problem: given a world model that can describe a near-infinite number of possible outcomes/states, how does there arise a moral preference ordering on those outcomes?
That’s not actually the right problem. Naturalized ethics needs to be able to derive true Ought statements from true Is statements. An arbitrary preference ordering simplifies the problem by dropping the requirement for truth.
In order to decide “what we ought to value”, you need to create a preference ordering on moral systems, to show that one is better than another. You can’t use a moral system to do that — any moral system (that isn’t actually internally inconsistent) automatically prefers itself to all other moral systems,
Not necessarily, because moral systems can be judged by rational norms, ontology, etc. (You yourself are probably rejecting traditional moral realism ontologically, on the grounds that it requires non-natural, “queer” objects.)
Or in utilitarian terminology, where does the utility function come from? That’s obviously the key question for value learning: we need a theoretical framework that gives us priors on what human values are likely to be,
Human value or evolutionary value? De facto human values don’t have to be the same as evolutionary values … we can value celibate saints and deprecate Genghis Khan.
The utilitarian version of evolutionary ethics smuggles in an assumption of universalism that doesn’t belong to evolutionary ethics per se.
…and predicts their region of validity. Evolutionary fitness provides a clear, quantifiable preference ordering (per organism, or at least per gene allele),
There’s a difference between the theories that moral value is Sharing My Genes; that it’s Being Well Adapted; that it’s Being in a Contractual Arrangement with Me; and that it’s something I assign at will.
any evolved intelligence will tend to evolve a preference ordering mechanism which is an attempt to model that, as accurately as evolution was able to achieve, and a social evolved intelligence will evolve and develop a group consensus combining and (partially) resolving the preference orders of individual group members into a socially-agreed partial preference ordering.
Evolutionary ethics doesn’t predict that you will care about non-relatives, and socially constructed ethics doesn’t predict you will care about non-group members. Or that you won’t. You can still include them gratuitously.
If you cause a living being pain, then generally you are injuring them in a way that decreases their survival-and-reproductive chances (hot peppers have evolved a defense mechanism that’s an exception: they chemically stimulate pain nerves directly, without actually injuring tissue). But AIs are not alive, and not evolved. They inherently don’t have any evolutionary fitness — the concept is a category error. Darwinian evolution simply doesn’t apply to them.
OK, but the basic version of evolutionary ethics means you shouldn’t care about anything that doesn’t share your genes.
So if you train an AI off our behavior to emulate what we do when we are experiencing pain or pleasure, that has no more reality to it than a movie or an animatronic portrayal of pain or pleasure
Would you still say that if it could be proven that an AI had qualia?
Since it provides a solution to the ought-from-is problem, it also gives us an error theory on human moral intuition; we can identify cases where that’s failing to be correlated with actual evolutionary fitness and misleading us. For example, a sweet tooth is maladaptive when junk food is easily available, since it then leads to obesity and diabetes.
Liberal, Universalist ethics is maladaptive, too, according to old school EE.
Evolutionary ethics similarly provides a clear answer to “what are the criteria for moral patienthood?” — since morality comes into existence via evolution, as a shared cultural compromise agreement reconciling the evolutionary fitness of different tribe-members, if evolution doesn’t apply to something, it doesn’t have evolutionary fitness and thus it cannot be a moral patient.
If there is a layer of social construction on top of evolutionary ethics, you can include anyone or anything as a moral patient. If not, you are back to caring only about those who share your genes.
Now, human moral instincts may often tend to want to treat cute dolls as moral patients (because those trigger our childrearing instincts); but that’s clearly a mistake: they’re not actually children, even though they look cute.
Non-human animals can be treated as moral patients and it’s not necessarily a mistake, since they can be “part of the family”.
But Korzybski was very much aware of relativity theory, which awakened the scientific world from Kant’s dogmatic slumber concerning our ideas of space and time.
The general Kantian approach doesn’t stand or fall by his specific claims about space and time.
WWES? What Would Eliezer Say?
Why should I care? Is he an expert on Korzybski?
As I said in the OP, I regard The Sequences as a worthier successor to K’s magnum opus than any revision of the latter could be.
AFAICS, the issue is still being debated in the rationalsphere, e.g. Scott versus Zach, so it wasn’t settled in the Sequences.
>Every time a thought arises, you “see its true nature”—in my terms, you see that the thought (and its valence) arose from complex idiosyncratic antecedents within the brain algorithm, without any “vitalistic force”.
There is no evidence of brain algorithms, including meditational evidence. No one sees them in meditation. They are an assumption.
They are on the diagram.
So are the categories derived (as per Aristotle and Rand) or imposed (as per Kant and Alexander)?
Adequate AI agents exist, so the problem is soluble at a good-enough level. What is lacking, presumably, is a perfect solution.
Other possible agents may have their own drives or imperatives, but those should not be regarded as “moralities”—that’s the import of the second idea.
He seems to believe that, but I don’t see why anyone else should. It’s like saying English is the only language, or the Earth is the only planet. If morality is having values, any number of entities could have values. If it’s rules for living in groups, ditto. If it’s fairness, ditto.
This is all strictly phrased in computational terms too
It’s not strictly phrased at all...it’s very hard to follow what he’s saying...or particularly computational.
You’re not ultimately limited to utilitarianism: you can use Kantian or Rawlsian arguments to include redheads.
The situation is more complex, and less bad, than you are making it out to be. For instance, the word “qualia” is an attempt to clarify the word “consciousness”, and does have a stipulated meaning, for all that some people ignore it. The contention about words like “qualia” and “valence” is about whether and how they are real, and that is not a semantic issue. Rationalists have a long-term problem of trying to find objective valence in a physical universe, even though Hume’s fork tells you it’s not possible.
If the success of a moral theory ultimately grounds out in intuition, it’s OK to use unintuitiveness to summarily reject a theory.
CEV is group level relativism, not objectivism.
HPEs exist neither in our past light-cone nor our future light-cone; rather, HPEs exist eternally, outside time.
Possibility 4 is a popular theistic metaphysical take, but raises questions of the significance of an eternal higher power. If the higher power exists in a Tegmark IV multiverse way, it’s unclear how it could have effects on our own universe.
A film editor has complete control over the “time” of the movie.
It gives you the correct probabilities for your future observations, as long as you normalize whatever you have observed to one. The difference from Copenhagen is that in Copenhagen there is a singular past which actually is measure 1.0.
Now what’s difficult is figuring out the role of measure in branches which have fully decohered, so that they can no longer observe each other. Whether an “Everett branch” is such a branch is unknown.
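To make the normalization claim concrete: if the amplitudes over the outcomes you could see next, within the branch you have already observed, are $c_i$, then renormalizing that branch to one gives the standard Born probabilities

$$P(i) = \frac{|c_i|^2}{\sum_j |c_j|^2}$$

the same numbers Copenhagen assigns, except that Copenhagen conditions on a singular past of measure 1.0 rather than on a branch of smaller measure.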
Once again, no ontology is actually implied. It’s absolutely trivial to describe the behavior of indeterministic processes in terms of a probability experiment. I’m concentrating on deterministic cases simply because they are trickier.
If that’s what you actually think, the first line should read something like “under circumstances where probability is in the mind”.
You keep missing the point
The point is that a map has to represent the territory.
“And sure, every map is, in a sense, a map of the world”, as you put it.
So if the territory is branching, the map should, too. (A map may include aspects of human knowledge as well.)
I’m proposing a better map, capable of talking about knowledge states and uncertainty in any circumstances.
That’s a disadvantage: one and the same map can’t represent any and every territory.
There may be an ontologically neutral way of doing probability calculations, but it’s not a map, for that reason...more of a tool.
If you think that the framework of probability experiment that I’m outlining in the post fails to account for something that the frameworks of possible worlds manage to account for
The problem is the implied ontology. You haven’t actually proven that probability is only in the mind, and you can’t prove it by methodology alone, because it’s a statement about the territory, not just about probability calculations.
“Possible world” is a term from a map. There may be a referent for it in the territory.
If there is a referent for it in the territory, it is entirely reasonable to say “possible worlds exist”.
But it doesn’t mean that we have to use this particular term to talk about this referent. We may have a better term, instead.
Is it really a win to admit the substance of existing possible worlds, but under a different name?
even if we grant that the universe is utterly deterministic and therefore probability is fully in the map, this map *still* has to correspond to the territory, for which you have to go and look
The map that corresponds to a deterministically branching multiverse has possible worlds. The map that corresponds to a Copenhagen universe has inherent indeterminism.
What I’m saying is that even the talk itself about “possible worlds”—without assumption of their realism—is harmful as this framework leaves us unable to reason about logical uncertainty
Refusing to ever talk about possible worlds is dangerous, because they might exist (they do in MWI) and they might be useful otherwise. What you really have is an argument that they are a poor match for logical uncertainty, which they are, but you are allowed to use different tools for different jobs.
Having dogmatic, non-updatable assumptions is bad (see rationality, passim), and it’s still bad when they are in the direction of determinism, reductionism, etc.
The Born probabilities are in the mind under the MWI! Reality just has the amplitudes
Which are pretty similar. They are objective features of the territory that tell you how likely you are to see things.
And note that MWI features really existing possible worlds, in that it features a multiplicity of existing actual worlds...and what is actual is possible. What MWI removes is chance.
And note that the argument from MWI doesn’t support “probability is in the Mind” as it is usually stated, because it is usually stated as something that is true unconditionally, and MWI is only one possibility.
A core tenet of Bayesianism is that probability is in the mind
That argument never had anything to do with Bayesianism as known to the Rev. Bayes...it’s much more to do with Jaynes and Yudkowsky.
Also, it was never valid...it was pointed out a long time ago that (a form of) probability being in the mind doesn’t imply (a form of) it isn’t in the territory as well.
Armchair arguments can’t prove anything about the territory...you have to look.
The people whose job it is to investigate this sort of thing, physicists, have been unable to decide the issue.
The specific reason for believing in in-the-territory randomness is:
Bell’s theorem—Wikipedia https://en.m.wikipedia.org/wiki/Bell’s_theorem
By the same logic tossing a coin is also deterministic, because if we toss the same coin exactly the same way in exactly the same conditions, the outcome is always the same.
That’s not true because fundamental determinism is true, but because effective determinism at the macroscopic level is.
But beyond that, it allows us to get rid of these “possible worlds” which were leading everyone astray. Now instead of speculating about some weird metaphysics that we have no idea about, we explicitly approximate some process in the real world
You may be beating a dead horse there. Talk of possible worlds doesn’t have to imply realism about possible worlds, just as mathematical anti-realists can talk about numbers without committing to their mind-independent existence.
“In philosophy, possible worlds are usually regarded as real but abstract possibilities (i.e., Platonism),[4] or sometimes as a mere metaphor, abbreviation, or as mathematical devices, or a mere combination of propositions”—WP.
TL;DR: Talk of probabilities and possible worlds doesn’t have to be talk about the territory. But it can be.
In an infinite universe, there are infinitely many copies of you (infinitely many of which are Boltzmann brains)
That might be true if “you” are a snapshot, or observer moment. Long-lasting Boltzmann brains are vanishingly unlikely, OTOH. Time in general is a problem for multiversal theories.
the least complex description of your conscious experience is the description of an external lawful universe and directions for finding the substructure embodying your experience within that substructure.
Why isn’t it solipsism? Why is a large universe plus a long “address” simpler than a small universe plus a short address?
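To spell out the comparison being questioned (a sketch of the UDASSA-style bookkeeping, with $K$ for minimal description length; the notation is illustrative, not from the quoted text): the claim is that the cheapest program outputting your experience factors into laws plus an address,

$$K(\text{experience}) \approx K(U) + \ell(\text{address of you in } U)$$

whereas a solipsist rival encodes the experience directly, at cost $K(\text{experience-only})$. The open question above is why

$$K(U) + \ell(\text{address}) < K(\text{experience-only})$$

should hold when $U$ is large and the address correspondingly long.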
A quantum mechanical state can be described as a linear combination of “classical” configurations
It doesn’t have to be, though.
The fact that we are described by algorithm A rather than B is no more or less mysterious than the fact that the laws of physics are like so instead of some other way.
Then you are not actually deriving the Born rule from UDASSA.
Iraq had and used chemical weapons in the eighties.
https://en.m.wikipedia.org/wiki/Iraqi_chemical_weapons_program