We do have some laws that are explicit about scale, for instance speed limits and blood alcohol levels. However, not everything is easily quantified. Money changing hands can be a proxy for something reaching too large a scale.
Possibly related:
The other day it was raining heavily. I chose to take an umbrella rather than shaking my fist at the sky. Shaking your fist at the sky seems pretty stupid, but people do analogous things all the time.
Complaining to your ingroup about your outgroup isn’t going to change your outgroup. Complaining that you are misunderstood isn’t going to make you understood. Changing the way you communicate might. You are not in control of how people interpret you, but you are in control of what you say.
It might be unfortunate that people have a hair-trigger tendency to interpret others as saying something dastardly, but, like the rain, it is too large and diffuse a phenomenon to actually do anything about.
Thinking in terms of virtue (or blame) and thinking in terms of fixing things are very different. It’s very tempting to sit down with your ingroup and agree with them about the deplorability of the outgroup, who aren’t even party to the conversation...as if that was achieving something. You can tell it is an attractor, because rational people are susceptible to it, too.
That observation runs headlong into the problem, rather than solving it.
Well, we don’t know if they work magically, because we don’t know that they work at all. They are just unavoidable.
It’s not that philosophers weirdly and unreasonably prefer intuition to empirical facts and mathematical/logical reasoning; it is that they have reasoned that they can’t do without it: that (the whole history of) empiricism and maths as foundations themselves rest on no further foundation except their intuitive appeal. That is the essence of the Inconvenient Ineradicability of Intuition. An unfounded foundation is what philosophers mean by “intuition”. Philosophers talk about intuition a lot because that is where arguments and trains of thought ground out...it is a way of cutting to the chase. Most arguers and arguments are able to work out the consequences of basic intuitions correctly, so disagreements are likely to arise from differences in the basic intuitions themselves.
Philosophers therefore appeal to intuitions because they can’t see how to avoid them...whatever a line of thought grounds out in is, definitionally, an intuition. It is not a case of using intuitions when there are better alternatives, epistemologically speaking. And the critics of their use of intuitions tend to be people who haven’t seen the problem of unfounded foundations because they have never thought deeply enough, not people who have solved the problem of finding sub-foundations for your foundational assumptions.
Scientists are typically taught that the basic principles of maths, logic and empiricism are their foundations, and take that uncritically, without digging deeper. Empiricism is presented as a black box that produces the goods...somehow. Their subculture encourages the use of basic principles to move forward, not a turn backwards to critically reflect on the validity of those basic principles. That does not mean the foundational principles are not “there”. Considering the foundational principles of science is a major part of philosophy of science, and philosophy of science is a philosophy-like enterprise, not a science-like enterprise, in the sense that it consists of problems that have been open for a long time and which do not have straightforward empirical solutions.
Does the use of empiricism shortcut the need for intuitions, in the sense of unfounded foundations?
For one thing, epistemology in general needs foundational assumptions as much as anything else. Which is to say that epistemology needs epistemology as much as anything else -- to judge the validity of one system of epistemology, you need another one. There is no way of judging an epistemology starting from zero, from a complete blank. Since epistemology is inescapable, and since every epistemology has its basic assumptions, there are basic assumptions involved in empiricism.
Empiricism specifically has the problem of needing an ontological foundation. Philosophy illustrates this point with sceptical scenarios about how you might be being systematically deceived by an evil genie. Scientific thinkers have closely parallel scenarios in which you cannot be sure whether you are in the Matrix or some other virtual reality. Either way, these hypotheses illustrate the point that empiricists are running on the assumption that if you can see something, it is there.
Many-worlds-flavored QM, on the other hand, is the conjunction of 1 and 2, plus the negation of 5.
Plus 6: There is a preferred basis.
First, it’s important to keep in mind that if MWI is “untestable” relative to non-MWI, then non-MWI is also “untestable” relative to MWI. To use this as an argument against MWI,
I think it’s being used as an argument against beliefs paying rent.
MWI is testable insofar as QM itself is testable.
Since there is more than one interpretation of QM, empirically testing QM does not prove any one interpretation over the others. Whatever extra arguments are used to support a particular interpretation over the others are not going to be, and have not been, empirical.
But, importantly, collapse interpretations generally are empirically distinguishable from non-collapse interpretations.
No, they are not, because of the meaning of the word “interpretation”: interpretations of QM are empirically equivalent by definition. But collapse theories, such as GRW, which make novel predictions, might be.
This is why there’s a lot of emphasis on hard-to-test (“philosophical”) questions in the Sequences, even though people are notorious for getting those wrong more often than scientific questions—because sometimes [..] the answer matters a lot for our decision-making,
Which is one of the ways in which beliefs that don’t pay rent do pay rent.
I am not familiar with Peterson specifically, but I recognise the underpinning in terms of Jung, monomyth theory, and so on.
A state is good when it engages our moral sensibilities.
Individually, or collectively?
We don’t encode locks, but we do encode morality.
Individually or collectively?
Namely, goodness of a state of affairs is something that I can assess myself from outside a simulation of that state. I don’t need to simulate anything else to see it.
The goodness-to-you or the objective goodness?
If you are going to say that morality “is” human value, you are faced with the fact that humans vary in their values...the fact that creates the suspicion of relativism.
This, I suppose, is why some people think that Eliezer’s metaethics is just warmed-over relativism, despite his protestations.
It’s not clearly relativism and it’s not clearly not-relativism. Those of us who are confused by it are confused because we expect a metaethical theory to say something on the subject.
The opposite of Relative is Absolute or Objective. It isn’t Intrinsic. You seem to be talking about something orthogonal to the absolute-relative axis.
No, we’re in a world where tourists generally don’t mind going slowly and enjoying the view. These things would be pretty big on the outside, at least RV size, but they wouldn’t be RVs. They wouldn’t usually have kitchens and their showers would have to be way nicer than typical RV showers.
And they could relocate overnight. That raises the possibility of self-driving sleeper cars for business travellers who need to be somewhere by morning.
That amounts to “I can make my theory work if I keep on adding epicycles”.
I can think of two possibilities:
[1] that morality is based on rational thought as expressed through language
[2] that morality has a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition.
[3] Some mixture. Morality doesn’t have to be one thing, or achieved in one way. In particular, novel technologies and social situations provoke novel moral quandaries that intuition is not well equipped to handle, and where people debate such things, they tend to use a broadly rationalist style, trying to find common principles and noting undesirable consequences.
Seconded.
That assumes he had nothing to learn from college, and that the only function it could have provided was signalling and social credibility.
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I’m confident much can be said, even if I can’t explain the algorithm by which that would work.
It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don’t feel pain?
but I don’t necessarily understand what it would mean for a different kind of mind.
I’ve already told you what it would mean, but you have a self-imposed problem of tying meaning to proof.
Consider a scenario where two people are discussing something of dubious detectability.
Unbeknownst to them, halfway through the conversation a scientist on the other side of the world invents a unicorn detector, tachyon detector, etc.
Is the first half of the conversation meaningless and the second half meaningful? What kind of influence travels from the scientist’s lab?
I am questioning the implicit premise that some kinds of emergent things are “reductively understandable in terms of the parts and their interactions”.
It’s not so much some emergent things, for a uniform definition of “emergent”, as all things that come under a variant definition of “emergent”.
I think humans have a basic problem with getting any grasp at all on the idea of things being made of other things, and therefore you have arguments like those of Parmenides, Zeno, etc., which are basically a mirror of modern arguments about reductionism.
Not really, they are about what we would now call mereology. But as I noted, the two tend to get conflated here.
I would illustrate this with Viliam’s example of the distance between two oranges. I do not see how the oranges explain the fact that they have a distance between them, at all.
Reductionism is about preserving and operating within a physicalist world view, and physicalism is comfortable with spatial relations and causal interactions as basic elements of reality. Careful reductionists say “reducible to its parts, their structure, and their interactions”.
There are positions between those. Medium-strength emergentism would have it that some systems are conscious, that consciousness is not a property of their parts, and that it is not reductively understandable in terms of the parts and their interactions, but that it is by no means inevitable.
Reductionism has its problems too. Many writings on LW confuse the claim that things are understandable in terms of their parts with the claim that they are merely made of parts.
E.g.:
(1) The explanatory power of a model is a function of its ingredients.
(2) Reductionism includes all the ingredients that actually exist in the real world.
(3) Therefore, emergentists must be treating the “emergent properties” as extra ingredients, thereby confusing the “map” with the “territory”.
So Reductionism is defined by EY and others (in effect) as not treating emergent properties as extra ingredients.
I asked you before to propose a meaningless statement of your own.
And what I said before is that a well-formed sentence can robustly be said to be meaningless if it embeds a contradiction, like “colourless green”, or a category error, like “sleeping idea”.
So, what you’re saying is that you don’t know if “ghost unicorns” exist? Why would Occam’s razor not apply here? How would you evaluate the likelihood that they exist?
Very low and finite, rather than infinitesimal or zero.
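A minimal sketch of the reasoning behind that answer, on the usual complexity-prior reading of Occam’s razor (the notation is mine, not anything stated in the thread): a hypothesis h with a finite description of K(h) bits gets a prior of roughly

P(h) \propto 2^{-K(h)}

“Ghost unicorns” can be described in finitely many bits, so K(h) is finite and the prior is a small positive number. Only a hypothesis with no finite description at all would get probability zero, and Bayesian updating on finite evidence can shrink a positive prior but never take it all the way to zero.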
I don’t see how this is helping. You have a chain of reasoning that starts with your not knowing something, how to detect robot pain, and ends with your knowing something: that robots don’t feel pain. I don’t see how that can be valid.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
That is a start, but we can’t gather data from entities that cannot speak, and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
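A minimal sketch, in Python, of what “asking the brains which states are similar” could amount to computationally. The data, the function name, and the choice of classical multidimensional scaling are all illustrative assumptions, not anything proposed in the thread:

    import numpy as np

    # Hypothetical similarity reports: sim[i][j] in [0, 1], where 1 means
    # "state i feels the same as state j". In practice these numbers would
    # come from asking subjects to compare pairs of their own brain states.
    sim = np.array([
        [1.0, 0.9, 0.2, 0.1],
        [0.9, 1.0, 0.3, 0.2],
        [0.2, 0.3, 1.0, 0.8],
        [0.1, 0.2, 0.8, 1.0],
    ])

    def embed_states(sim, dims=2):
        """Classical multidimensional scaling: convert reported similarities
        into coordinates, so states reported as similar land near each other."""
        dist = 1.0 - sim                     # read dissimilarity as distance
        n = dist.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n  # centring matrix
        B = -0.5 * J @ (dist ** 2) @ J       # double-centred Gram matrix
        vals, vecs = np.linalg.eigh(B)       # eigenvalues in ascending order
        top = np.argsort(vals)[::-1][:dims]  # indices of the largest dims
        return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

    print(embed_states(sim))  # states 0 and 1 cluster; states 2 and 3 cluster

Any such construction only sharpens the point above: it requires entities that can produce similarity reports in the first place, and nothing in it says how to carry the resulting map across to a class of entity that cannot.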
They only need to know about robot pain if “robot pain” is a phrase that describes something.
As I have previously pointed out, you cannot assume meaninglessness as a default.
morality, which has many of the same problems as consciousness, and is even less defensible.
Morality or objective morality? They are different.
Actions directly affect the physical world. Morality guides action, so it indirectly affects the physical world.
Saying that some things are right and others wrong is pretty standard round here. I don’t think I’m breaking any rules. And I don’t think you avoid making plonking statements yourself.