# mwengler

Karma: 2,441
• I’d like to join. How do I do that?

• I suppose you might be right for some people. For me, the fact that repeating infinite decimal expansions are rational is deeply, deeply ingrained. Since your post is essentially about how to square your feelings with what turns out to be mathematically true, you have a lot of room for disagreement, as there is no contradiction in different people feeling different ways about the same facts.

For me the most fun thing about 0.9999… is that 1/9 = .11111…, and therefore 9 x 1/9 = 9 x .111111…, and this last expression obviously = .99999…

You should also do a search on “right” in your post and edit it; you use “right” once where you really need “write.” I think it is “right down” instead of “write down,” but I’ll let you do the looking.

• The OP states:

A very good question is “what kinds of objects are these, anyway?” Since we have an infinite decimal they can’t be rational numbers.

This is just wrong. A rational number is a number that can be written as a fraction of two integers. Lots of infinite decimals are rational numbers: 1/3 = .3333333…, 1/9 = .1111111…, 1/7 = .142857142857142857…, etc.
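The claim above can be checked with exact rational arithmetic (a sketch using Python’s standard `fractions` module; the specific fractions are the ones named in the comment):

```python
from fractions import Fraction

# A repeating block of k digits d equals d / (10^k - 1), so every
# repeating decimal is a ratio of two integers, i.e. rational:
# 0.142857142857... = 142857 / 999999 = 1/7
assert Fraction(142857, 999999) == Fraction(1, 7)
assert Fraction(3, 9) == Fraction(1, 3)      # 0.333... = 3/9 = 1/3
assert Fraction(1, 9) * 9 == 1               # so 9 x 0.111... = 0.999... = 1
```

The last assertion is the 0.999… = 1 argument in exact arithmetic: 9 times 1/9 is exactly 1, while 9 times .111… is .999…, so the two decimal expansions name the same rational number.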

• Clearly we can differentiate between different-location-same-time and different-location-different-time. Two things in different-location-same-time are different things. Two things in different-location-different-time may be the same thing or different things, depending on the path through time. Your mathematical style of abstraction in thinking about identity will only be useful at explaining the real world if it is matched to real-world processes and does not ignore important real-world insights.

• if we adopt idea that consciousness could be different without any physical difference between the copies, we adopt the idea of p-zombies and reject physicalism that is modern version of materialism. It almost the same as to say that immaterial soul exist. It is very strong statement.

Not relevant to the problem. If you create a copy of me, the copy is not identical, if for no other reason than it occupies a different location than I do. I agree that if it occupied the same location that I do, atom for atom and quark for quark, that could lead to the concern you express. But copies cannot occupy the same location, and so there is no problem having the copy to the left be one consciousness while the original to the right is a different consciousness.

The strongest claim I might accept would be that both the original and the copy have “valid” claims to be the continuation of the pre-copying single consciousness that was me back then. But no matter how you slice it, killing the original when you make the copy is still destroying a separate consciousness, even if the remaining consciousness thinks it is the only continuation of the pre-copy consciousness.

• 1) I do not understand why our experience of identical twins does not play into most discussions of my copy being “the same person as me.” We know that twins do not share the same consciousness (unless Occam’s razor is wrong and they are all lying). We know from that that if we made a copy without destroying the original, the copy and the original would not share a consciousness. So why isn’t at least the possibility (I would estimate overwhelming likelihood) considered that a copy is a different consciousness than the original, that destroying the original kills one consciousness while making a copy creates a different consciousness, and that these are separate processes?

2) Does philosophy talk somewhere about what I would call “outer” and “inner” worlds? I know I’m conscious because I participate in my inner world. I figure by Occam’s razor that you are conscious, but I don’t have direct experience of your consciousness, because I can only see you in my outer world. We don’t talk anywhere near as much about the “inner” world because we don’t share it with others, while our “outer” experiences are shared, and we have evolved a host of techniques, including language and science, for processing “outer” experiences. But “inner” experiences don’t benefit from language and science because they are, so far, locked away inside us, not social phenomena. I think the idea that our copy is a continuation of our own consciousness is a mistake we can make if we don’t realize there is an “inner” experience quite distinct from our “outer” experiences.

So sure, my copy thinks he is continuously conscious, and therefore may think my consciousness has jumped into him, but that is because to my copy, I am part of his “outer” world. But if a non-destructive copy of me were made, I think it is obvious from what we know about twins that despite my copy’s eloquence at explaining his continuity from me, I, the original, would resist being killed as superfluous. Yes, in everybody else’s outer world, where the consciousness of others is indirectly inferred, they can’t tell that my copy is not a continuation of my consciousness in separate matter. But in my inner world, it seems pretty clear that I (the original) would know.

• My first thoughts reading your post are: 1) You start WAY TOO LATE IN THE GAME. You are essentially talking about altruism as a conscious choice, which means you are well into the higher mammals.

Virtually every sexually reproducing creature devotes resources to reproduction that could have been conserved for individual survival. As you move up in complexity, you have animals feeding their young and performing other services for them. As would be expected with all evolved cooperation, the energy and cost you expend raising your young produce more survivable young, and so are net cost-effective at getting the next generation going, which is pretty much what spreads genes.

How big of a leap is it from a mama bird regurgitating food into her baby’s mouth to you helping your neighbor hunt for wooly mammoth?

If you were the first organism to get the gene to feed your babies, or to do whatever expanded their survivability, then obviously that is how that gene propagates: your babies have the gene.

As you get to the more complex forms of altruism of primates and humans, you also get strong feedback mechanisms against non-cooperators and free-riders. The system may not be perfect, but I think it allows a path from feeding babies or burying eggs in the sand to modern altruism in humans, where no weird “how do we start this” behaviors come up to stop things.

• I may not understand the question’s point, because as I read it the answer is a very obvious “Yes.” We determined Newton’s laws and Maxwell’s equations from observations of our world. So the planets in orbit around the sun, the moon around the earth, and an apple falling to the ground all lead to gravitation. The attraction between wires carrying current in the same direction (magnetic), the functioning of transformers (change in magnetic field produces electric field) and radio and light all fit together to give Maxwell’s equations.

So yes, a world with the same macroscopic physical observations as ours does not violate Newton’s or Maxwell’s laws, because our world with those observations doesn’t violate those laws. If Newton’s or Maxwell’s equations were different, the world you saw would necessarily be different.

What am I missing here?

• That Artificial Intelligence is going to do a lot of the same things that Natural Intelligence does.

• Taboo “faith”, what do you mean specifically by that term?

Good idea. I mean that EVERYBODY, rationalist atheist and Christian alike, starts with an axiom or assumption.

In the case of rationalist atheists (or at least some such as myself), the axioms started with are things like: 1) truth is inferred with semi-quantifiable confidence from evidence supporting hypotheses, 2) explanations like “god did it” or “alpha did it” or “a benevolent force of the universe did it” are disallowed. I think some people are willing to go circular, allow the axioms to remain implicit, and then “prove” them along the way: I see no evidence for a conscious personality with supernatural powers. But I do claim that is circular: you can’t prove anything without knowing how you prove things, and so you can’t prove how you prove things by applying how you prove things without being circular.

So for me, I support my rationalist atheist point of view by appealing to the great success it has in advancing engineering and science. By pointing to the richness of the connections to data, the “obvious” consistency of geology with a 4 billion year old earth, the “obvious” consistency of evolution from common ancestors of similar structures across species right down to the ADP-ATP cycle and DNA.

But a theist is doing the same thing. They START with the assumption that there is a powerful conscious being running both the physical and the human worlds. They marvel at the brilliance of the design of life to support their claim even though it can’t prove their axioms. They marvel at the richness of the human moral and emotional world as more support for the richness and beauty of conscious and good creation.

Logically, there is no logic without assumptions. Deduction needs something to deduce from. I like Occam’s razor and naturalism because my long exposure to them leaves me feeling very satisfied with their ability to describe many things I think are important. Other people like theism because their long exposure to it leaves them feeling very satisfied with its ability to describe and even prescribe the things they think are important.

I am not aware of a definitive way to challenge axioms, and I don’t think there is one at the level I think of it.

• This comment is in reply to some ideas in the comments below.

In my opinion, my rationality is as faith-based as is a religious person’s religious belief.

Among my highest values is “being right,” in the sense of being able to instrumentally affect or predict the world. I want to be able to communicate across long distances, to turn combustible fuel into safe transportation, to correctly predict what an interstellar probe will find, and to be able to build an interstellar probe that will work. Looking at the world, I see much more success in endeavors like these from science and rationality than from religiosity or appeals to god. And so I adopt rationality as it supports my values.

I also want to raise healthy, happy, “good” children. I am pretty sure I could “help” my one child who dabbles in alcohol, drugs, and petty theft by going to church with him. I’ve known many people who are effective at doing things I see as good because, it seems, of their religious beliefs and participation in churches and religious communities. I liked being a Lutheran for a few years. One night I told our pastor that I just didn’t believe in god. He told me he thought half the church had that happening. Even so, I couldn’t stay engaged.

I feel the loss of religious faith as a sorrow, or a pain, or a burr under my saddle, or something. But I can’t justify it; more importantly, I can only pretend to believe. Actual belief does not seem to me to be a real option anymore.

And it turns out I have enough “faith” in scientific rationalism that I won’t even pretend I believe in god. I choose to believe that staying consistent with rational principles will pay off more for me and those I care about than will falling back on the more accessible morality of religious faith. It is a leap of faith, especially in light of “rationalists win.” If my son were to become a heroin addict and devote his life to petty theft, jail, and shooting up, AND I could have prevented that by bringing him to church, I will have paid a price for my faith, as much as any Christian martyr who was harmed, or whose family was harmed, because he did not deny his Christian belief.

People who think their rationality does not come from a faith they possess remind me of religious people who think their belief in god is just right, that it does not come from a faith that they possess or have chosen.

• I’m not sure which is correct. I’m not that familiar with the nuts and bolts of utilitarianism.

As with so many things, if there is more than one way to interpret something, there is generally not much to be gained by choosing the interpretation that produces an error when another interpretation makes sense. Clearly, if a new charity sets up shop that takes twice the cost to provide the same benefit, and people switch donations from the cheaper charity to the more expensive one, the utility produced has decreased compared to the counterfactual where the new, more expensive charity was not set up.

So whatever terminology you prefer, 1) opportunity cost is a real thing and arguably is the only good way to compare money to food quantitatively, and 2) whatever the terminology, the point of the original article is a decrease in utility from adding a charity, which is a sensible idea and well within the bounds of reasonable interpretation of the title under question.

• I think that if a charity had negative utility, that would imply that burning a sum of money would be preferable to donating that money to that charity.

If there are two charities, one which feeds a homeless population for \$3/day and a second which feeds the same population the same food for \$6/day, AND people tend to give some amount of money to one charity or the other, but not both, then it seems pretty reasonable to describe the utility of the more expensive charity as negative. It is not that it would be better to burn my contribution, but rather that I am getting \$3 worth of good from a \$6 donation. But out-and-out burning money being superior to donating it is not the only way to interpret negative utility.

If you have \$6 to give towards feeding the homeless, it would be better to burn \$2 and donate \$4 to the cheaper provider than to give the entire \$6 to the more expensive charity. But only in the same sense that it would be better to burn \$3000 and buy a particular car for \$10,000 than to burn no money and buy that exact same car for \$14,000. Wherever there are better and worse deals, burning less than the full savings can be worked in as part of a superior choice. This does not have anything to do with whether these are charities or for-profit businesses.
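The arithmetic in the paragraph above can be sketched as a toy calculation (the dollar figures come from the comment; the variable names are mine):

```python
# Cost to feed one person for a day at each charity (from the comment).
cheap_cost = 3.0       # dollars per person-day
expensive_cost = 6.0   # dollars per person-day

budget = 6.0  # dollars available to give

# Person-days of food produced by each choice.
all_to_expensive = budget / expensive_cost        # 1.0 person-day
burn_2_rest_to_cheap = (budget - 2.0) / cheap_cost  # ~1.33 person-days
all_to_cheap = budget / cheap_cost                # 2.0 person-days

# Burning $2 and donating the remainder to the cheaper charity still
# beats giving everything to the expensive one -- the sense in which
# the expensive charity's utility can be called "negative."
assert all_to_cheap > burn_2_rest_to_cheap > all_to_expensive
```

The ordering of the three options is the whole point: the expensive charity loses even to a strategy that literally destroys part of the budget.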

• ‘no pig’ > ‘happy pig + surprise axe’ > ‘sad pig + surprise axe’

Would this also mean

‘no pig’ > ‘happy pig + surprise predator’ > ‘sad pig + surprise predator’

I don’t think nature is generally any better than (some kinds of) farming for prey animals. Should vegans be working to lower the birth rates of wild animals?

Or for that matter, does it also mean ‘no human’ > ‘happy human + eventual death’ > ‘sad human + eventual death’?

Even in nature, all life is alive, and then it dies, almost always in a way it would not choose or enjoy. Does life just suck? Are we bad actors for having children?

• Most vegetarians would think that activities that normally make animals suffer are bad in themselves.

Presumably the moral win in reducing or eliminating the suffering of farmed meat would have more to do with non-vegetarians than vegetarians. But really, is the point here to do something better than is already done, or is it to impress vegetarians?

• Would it be ethical to grow meat in a vat without a brain associated with it? Personally, I think pretty clearly yes.

So breeding suffering out of animals would seem to be between growing meat in a vat and what we have now. So it would seem to be a step in the right direction.

We, and animals, almost certainly have suffering because it had survival value for us and animals in the environment in which we evolved. Being farmed for meat is not that environment. I don’t think removing suffering from our farmed animals has a downside. Of course, removing it from wild animals would probably not be a good thing, but it would probably correct itself relatively quickly through the failure of non-suffering animals to survive.

• Never heard of Circling until your post. Looked it up; initially found nothing going on in San Diego (California, US). I wonder if it is more of a European thing?

If you know how I can find something local to San Diego CA US, please let me know.

• I do think rationality is a niche. I had a conversation with a not-particularly-bright administrative assistant at work where she expressed the teachings of Jehovah’s Witnesses as straightforward truth. She talked some about the chaos of her life (drugs, depression) before joining them. As I expressed the abstract case for, essentially, being careful about what one believes, it seemed clear enough to me that she had little or nothing to gain by being “right” (or rather adopting my opinion, which is more likely to be true in a Bayesian sense), and she seemed fairly clearly to have something to lose. I, on the other hand, have a philosopho-physicist’s values and also value finding regular (non-theological) truths by carefully rejecting my biases, so I was making a choice that (probably) makes sense for me.

When my 14 year old daughter (now 16 and doing much better) was “experimenting” with alcohol, marijuana, and shop-lifting, I had a “come to Jesus” talk with my religious cousin. She told me that I knew right from wrong and that I was doing my daughter no favors by teaching her skepticism above morality. I decided she was essentially correct, and that some of my own “skepticism” was actually self-serving, letting me off the hook for some stealing I had done from employers starting when I was about 15.

I view rationality as a thing we can do with our neocortex. But clearly we have a functional emotional brain that “knows” there are monsters or tigers when we are afraid of the dark and “knows” that girls we are attracted to are also attracted to us. I continue to question whether I am doing myself or my children any real favors by being as devoted to this particular feature of my neocortex as I am.

• Does “value the welfare of others” necessarily mean “consciously value the welfare of others”? Is it wrong to say “I know how to interpret human sounds into language and meaning” just because I can do it? Or do I have to demonstrate I know how because I can deconstruct the process to the point that I can write an algorithm (or computer code) to do it?

The idea that we cannot value the welfare of computers seems ludicrously naive and misinterpretative. If I can value the welfare of a stranger, then clearly the thing for which I value welfare is not defined too tightly. If a computer (running the right program) displays some of the features that signal to me that a human is something I should value, why couldn’t I value the computer? We watch animated shows and value and have empathy for all sorts of animated entities. In all sorts of stories we have empathy for robots or other mechanical things. The idea that we cannot value the welfare of a computer flies in the face of the evidence that we can empathize with all sorts of non-human things, fictional and real. In real life, we value and have human-like empathy for animals, fishes, and even plants in many cases.

I think the interpretations or assumptions behind this paper are bad ones. Certainly, they are not brought out explicitly and argued for.

• Yes, there is class of investment strategies which go by the name of “liquidity constrained”. If there is a small… market inefficiency out of which you can extract, say, \$100,000/​year but no more, none of the big investment firms would bother—it’s not worth their time. But for an individual it often is.

Can you please say more about these and how to find them?