You can be more certain about what actions are likely to save a life than about what actions are likely to save many lives.
NancyLebovitz
There’s a lovely bit in Egan’s Diaspora showing the viewpoint character understanding a concept from physics by applying it in various contexts.
More generally, I don’t know if much is known about how people get from input to understanding.
Possibly of interest: Mathsemantics, which grew out of a project to find employees who understood what numbers mean. The book (about a questionnaire for the purpose) is very interesting, the articles listed mostly look minor except for the one about grokduelling (you win if you understand the other side better), and they’re looking for research ideas.
Part of why the future looks absurd is that people want novelty—not absolute novelty and not all the time, but a lot of smart and weird people are working on making changes, some of which will catch on. A futurist isn’t going to be smart and weird enough to predict all the possible changes being offered or which ones will have a long term effect.
It’s not just that technological change builds on itself, so does social change. I don’t think it was completely obvious that the civil rights movement would contribute to gay marriage becoming a serious political issue.
No matter how hard you try, you are of your time. You can expand the range of your imagination, but the future outnumbers you.
I’m still working on the question of why the future isn’t just unpredictable, it’s absurd. Maybe there’s something about human cultures which requires limiting both what people do and what people can imagine anyone doing to a small part of the range of possibilities.
It’s my impression that men and women are permitted somewhat different sets of emotions—men are freer to show anger, women are freer to show sadness. And that showing emotion is more permitted now than it was a few decades ago.
As far as I can tell, it’s possible to be emotional (or at least fairly emotional) and logical at the same time, so long as the emotion isn’t territorial attachment to an idea.
If people are that much more trusting when they’re distracted, then it’s important not to multi-task if you need to evaluate what you’re looking at. Maybe it’s just important to not multi-task.
“Mathematics is beautiful” + “Reality is not like mathematics” doesn’t add up to “Reality is ugly”.
“That which can be destroyed by the truth should be”. I’ve seen this attributed to P.C. Hodgell, but without enough detail to check on it.
Is there a well-defined difference between the shape of one’s mental machinery and its limited computing power?
Vision may be the strongest tool for getting past abstraction for most people, but I recommend putting the other parts of sensory experience on the list, too.
Are there people who say they’re depressed because life is meaningless? I’m not an expert on the subject, but I’ve never heard of any.
There’ve been several mentions of obesity as a primary cause of depression. I haven’t heard of fat people tending to be more depressed than non-fat, but maybe I’ve missed something. Do you mean obesity in the medical sense? That’s actually just fair-to-middling fat. (See The BMI Project for what those numbers mean.) Or do you mean being incapacitated by one’s weight?
Good Mood by Julian Simon might be of interest. He beat back quite a serious depression when he realized that it had roots in the way he thought.
My impression is that most depression carries thoughts of something being wrong with oneself and/or the universe and/or one’s environment, but it’s generally not as philosophical as a belief that the universe is meaningless.
Mysteries are an extremely popular pro-truth art form, but they have the limits of being fiction, and are generally not about finding out anything really surprising or painful. Offhand, I can’t think of any mysteries with much about the social or psychological consequences of finding out that someone you knew and liked was a murderer.
There’s been increasing social pressure to tell the truth about at least some aspects of sex. The subject used to be a lot more blanked out in the public sphere.
There’s a lot more truth floating around about war than there used to be, and generally (at least on the left) a lot of respect for investigative journalism. (That one may be biased in favor of some outcomes—I’m not sure.)
I agree that there isn’t a general pressure towards truth-telling. Just getting somewhat more truth in some limited but fraught areas has been remarkably difficult.
That’s one sneaky parable—seems to point in a number of interesting directions and has enough emotional hooks (like feeling superior to the Pebble Sorters) to be distracting.
I’m taking it to mean that people can spend a lot of effort on approximating strongly felt patterns before those patterns are abstracted enough to be understood.
What would happen if a Pebble Sorter came to understand primes? I’m guessing that a lot of them would feel as though the bottom was falling out of their civilization and there was no point to life.
And yes, if you try to limit a mind that’s more intelligent than your own, you aren’t going to get good results. For that matter, your mind is probably more intelligent than your abstraction of your mind.[1]
It sounds as though an FAI needs some way to engage with the universe which isn’t completely mediated by humans.
We can hope we’re smarter than the Pebble Sorters, but if we’ve got blind spots of comparable magnitude, we are by definition not seeing them.
[1]On the other hand, if you have problems with depression, there are trains of thought which are better not to follow.
It seems to me that an FAI would still be in an evolutionary situation. It’s at least going to need a goal of self-preservation [1] and it might well have a goal of increasing its abilities in order to be more effectively Friendly.
This implies it will have to somehow deal with the possibility that it might overestimate its own value compared to the humans it’s trying to help.
[1] What constitutes the self for an AI is left as a problem for the student.
Richard, I’m looking at the margins. The FAI is convinced that it’s humanity’s only protection against UFAIs. If UFAIs can wipe out humanity, surely the FAI is justified in killing a million or so people to protect itself, or perhaps even to make sure it’s capable of defeating UFAIs which have not yet been invented and whose abilities can only be estimated.
If you once tell a lie, the truth is ever after your enemy.
That isn’t true.
I told lies when I was a kid. If I got caught, I gave up rather than mounting an epistemological attack.
Richard Kennaway: “I feel that X.” Every sentence of this form is false, because X is an assertion about the world, not a feeling. Someone saying “I feel that X” in fact believes X, but calling it a feeling instead of a belief protects it from refutation. Try replying “No you don’t”, and watch the explosion. “How dare you try to tell me what I’m feeling!”
If I say I feel something, I’m talking about an emotion. I don’t intend it to be an objective statement about the world, and I’m not offended if someone says it doesn’t apply to everyone else.
To Richard Kennaway:
Your original point, which I didn’t read carefully enough:
“I feel that X.” Every sentence of this form is false, because X is an assertion about the world, not a feeling. Someone saying “I feel that X” in fact believes X, but calling it a feeling instead of a belief protects it from refutation. Try replying “No you don’t”, and watch the explosion. “How dare you try to tell me what I’m feeling!”
“No, you don’t” sounds like a chancy move under the circumstances. Have you tried “How sure are you about X?” and if so, what happens?
More generally, statements usually imply more than one claim. If you negate a whole statement, you may think that which underlying claim you’re disagreeing with is obvious, but if the person you’re talking to thinks you’re negating a different claim, it’s very easy to end up talking past each other and probably getting angry at each other’s obtuseness.
My reply: If I say I feel something, I’m talking about an emotion.
You again: That prohibits you from saying “I feel that X”. No emotion is spoken of in saying “I feel that the Riemann hypothesis is true”, or “I feel that a sequel to The Hobbit should never be made”, or “I feel that there is no God but Jaynes and Eliezer (may he live forever) is His prophet”, or in any other sentence of that form. “I feel” and “that X” cannot be put together and make a sensible sentence.
If someone finds themselves about to say “I feel that X”, they should try saying “I believe that X” instead, and notice how it feels to say that. It will feel different. The difference is fear.
It sounds to me as though you’ve run into a community (perhaps representative of the majority of English speakers) with bad habits. I, and the people I prefer to hang out with, would be able to split “I feel that x” into a statement about emotions or intuitions and a statement about the perceived facts which give rise to the emotions or intuitions.
I believe that “I believe that a sequel to The Hobbit should never be made” is emotionally based. Why would someone say such a thing unless they believed that the sequel would be so bad that they’d hate it?
Here’s something I wrote recently about the clash between trying to express the feeling that strong emotions indicate the truth and universality of their premises and the fact that the real world is more complicated.
The universe isn’t set up to reward virtue.
I believe that ethics are an effort to improve the odds of good outcomes. So it’s not that the universe is set up to reward ethics, it’s that ethics are set up to follow the universe.
The challenge is that what we’re taught is good is a mixture of generally useful rules, rules which are more useful to the people in charge than to the people who aren’t, and mere mistakes.
Here’s another Noble Lie: protectionism—that there’s somehow a morally and practically important difference between trading inside your borders and trading outside them. It may not be quite as good as Santa Claus, though.
The idea that torture is efficacious for getting accurate information might be a Noble Lie (if you accept that causing pain to someone helpless is a benefit, thus making torture a self-seeking behavior), but that one might be too contentious for most discussions.
I suspect that the hook for adults in the Santa Claus story is a “benefit” of that kind—lying to someone who doesn’t have the capacity to check on what you’re saying.
That five minutes of brainstorming is an interesting idea. Would another five minutes spent on looking at your preferred alternative from the points of view of all the interested parties also be a good investment?