There’s not enough written about the basic problem with democracy and libertarianism: a large number of humans are incompetent at understanding the world well enough to make sane plans for any reasonable goals, and are unable to follow such plans even when they do make them.
“Raising the sanity waterline” is a topic that comes up occasionally, but it’s overwhelming and depressing, and I know of no good ways to accelerate it.
By the way, it makes me angry that we talk about median and quintile incomes and compare them to average debt. Comparing a stock (debt) to a flow (income) is bad practice, and using a mean for a heavily skewed, non-normal distribution is a crime. We should compare income to median/quintile finance charges (in addition to total debt).
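To illustrate (a minimal sketch with invented lognormal numbers, not real household data): in a right-skewed distribution the mean sits far above the median, so “average debt” describes almost no actual household.

```python
import random

random.seed(0)

# Invented, right-skewed "debt" figures -- lognormal, the rough shape that
# real debt and income distributions take. Not real data.
debts = [random.lognormvariate(10, 1.5) for _ in range(100_000)]

mean_debt = sum(debts) / len(debts)
median_debt = sorted(debts)[len(debts) // 2]

# A handful of enormous debtors drag the mean far above the median.
print(f"mean debt:   {mean_debt:,.0f}")
print(f"median debt: {median_debt:,.0f}")
```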
I considered prediction markets, but expect them to have approximately zero impact on the outcomes of projects.
On this, I disagree. A working prediction market makes it much harder to lie to the backers (and harder for proponents to lie to themselves) about probability of success and magnitude of impact.
This is because almost all megaprojects are bad, everyone knows almost all of them are bad, and few people involved with them are trying to behave differently.
On this, I somewhat agree. Which then raises the question: why would anyone buy shares in them?
It’s not obvious what you’d actually be buying in a “share” of a non-profit-motivated project. What’s your reward for being right (that the project is undervalued and you can buy a share for less than it’s worth) or punishment for being wrong (that the project is overvalued and you should have bought a share in something else)?
You may simply be talking about prediction markets on the outcome of megaprojects. I’d support that, but you should be clear that it’s not a share in the project, it’s a wager on a predicted conditional outcome.
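To make the distinction concrete, here’s a minimal sketch of the payoff on such a wager (all prices and probabilities invented; the contract design is one common way conditional markets are structured, not a claim about any particular platform):

```python
def conditional_contract_ev(price, p_condition, p_success_given_condition):
    """Expected profit on a contract that pays $1 if the megaproject goes
    ahead AND hits its stated outcome, is voided (price refunded) if the
    project never goes ahead, and pays $0 otherwise.

    Note what's absent: any claim on the project's assets or surplus. The
    reward is purely for estimating probability better than the market.
    """
    profit_if_condition_holds = p_success_given_condition * 1.0 - price
    return p_condition * profit_if_condition_holds  # voided trades net $0

# Invented numbers: the contract trades at $0.30, and you think the project
# is 80% likely to proceed and 50% likely to succeed if it does.
print(conditional_contract_ev(0.30, 0.8, 0.5))  # 0.16 > 0: worth buying
```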
I think that most of what people call “intellectual honesty” would be more accurately called “epistemic humility”. It’s not just about trying to minimize deception and bias; it’s about recognizing that this is an impossible task, and according others’ possibly-wrong beliefs the same standing as your own.
Which isn’t to say that all wrong beliefs are equally acceptable, just that there’s a wider range of reasonable beliefs than you probably realize when you’re discussing/debating.
“one-in-a-million event occurring 8 times a month”
Off by a factor of a thousand. A one-in-a-million event happens to roughly 8 people in New York City and well over a thousand in China. One-in-a-BILLION happens roughly 8 times worldwide.
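The arithmetic is just population times per-person probability (round population figures, so treat the counts as order-of-magnitude):

```python
# Expected count of a rare event is population * per-person probability.
populations = {
    "New York City": 8_000_000,
    "China": 1_400_000_000,
    "world": 8_000_000_000,
}

for place, pop in populations.items():
    print(f"one-in-a-million in {place}: ~{pop * 1e-6:,.0f} people")

print(f"one-in-a-BILLION worldwide: ~{populations['world'] * 1e-9:,.0f} people")
```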
Leave legality out of it—laws and enforcement are about really generic social behaviors, and are always going to encode a different set of expectations than a nuanced morality. I assert that it’s perfectly moral (noble, even) to be legally punished for making a correct moral choice.
Also, separate “correct action in the face of high uncertainty” from “correct action if you can read the source code / detect and measure the experiences”. I bias strongly against killing when there’s significant uncertainty about current or future preferences/experiences. I think it’s probably right to kill if you can somehow know that the remainder of their life is negative value to them.
In fact, I’m not sure that any human in constant (or even frequent) deep pain can be considered compos mentis on this topic. By the time the pain is known to be constant, the reaction to and anticipation of the pain have altered the person’s cognitive approach.
That said, I try to remain humble in my demands of others. I won’t kill a sentient being for pure altruism, and will in fact put barriers in place to suicide, so that someone needs to maintain the desire and expend thought (and endure further pain) to achieve it. I don’t actually judge suicide as wrong, or even as a mistake, but I don’t understand the universe or others’ experiences well enough to want to make it easy.
Really, human experience is so short already (a century at most, less for most of us), and it’s going to end regardless of my or the sufferer’s intent. Exactly when it ends is far less important than what I can do to make the remaining time slightly less unpleasant.
…meaning I don’t think I’ll ever have sufficient evidence that killing them would benefit them more than other actions I can take. There are other utilitarian reasons I might be willing to kill, such as preventing 3^^3 dust specks. That’s not what this post is about, though—it’s altruistic, but not toward the killed victim.
Preference utilitarianism really falls apart when you can’t trust that expressed preferences are true and durable.
And I’d like a little more definition of “autonomy” as a value—how do you operationally detect whether you’re infringing on someone’s autonomy? Is it just the right to make bad decisions (those which contradict stated goals and beliefs)? Is it related to having a non-public (or non-consistent) utility function?
Is there any single-player sandbox, testing mode, or story available?
Ehn, this is very hard to predict, because it’ll depend a whole lot on whether AIs are evolved, created, or copied from humans (or rather, what mix of those mechanisms is used to bootstrap and improve the AIs). To the extent that AIs learn from or are coerced to human “values”, they’ll likely encode all human emotions. To the extent that they’re evolved separately, they’ll likely have different, possibly-overlapping sets.
As you hint at, human emotions are evolved strategies for decision-making. Since an AI is likely to be complex enough that it can’t perfectly introspect itself, and it’s likely that at least many of the same drives for expansion, cooperation, and survival in the face of competitive and cooperative organisms/agents will exist, it seems reasonable to assume the same emotions will emerge.
It’s all stories. There probably _is_ an underlying physical reality, but no humans experience it directly enough to have goals about it. I don’t think your dichotomy is about reality vs stories. From your examples and descriptions, it seems to be about long-term vs short-term stories, or perhaps deep vs shallow stories.
The manager/CEO/board/investor acceptance of stories lasts only a few years. Eventually customers won’t agree, and it collapses anyway. Conversely, there are plenty of examples of objectively worse products that did better in the marketplace, because the story is the only thing that matters.
Paper currency is a good example of a story with the weight of reality for a good chunk of humanity.
I disagree with 2. I know that at least one human has qualia (or at least that the universe has at least one qualia-experiencer which seems localized in one human), but I have no operational definition or test which would allow me to share that knowledge OR to detect it outside myself.
I don’t think I’ve seen people mention whether change over time is a factor in thinking/qualia/suffering. If it is, then even elementary particles change in the fields they experience and react to.
You’re correct—I was responding too simply to the intro, which IMO exemplifies the confusion inherent in the content of the article. There’s some good stuff there about game design, about the subjective experience of value persistence and how it varies with price acquisition mechanism. But it’s tied in with unstated (and IMO incorrect) ideas about what ownership is.
“learned something along the way” is the wrong level. Specify what you learned and make a conscious evaluation of whether that knowledge has value in future production. Search/exploit is fractal and recursive: you’re searching for search strategies while executing such strategies to search for production knowledge. Turtles all the way down.
Is it this: https://en.wikipedia.org/wiki/Travelling_Salesman_(2012_film) ?
I kind of understand the downvotes (it’s not particularly educational, and only incidental to common LW interests), but I don’t agree. I’m happy to see even tangentially-related things that make rationalists smile.
Wait. I’m advocating _NOT_ experimenting very much with your first bridge—build a fairly standard one first. You will learn a lot in doing so, and end up with a bridge that works. My point is that you have to both produce and learn at the same time, not do one then the other.
I bookmark /daily and pretty rarely look at home, almost never any other view. I don’t much care about events, but I also don’t care about many topics that get posts, so it doesn’t bother me much if I need to ignore a few more entries.
I like the identification of the different things you get from a bridge (learning and transport), but I believe success for a startup (or for any endeavor) comes from _NOT_ separating the goals. You must find a way to excel at both stated goals, and at the goals you haven’t stated, like showing potential partners and customers that you’re worth investing in (a goal that probably overrides all others during some phases of your life/project).
Building each bridge must be done in a way that lets you learn details along the way and modify your plan accordingly. Then apply that learning to the next bridge, where you can make bigger changes. Outrageous ideas (high-risk, high-reward) can be simulated or tried in small/unimportant ways, generally funded by the success of prior projects. Which means you have to have successful projects before you can take risks.
This recursive strategy (do some small/safe good, use the rewards to fund bigger/harder/riskier good, use those rewards for still bigger things, etc.) applies at almost every level, from individual to company to industry to civilization.
Amusing, but kind of meaningless—there is no territory behind that map. Replace “particle” with “collection of particles” and you get roughly the same argument. Replace with “large collection of particles” and it starts to fall apart. Replace with “person” and you get a very different conclusion, with no real change in the fundamentals.
Particles have behaviors; it’s just that they’re simple enough to model really well. As collections get bigger, models get more statistical and error-prone. Where is the line between “unmodeled behavior” and “choice”?
Until you give me an operational test for thinking or suffering, I can’t answer what things are capable of it.
Some options for addressing this:
1) Be more specific in your probabilities. What experiences are included or excluded from these predictions? Often, this exercise will show that one of your estimates is unreasonable, and fixing it may or may not bring your beliefs into consistency.
2) Recognize that these probability estimates are pretty wild guesses, and accept that they’re probably wrong. Inconsistent beliefs necessarily include falsehoods, but that doesn’t mean you have enough information to improve them.
3) See if you can gather evidence for some of the intermediate probabilities you’re working with; that evidence may hint at which of them to adjust (see the sketch below).
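As a minimal sketch of the consistency check that (3) makes possible (all numbers invented): the law of total probability lets you compare a directly-estimated probability against what your intermediate estimates imply.

```python
def implied_probability(p_b, p_a_given_b, p_a_given_not_b):
    """P(A) implied by intermediate estimates, via the law of total
    probability: P(A) = P(B)P(A|B) + P(~B)P(A|~B)."""
    return p_b * p_a_given_b + (1 - p_b) * p_a_given_not_b

# Invented example: your direct gut estimate of P(A) is 0.10, but your own
# intermediate estimates imply 0.23. At least one of the numbers must move.
direct = 0.10
implied = implied_probability(p_b=0.4, p_a_given_b=0.5, p_a_given_not_b=0.05)
print(f"direct: {direct:.2f}  implied: {implied:.2f}")
```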
That’s a lot of writing to say “ownership is the wrong word to use for many rights and abilities”. If you say “currently allowed to use in this way”, most of the confusion goes away.