It’s a noticed gap in your knowledge.
Link doesn’t seem to work.
My best guess: There’s a difference between reviewing ideas and exploring them. Reviewing ideas allows you to understand concepts, think about them, and talk about them, but you’re looking at material you already have. Consider someone preparing a lecture well: they’ll make sure that they have no confusion about what they’re covering, and write eloquently on the topic at hand.
On the other hand, this is thinking along pre-set pathways. It can be very useful for both learning and teaching, but you aren’t likely to discover something new. Exploring ideas, by contrast, is looking at a part of idea space and then seeing what you can find. It’s thinking about the implications of things you know, and looking to see if an unexpected result shows up, or simply considering a topic and hoping that something new on the subject occurs to you.
“The more liberal policies you pass, the more likely it is any future policy will be fascist.”
Sadly this one is likely true irl. When you have a government that passes more and more laws, and does not repeal old laws, then the degree of restriction of people’s lives increases monotonically. This creates a precedent for ever more control, until the end is either a backlash or tyranny.
Not Kaj, but shame and self-concept (damaging or otherwise) are thoughts (or self-concept is a thought and shame is an emotion produced by certain thoughts). It seems obvious that people with a greater tendency to think will be at greater risk of harmful thoughts. Of course, they’ll also have a better chance of coming up with something beneficial as well, but that doesn’t strike me as likely to cancel out the harm. Humans are fairly well adapted for our intellectual and social niche; there are a lot more ways for introspection to break things than to improve them.
Happy Petrov Day!
...? “Winning” isn’t just an abstraction, actually winning means getting something you value. Now, maybe many rationalists are in fact winning, but if so, there are specific values we’re attaining. It shouldn’t be hard to delineate them.
It should look like, “This person got a new job that makes them much happier, that person lost weight on an evidence-based diet after failing to do so on a string of other diets, this other person found a significant other once they started practicing Alicorn’s self-awareness techniques and learned to accept their nervousness on a first date...” It might even look like, “This person developed a new technology and is currently working on a startup to build more prototypes.”
In none of these cases should it be hard to explain how we’re winning, nor should Tim’s “not looking carefully enough” be an issue. Even if the wins are limited to subjective well-being, you should at least be able to explain that! Do you believe that we’re winning, or do you merely believe you believe it?
This is simultaneously horrifying and incredibly comforting. One would hope that people would be orders of magnitude better than this. But it also bodes very well for the future prospects of anyone remotely competent (unless your boss is like this...)
“True. Equalizing the influence of all parties (over the long term at least) doesn’t just risk giving such people power; it outright does give them power. At the time of the design, I justified it on the grounds that (1) it forces either compromise or power-sharing, (2) I haven’t found a good way to technocratically distinguish humane-but-dumb voters from inhumane-but-smart ones, or rightly-reviled inhumane minorities from wrongly-reviled humane minorities, and (3) the worry that if a group’s interests are excluded, then they have no stake in the system, and so they have reason to fight against the system in a costly way. Do any alternatives come to your mind?”
1. True, but is the compromise beneficial? Normally one wants to compromise either to gain useful input from good decision makers, or else to avoid conflict. The people one would be compromising with here would (assuming wisdom of crowds) be poor decision makers, and conventional democracy seems quite peaceful.
2. Why are you interested in distinguishing humane-but-dumb voters from inhumane-but-smart ones? Neither is likely to give you good policy. Wrongly-reviled humane minorities deserve power, certainly, but rebalancing votes to give it to them (when you can’t reliably distinguish them) is injecting noise into the system and hoping it helps.
3. True, but this has always been a trade-off in governance: how much do you compromise with someone to keep the peace vs. promote your own values at the risk of conflict? Again, conventional democracy seems quite good at maintaining peace; while one might propose a system that seeks to produce better policy, it seems odd to propose a system that offers worse policy in exchange for averting conflict when we don’t have much conflict.
“I may have been unduly influenced by my anarchist youth: I’m more worried about the negative effects of concentrating power than about the negative effects of distributing it. Is there any objective way to compare those effects, however, that isn’t quite similar to how Ophelimo tries to maximize public satisfaction with their own goals?”
Asking the public how satisfied they are is hopefully a fairly effective way of measuring policy success. Perhaps not in situations where much of the public has irrational values (what would Christian fundamentalists report about gay marriage?), but asking people how happy they are about their own lives should work as well as anything we can do. This strikes me as one of the strongest points of Ophelimo, but it’s worth noting that satisfaction surveys are compatible with any form of government, not just this proposal.
Hopefully this doesn’t come across as too negative; it’s a fascinating idea!
Enye-word’s comment is witty, certainly, but “this is going to take a while to explain” and “systematically underestimated inferential distances” aren’t the same thing. Similar, yes, but there’s a difference between an explanation that takes a while because you must address X in order to explain Y, which is a prerequisite for talking about Z, while your interlocutor doesn’t understand why you aren’t just talking about Z, and an explanation that simply takes a while!
For example, if someone asked me about transhumanism, I might have to explain why immortality looks biologically possible, and how reversal tests work so we’re not just stuck with the “death gives meaning to life” intuition, and the possibility of mind uploading to avoid a Malthusian catastrophe, and the evidence for minds being a function of information such that uploading looks even remotely plausible… Misunderstandings are all but guaranteed. But if someone asked me about the plot of Game of Thrones in detail, there would be far less chance of misunderstanding, even if it took longer to explain.
Also, motivation and “tactile ambition” aren’t the same thing either. Tactile ambition sounds like ambition to do a specific thing, rather than to just “do well” in an ill-defined way. Someone might be very motivated to save money, for instance, and spend a lot of time and energy looking for ways to do so, yet not hit on a specific strategy and thus never develop a related tactile ambition. Or someone might have a specific ambition to save money by eating cheaply, as in the Mr. Money Mustache example, yet find themselves unmotivated and constantly ordering (relatively expensive) pizza.
That said, why “tactile ambition” rather than something like “specific ambition”?
Very interesting idea! The first critique that comes to mind is that the increased voting power given to those whose bills are not passed risks giving undue power to stupid or inhumane voters. Normally, if someone has a bad idea, hopefully it will not pass, and that is that. Under Ophelimo, however, adherents of bad ideas would gather more and more votes to spend over time, until their folly was made law, at least for a time. It’s also morally questionable: deweighting someone’s judgments because they have been voting for and receiving (hopefully) good things may satisfy certain conceptions of fairness (they’ve gotten their way; now it’s someone else’s turn), but it makes less sense in governance, where the goal should be to produce beneficial policies, rather than to be “fair” if fairness yields harmful decisions.
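The accumulation dynamic can be made concrete with a toy model. This is my own illustrative reading of the rebalancing rule, with made-up numbers, not anything from the Ophelimo proposal itself:

```python
def rounds_until_minority_wins(minority, majority, gain_per_loss=1.0):
    """Toy model of the vote-rebalancing rule: voters on the losing
    side of a bill accumulate extra voting weight for future votes.

    Returns how many failed votes it takes before a bloc of `minority`
    voters, each starting with weight 1, outvotes a static `majority`
    bloc whose weight never grows. (Illustrative parameters only.)
    """
    weight = 1.0
    rounds = 0
    while minority * weight <= majority:
        weight += gain_per_loss  # the losing side gains weight each round
        rounds += 1
    return rounds

# A 10-voter bloc facing 90 opponents passes its bill after 9 failed
# votes under these toy parameters, however bad the bill is:
print(rounds_until_minority_wins(10, 90))  # → 9
```

The point of the sketch is only that, if losing reliably increases weight, a persistent bloc eventually wins regardless of the merits of its proposal.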
The increased weight given to more successful predictors seems wise. While this might make the policy a harder sell (it may seem less democratic), it also ensures that the system can focus on learning from those best able to make good decisions. It’s interesting that you’re combining this (a meritocratic element) with the vote re-balancing (an egalitarian element). One could imagine this leading to a system of carefully looking to the best forecasters while valuing the desires of all citizens; this might be an excellent outcome.
An obvious concern is people giving dishonest forecasts in an effort to more effectively sway policy. While this is somewhat disincentivized by the penalties to one’s forecaster rating if the bill is passed, and by the uncertainty about which bills may pass (as you address in the article), I suspect more incentive for honesty is needed. Dishonest forecasting, especially predicting poor results to try to kill a bill, remains tempting, especially for voters with one or two pet issues. If someone risks credibility on other issues but successfully shoots down a bill on their favorite hot-button issue, they may well consider the result worth it.
Finally, there is the question of what happens when the entire electorate can affect policy directly. In contemporary representative democracy, the only power of the voters is to select a politician, typically from a group that has been fairly heavily screened by various status requirements. While giving direct power to the people might help avoid much of the associated corruption and wasteful signalling, it risks giving increased weight to people without the requisite knowledge and intelligence to make good policy.
Possibility: if panspermia is correct (the theory that life is much older than Earth and has been seeded on many planets by meteorite impacts), then we might not expect to see other civilizations advanced enough to be visible yet. If evolving from the first life to roughly human levels takes around the current lifetime of the universe, rather than of the Earth, not observing extraterrestrial life shouldn’t be surprising! Perhaps the strongest evidence for this is that the number of codons in observed genomes over time (including as far back as the Paleozoic) increases on a fairly steady exponential trend (linear on a log scale), which extrapolates back to well before the formation of the Earth.
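The extrapolation itself is simple arithmetic. The parameters below are illustrative values I’ve picked for the sketch, not the published fit:

```python
import math

def origin_time_gyr(log_now, log_origin, doubling_time_gyr):
    """Extrapolate an exponential genome-growth trend back to its start.

    log_now: log10 of present-day functional genome size (base pairs)
    log_origin: log10 of the assumed starting size (a minimal replicator)
    doubling_time_gyr: doubling time in billions of years (illustrative)
    Returns: billions of years before present at which the trend
    reaches log_origin.
    """
    doublings = (log_now - log_origin) / math.log10(2)
    return doublings * doubling_time_gyr

# Illustrative numbers: ~10^8.5 bp of functional genome today, doubling
# roughly every 0.34 Gyr, starting from a handful of base pairs.
print(origin_time_gyr(8.5, 0.5, 0.34))  # ≈ 9 Gyr, well before Earth formed
```

With anything like these numbers, the trend line crosses “minimal replicator” billions of years before the Earth’s ~4.5 Gyr age, which is the panspermia-friendly result the argument relies on.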
What do you mean by moral facts? It sounds in context like “ways to determine which values to give precedence to in the event of a conflict.” But such orders of precedence wouldn’t be facts, they’d be preferences. And if they’re preferences, why are you concerned that they might not exist?
This is exactly the kind of learning and flexibility that we’re trying to get better at here. There’s not much to say beyond congratulations, but it’s still worth saying.
The MtG article is called Stuck in the Middle With Bruce by John Rizzo. Not sure how to link, but it’s
The article is worth your time, but if you want a summary: there appears to be a part of many people’s minds that wants to lose. And often winning is as much a matter of overcoming this part of you (which the article terms Bruce) as it is overcoming the challenges in front of you.
Humans are made of atoms that are not paperclips. That’s enough reason for extinction right there.
It’s an evolved predisposition, but does that make it a terminal value? We like sweet foods, but a world that had no sweet foods because we’d figured out something else that tasted better doesn’t sound half bad! We have an evolved predisposition to sleep, but if we learned how to eliminate the need for sleep, wouldn’t that be even better?
Yes. I wouldn’t be surprised if this happened in fact.
The strongest argument that an upload would share our values is that our terminal values are hardwired by evolution. Self-preservation is common to all non-eusocial creatures, curiosity to all creatures with enough intelligence to benefit from it. Sexual desire is (more or less) universal in sexually reproducing species; desire for social relationships is universal in social species. I find it hard to believe that a million years of evolution would change our values that much when we share many of our core values with the dinosaurs. If Maiasaura could have recognizable relationships 76 million years ago, are those going out the window in the next million? It’s not impossible, of course, but shouldn’t it seem pretty unlikely?
I think the difference between us is that you are looking at instrumental values, noting correctly that those are likely to change unrecognizably, and fearing that that means that all values will change and be lost. Are you troubled by instrumental values shifts, even if the terminal values stay the same? Alternatively, is there a reason you think that terminal values will be affected?
I think an example here is important to avoid confusion. Consider Western secular sexual morals vs. Islamic ones. At first glance, they couldn’t seem more different. One side is having casual sex without a second thought; the other is suppressing desire with full-body burqas and genital mutilation. Different terminal values, right? And if there can be that much of a difference between two cultures in today’s world, with the Islamic model seeming so evil, surely values drift will make the future beyond monstrous!
Except that the underlying thoughts behind the two models aren’t as different as you might think. A Westerner having casual sex knows that effective birth control and STD countermeasures mean that the act is fairly safe. A sixth-century Arab doesn’t have birth control and knows little of STDs beyond that they preferentially strike the promiscuous; desire is suddenly very dangerous! A woman sleeping around with modern safeguards is just a normal, healthy person doing what they want without harming anyone; one doing so in the ancient world is a potential enemy willing to expose you to cuckoldry and disease. The same basic desires we have to avoid cuckoldry and sickness motivated them to create the horrors of Shari’a.
None of this is intended to excuse Islamic barbarism. Even in the sixth century, such atrocities were a cure worse than the disease. But it’s worth noting that their values are a mistake much more than a terminal disagreement. They’re thinking of sex as dangerous because it was dangerous for 99% of human history, and “sex is bad” is an easier meme to remember and pass on than “sex is dangerous because of pregnancy risks and disease risks, but if at some point in the future technology should be created that alleviates the risks, then it won’t be so dangerous”, especially for a culture to which such technology would seem an impossible dream.
That’s what I mean by terminal values: the things we want for their own sake, like both health and pleasure, which are all too easy to confuse with the often misguided ways we seek them. As technology improves, we should be able to get better at clearing away the mistakes, which should lead to a better world by our own values, at least once we realize where we were going wrong.
The values you’re expressing here are hard for me to comprehend. Paperclip maximization isn’t that bad, because we leave a permanent mark on the universe? The deaths of you, everyone you love, and everyone in the universe aren’t that bad (99% of the way from extinction that doesn’t leave a permanent mark to flourishing?) because we’ll have altered the shape of the cosmos? It’s common for people to care about what things will be like after they die for the sake of someone they love. I’ve never heard of someone caring about what things will be like after everyone dies. Do you value making a mark so much even when no one will ever see it?
“...our descendants 1 million years from now will not be called humans and will not share our values. I don’t see much of a reason to believe that the values of my biological descendants will be less ridiculous to me, than paperclip maximization.”
That depends on what you value. If we survive and have a positive singularity, it’s fairly likely that our descendants will have fairly similar high level values to us: happiness, love, lust, truth, beauty, victory. This sort of thing is exactly what one would want to design a Friendly AI to preserve! Now, you’re correct that the ways in which these things are pursued will presumably change drastically. Maybe people stop caring about the Mona Lisa and start getting into the beauty of arranging atoms in 11 dimensions. Maybe people find that merging minds is so much more intimate and pleasurable than any form of physical intimacy that sex goes out the window. If things go right, the future ends up very different, and (until we adjust) likely incomprehensible and utterly weird. But there’s a difference between pursuing a human value in a way we don’t understand yet and pursuing no human value!
To take an example from our history: how incomprehensible must we be to cavemen? No hunting or gathering; we must be starving to death. No camps or campfires; surely we’ve lost our social interaction. No caves; poor homeless modern man! Some of us no longer tell stories about creator spirits; we’ve lost our knowledge of our history and our place in the universe. And some of us no longer practice monogamy; surely all love is lost.
Yet all these things that would horrify a caveman are the result of improvement in pursuing the caveman’s own values. We’ve lost our caves, but houses are better shelter. We’ve lost Dreamtime legends, Dreamtime lies, in favor of knowledge of the actual universe. We’d seem ridiculous, maybe close to paperclip-level ridiculous, until they learned what was actually going on, and why. But that’s not a condemnation of the modern world, that’s an illustration of how we’ve done better!
Do you draw no distinction between a hard-to-understand pursuit of love or joy, and a pursuit of paperclips?