Please, officer, don’t arrest me today. I did the zero-ethics version last week; this is my control week, and you would ruin my experiment.
I have high enough karma that my vote strengths (3 for weak and 10 for strong) are pretty identifiable, so I have to think more about the social implications.
I think the other comments show that you are not that identifiable.
Having to decide between a strong and a weak vote.
Just always do the weak vote and don’t think about it.
These things are tricky. Humans are good at self-deception, so it is easy to simply do whatever is convenient for me, and invent a story about how doing this is by coincidence also the best way to help everyone else.
(“Why would I send money to fight malaria? If I buy the latest iPhone instead, I am pretty sure some of those components are made in Africa, or at least some minerals are mined there, so I am creating jobs for people who can then spend the extra income on anti-malaria nets. This is even better, because their income is sustainable.” Ignoring the fact that if I send $1000 to an effective charity, it means $1000 worth of anti-malaria nets, while spending $1000 on an iPhone means that less than $1 ends up in the hands of someone who would need such a net.)
On the other hand, reversed selfishness is not philanthropy. Focusing on not having any personal benefit means avoiding all win/win solutions; which is really bad, because these are likely more sustainable than the alternatives. This is about signaling virtue, perhaps to oneself. (By choosing the option that gives you no personal benefit, you send a costly signal that you are not motivated by the personal benefit in the first place.)
If you can’t trust yourself, perhaps you should seek the opinion of people whose judgment you respect. Yes, even that has the same problem on a higher level—depending on which conclusion you want to reach, you will be motivated to ask different people—but at least it is not under your direct control; they may surprise you.
But ultimately, I think the answer is: choose the best option that is sustainable for you. You may need some experimenting to find out exactly what that is. Also, the answer may change later.
Alternative hypothesis: maybe what expands your time horizon is not exercise and meditation per se, but the fact that you are doing several different things (work, meditation, exercise), instead of doing the same thing over and over again (work). It probably also helps that the different activities use different muscles, so that they feel completely different.
This hypothesis predicts that a combination of e.g. work, walking, and painting, could provide similar benefits compared to work only.
Reality comes first; our perceptions of it only come later. So if whatever you currently believe to be true already is true, adding extra intelligence should not change it.
But if whatever you currently believe to be true is actually false, there is likely some evidence of that, and the increased intelligence could give you more ideas about where to look for it.
For example, if aliens don’t exist, increasing your IQ to 500 will not magically conjure them. It can only allow you to better evaluate the existing knowledge and design smarter experiments… but in the end, the conclusion will be the same. Only now you will know that you believe what you believe for better reasons than before, because now you have also checked X, Y, and Z.
There is no economic reason to optimize for your happiness if you can’t easily switch to a competing platform. Maybe unhappy people click more ads, who knows… But for the sake of the thought experiment, let’s assume that our corporate overlords are benevolent.
Could this feature be somehow abused? The first idea that comes to my mind: imagine that I hate you and decide to spread some nasty rumors about you. Let’s assume that we are already connected as “friends”. So I use this feature to lite-block you. Because you wanted plausible deniability, our mutual friends would still see that we are friends. They will see what I write about you, but you won’t, so you won’t be able to react. Meanwhile, they will assume that you saw it, and could interpret your silence as consent.
In less personal situations, this could be used to make someone look bad by association. Imagine a politician or a celebrity who “friends” tons of people without thinking twice. So I make a hundred fake accounts, use them all to “friend” my target, have all of them lite-block the target, and start posting some sort of bad stuff. Now when anyone else looks at the target, they will think, “uh, this guy has a lot of Nazi friends”. Only the target will not see anything bad during their everyday use of the platform.
Maybe these are not the most convincing examples, but generally it seems bad to me that the functionality you want messes with another person’s view of the site without them being aware of it. That feels like something that can be abused, so we have to consider how a bad actor would abuse it.
This would be nice. But in practice I don’t see the audience being split along many dimensions; rather, the differences are shoehorned into sex/gender, sexual orientation, and race (e.g. insisting that “Muslim” is a race). In a social justice debate, someone with Asperger’s is more likely to be called an asshole than accepted as a disadvantaged minority. Also, the dimension of wealth vs. poverty is often suspiciously missing.
If you are a benevolent dictator, it would be better to simply have two supermarkets—one with music and one without—and let everyone choose individually where they prefer to shop, instead of dividing people into categories, assigning the categories to shops, then further splitting the categories into subcategories, etc. But that means treating people as individuals, not as categories. Specifically, trying to help people by helping categories is an XY problem (you end up taking resources from people at the bottom of the “advantaged” categories and giving them to people at the top of the “disadvantaged” categories; for example, Obama’s daughters would probably qualify for a lot of support originally meant for poor people).
Epistemically, social justice is a mixed bag, in my opinion. Some good insights, some oversimplifications. Paying attention to things one might regularly miss, but also evolving its own set of stereotypes and dogmas. It is useful as yet another map in your toolbox, and harmful when it’s the only map you are allowed to use.
Do I understand it correctly that in Chess and Go, DeepMind seems capable of strategic thinking in a way it cannot manage in StarCraft II? If yes, how would Chess/Go need to be changed to produce the same problem?
Is it just a quantitative thing, e.g. you would make a Chess-like game played on a 1000×1000 board with thousands of units, and the AI would become unable to find strategies such as “spend a few hundred turns preparing the hundreds of your bishops into this special configuration where each of them is protected by hundred others, and then attack the enemy king”?
Or would you rather need rules like “if you succeed in building a picture of Santa Claus from your Go stones, for the remainder of the game you get an extra move every 10 turns”? Something that cannot be done halfway, because doing it halfway would only have costs and no benefits, so you can’t discover it by observing your imaginary opponents, because you imagine your opponents as doing reasonable things (i.e. what you would have done) with some noise introduced for the purpose of exploration.
Perhaps in the ancestral environment, dating advice tended to be unhelpful, if not outright sabotage from the competition.
Perhaps this is true for most dating advice today, too.
Yes, there is also some good advice out there, but the problem is that if you can distinguish good advice from bad advice, you probably don’t need the advice anymore.
Polgár was an awesome parent, but I believe he seriously underestimated (in fact, completely dismissed) the effect of IQ. He should have checked his genetic privilege. On the other hand, it seems like the “hundred Einsteins” experiment could still work if you started with kids over e.g. IQ 130 (or kids of parents with high IQ, so you can start the interventions early without worrying about measuring IQ at a very young age). Two percent of the population; that’s still a lot, in absolute numbers.
Unfortunately, I am not a billionaire, so my enthusiasm about this project is irrelevant.
The missionaries will not travel in geography-space, but in subculture-space.
For a mostly online movement, the important distances are not the thousands of miles, but debating on different websites, having different conferences, etc. (Well, the conferences have the geographical aspect, too.)
Thank you, this is very interesting!
Seems to me the most important lesson here is “even if you are John von Neumann, you can’t take over the world alone.”
First, because no matter how smart you are, you will have blind spots.
Second, because your time is still limited to 24 hours a day; even if you decided to focus on the things you have been neglecting until now, you would have to start neglecting the things you have been focusing on until now. Being better at poker (converting your smartness to money more directly), living healthier and therefore on average longer, developing social skills, and being strategic about gaining power… would perhaps come at the cost of not having invented half of the stuff. When you are John von Neumann, your time has insane opportunity costs.
If I take this literally, it should be relatively simple to generate hundreds of Einsteins or hundreds of John von Neumanns. I mean, taking a hundred random healthy kids and giving them the best education should be easily within the power of any larger country.
Actually, any billionaire could easily do it, and there could even be a financial incentive to do so: offer these new Einsteins a job when they finish their studies. (They would probably be happy to work alongside the other Einsteins.)
Fair points. My comment was more a result of years (looking at the “kensho” article, yep, it’s already two years) of accumulated frustration, than anything else. Sorry for that.
From my perspective, the skepticism seems surprisingly mild. Imagine a parallel reality where a CFAR instructor instead says he found praying to Jesus really helpful… in ways that are impossible to describe other than by analogy (“truly looking at Jesus is like finally looking up from your smartphone”) and claims that Jesus helps him improve CFAR exercises or understand people. -- I would have expected a reaction much stronger than “your description does not really help me to start the dialog with Jesus”.
Interestingly, clone of saturn’s comment in that debate seems like a summary of the PNSE paper:
If you think of your current level of happiness or euphoria (to pick a simple example) as the output of a function with various inputs, some of these inputs can be changed through voluntary mental actions that similarly can’t be directly explained in words and aren’t obvious. Things like meditating long enough with correct technique can cause people to stumble across the way to do this. Some of the inputs can be changed about as easily as wiggling your ears, while others can be much more difficult or apparently impossible, maybe analogous to re-learning motor functions after a stroke.
I may be misremembering things I have read on Slate Star Codex as having read them on Less Wrong. (I wonder how to fix this. Should I keep a bookmark every time something rubs me the wrong way, so that when it happens a hundred times I can document the pattern?)
By the way, I don’t think the problem with explaining meditation/enlightenment/Buddhist stuff is going to go away soon. Like, there are entire countries that have practiced this stuff for a thousand years, and… they have a hundred schools that disagree with each other, and also nothing convincing to show. A part of that is because communicating about inner content is difficult, but I believe a significant part is that self-deception is involved at some level. I don’t believe that a brain as described in The Elephant in the Brain simply gets more accurate insights by doing lots of introspection regularly. (Note that in the traditional setting, those insights include remembering your previous lives. Even if no one in the rationalist community buys the part about the previous lives, they still insist that the same process—which led other people to remembering their previous lives—leads to superior insights.)
This will depend on details, such as: if you double someone’s lifespan, have you effectively increased their middle age, or their old age?
If it’s all middle age, then it will be in the interest of our overlords to make us live longer, so that we can be productive longer. Salaries would probably go down, because now you have more time to pay your mortgage (and you are competing on the market with other people who also have more time to pay their mortgages). Also, no one is impressed with your 30 years of experience in a given industry, because that’s average among your competitors.
Education will take longer, because people will signal their quality by taking out bigger loans (they now have more time to pay them back) and staying in school longer. A mere PhD will only get you a job flipping hamburgers at McDonald’s.
Death of society’s “old guard” may be serving a useful purpose by destroying calcified institutions and ideas, allowing better ones to bloom.
Somehow I expect this reasoning will only be applied selectively to the poor. (Yes, that includes the middle class.)
In summary, I expect that society will support those forms of anti-aging that prolong the productive years. Which is not bad, because it means more years of health. Just don’t expect to be the one who benefits most from your longer life; you will spend most of the extra time in the workplace, working more and receiving less. Enjoy your college years, though; those will be the best 30 years of your life!
As usual, all gains will be captured by land owners.
I am happy that someone finally brought into rationalist community some skepticism about meditation, in a way that won’t get dismissed as “nah, u jelly, cos u have no jhanas, u full of dukkha and need some metta to remove your bad karma, bro.”
I was already getting quite nervous about the lack of skepticism. Especially in a community that used to dismiss not only all religion and supernatural claims, but also all kinds of mysterious answers such as quantum woo or emergence… and suddenly went like “look, here is a religion that is totes different, because it’s from the opposite side of the planet, and here is a religious practice that has all benefits and no problems, let’s do it every day” and everyone seems to jump on the bandwagon, and then people start using words from foreign languages and claim to have mysterious experiences that are in principle incommunicable to mere muggles… and I’m like “what the fuck, is this still the Less Wrong I used to know, or have these people been kidnapped and brainwashed?”
To answer your question: if they have been successfully Dunning-Kruger’ed, they’ll probably just say: “nope, I have an unmediated direct perception of reality, and I know it’s all okay”. Also, if there is any problem with enlightenment, obviously the people Scott mentions have not been truly enlightened.
Truly contrarian position: Should be restricted to uncivil speech.
(No, I don’t actually hold this opinion. But I imagine that an interesting movie could be made using it.)
I suppose “minimax policy” is shorthand for “assume that your human partner is a complete idiot just clicking things randomly, or worse; choose the course of action that prevents them from causing the worst possible disaster; if you afterwards still have some time and energy left, use them to gain some extra points”.
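That maximin reading can be sketched in a few lines. The payoff table and all numbers below are invented for illustration; this is not anyone’s actual agent, just the decision rule the quote describes:

```python
# Toy sketch of the maximin reading above: pick the action whose worst-case
# outcome (over all possible partner moves) is least bad, breaking ties by
# the best case ("extra points" if the partner happens to cooperate).
# All payoff numbers are made up for illustration.

payoffs = {
    # our_action -> {partner_action: joint score}
    "safe":  {"good": 3, "random": 2, "worst": 2},
    "risky": {"good": 9, "random": 1, "worst": -10},
}

def minimax_policy(payoffs):
    # Assume the partner picks whatever hurts us most...
    worst_case = {a: min(outcomes.values()) for a, outcomes in payoffs.items()}
    # ...choose the action with the least bad worst case;
    # among equally safe actions, prefer the higher upside.
    return max(payoffs, key=lambda a: (worst_case[a], max(payoffs[a].values())))

print(minimax_policy(payoffs))  # "safe": risky's possible -10 outweighs its upside
```

Here “risky” would be better with a competent partner, but the policy refuses to gamble on that.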
I welcome our condescending AI overlords, because they are probably the best we can realistically hope for.
I once talked about this with a guy who identified as a Marxist, though I can’t say how representative his opinions are of the rest of his tribe. Anyway… he told me that in the trichotomy of Capital / Land / Labor, human talent is economically most similar to the Land category. This is counter-intuitive if you take the three labels literally, but consider their supposed properties… well, it’s been a few decades since I studied economics, but roughly:
The defining property of Capital is fungibility. You can use money to buy a tech company, or an airplane factory, or a farm with cows. You can use it to start a company in USA, or in India. There is nothing that locks money to a specific industry or a specific place. Therefore, in a hypothetical perfectly free global market, the risk-adjusted profit rates would become the same globally. (Because if investing the money in cows gives you 5% per annum, but investing money in airplanes gives you 10%, people will start selling cow farms and buying airplane factories. This will reduce the number of cow farms, thus increasing their profit, and increase the competition in the airplane market, thus reducing their profit, until the numbers become equal.) If anything is fungible in the same way, you can classify it as Capital.
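The equalization argument above can be illustrated with a toy simulation. The sector names and all numbers are invented, and the model is deliberately crude: each sector’s return simply falls as more capital crowds into it.

```python
# Toy illustration (made-up numbers) of return equalization: capital flows
# toward the higher-return sector, which bids that sector's return down
# until risk-adjusted returns match.

capital = {"cow farms": 100.0, "airplane factories": 100.0}
base = {"cow farms": 5.0, "airplane factories": 10.0}  # sector "fertility"

def returns(capital):
    # Diminishing returns: more capital chasing a sector lowers its yield.
    return {s: base[s] / capital[s] for s in capital}

for _ in range(10000):
    r = returns(capital)
    lo, hi = sorted(capital, key=lambda s: r[s])
    capital[lo] -= 0.01   # investors sell the low-return sector...
    capital[hi] += 0.01   # ...and buy the high-return one

r = returns(capital)
# Returns converge to roughly 0.075 in both sectors,
# with capital settling near 66.7 (cows) vs 133.3 (airplanes).
```

Note that the sectors end up with different amounts of capital but the same rate of return, which is the point of the fungibility argument.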
The archetypal example of Labor is a low-qualified worker, replaceable at any moment by a random member of the population. That also means that in a free market, all workers would get the same wage; otherwise the employers would simply fire the more expensive ones and replace them with cheaper ones. However, unlike money, workers are typically not free to move across borders, so you get different wages in different countries. (You can’t build a new factory in the middle of the USA and move ten thousand Indian workers there to work for you. You could do it the other way round: move the money, and build the factory in India instead. But if there are reasons to keep the factory in the USA, you are stuck with American workers.) Within a country, though, it means that as long as a fraction of the population is literally starving, you can hire them for the smallest amount of money they can survive on, which sets the equilibrium wage at that level. Those starving people won’t say no, and anyone who wants to be paid more will be replaced by those who accept the lower wage. Hypothetically, if you had more available job positions than workers, the wages would go up… but according to Malthus, this lucky generation of workers would simply have many kids, which would fix this exception in the next generation. -- Unless the number of job positions for low-qualified workers can keep growing faster than the population. But even in that case, the capitalists would probably successfully lobby the government to fix the problem by letting many immigrants in; somewhere on the planet, there are enough starving people. Also, if working people are paid just as much as they need to survive, they can hardly save money, so they can’t get out of this trap.
Now, the category of Land contains everything that is scarce, so it usually goes to the highest bidder. But no matter how much rent you get for the land, you cannot use the rent to generate more of it. So in the long term, land will get even more expensive, and a lot of the increased productivity will be captured by the land owners.
From this perspective, being born with an IQ 200 brain is like having inherited a gold mine, which would belong to the Land category. Some people need you for their business, and they can’t replace you with a random guy on the street. The number of potential jobs for IQ 200 people exceeds the number of IQ 200 people, so the employers must bid for your brain. But it is different from land in the sense that it’s you who has to work using your brain; you can’t simply rent your brain to a factory and let some cheap worker operate it. Perhaps it would be equivalent to a magical gold mine that only the owner can enter, so if he wants to profit from owning it, he also has to do all the work. Nonetheless, he gets extra profit from the fact that he owns the gold mine. So it’s like he offers the employer a package consisting of his time + his brain. And his salary can be interpreted as consisting of two parts: the wage, for the time he spends using his brain (numerically equivalent to what a worker would get for working the same amount of time), and the rent for the brain, that is, the extra money compared to the worker. (For example, suppose that workers in your country are paid $500 monthly and software developers are paid $2000 monthly. That would mean that for an individual software developer, $500 is the wage for his work, and $1500 is the rent for using his brain.) That means extraordinarily smart employees are (smaller) part working class and (greater) part rentier class. They should be reminded that if, one day, enough people become equally smart (whether through eugenics, genetic engineering, selective immigration, etc.), their income will also drop to the smallest amount of money they can survive on.
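With the numbers from that example, the decomposition is just a subtraction:

```python
# The two-part salary decomposition, using the $500/$2000 figures
# from the example above.
worker_wage = 500        # monthly pay for replaceable labor
developer_salary = 2000  # monthly pay for the developer

wage_part = worker_wage                      # payment for the time worked
rent_part = developer_salary - worker_wage   # "rent" on the scarce brain
print(wage_part, rent_part)  # 500 1500
```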
As I said, no idea whether this is an orthodox or a heretical opinion within Marxism.