It has less benign forms. Governments and other bandits look for wealth and take it. Sometimes those bandits are your friends, family and neighbors. A little giving back is a good thing, but in many cultures demands for help and redistribution rapidly approach 100% – life is tough, and your fellow tribe members, or at least family members, are endless pits of need, so any wealth that can be given away must be hidden if you want to remain in good standing. Savings, security and investment in anything but status are all but impossible. There is no hope for prosperity.
I’m not sure how literally I should interpret this part. Governments and systems seem to be trending toward taxing poverty more than wealth: past a certain level of wealth you definitely pay less per dollar earned than someone who’s poor, even counting official taxes alone.
Poor people do seem forced to dissipate any extra wealth they accumulate through social obligations, and the point definitely seems to hold for slack and status purchases; I’m just puzzled by the government part.
Characters often want change as part of their role. And just as importantly, their role often requires that they can’t achieve that change. The tension between craving and deprivation gives birth to the character’s dramatic raison d’être. The “wife” can’t be as clingy and anxious if the “husband” opens up, so “she” enacts behavior that “she” knows will make “him” close down. “She” can’t really choose to change this because “her” thwarted desire for change is part of “her” role.
I’m conflicted about drawing this kind of conclusion from people’s behaviour; it opens a door that lets you interpret anything any way you like.
A simpler explanation is that if a “wife” knows how to interact with the “husband” in a way that causes him to open up and talk about what’s happening, then the conflict gets resolved and you’re no longer observing a clingy, anxious “wife”.
It’s actually hard to communicate openness while you’re feeling anxious and clingy, so you’d expect to see a lot of people acting in ways that “discharge” their anxiety rather than fix their problem. You don’t need to go as far as postulating that they’re acting like this “on purpose”.
Even if the “wife” is clearly following a stereotypical script, it might just be that “she” has no clue what else could be done about “her” situation. “She” could simply be assuming that nagging the “husband” until it finally works is the correct way to face the problem. Yes, “she” would likely feel nervous and lost considering the option of going off script and trying something else, and would avoid it for that reason. But people have been using “punishments” in contexts where they have no hope of working for countless millennia now, and there’s no reason to assume everyone secretly wants the target to persist in the unwanted behaviour so they can punish him some more.
There are other circumstances where simpler explanations are harder to draw, and then you can start to wonder whether there is this kind of “purpose” in someone’s actions. Self-sabotage is definitely a real thing, sometimes. But I think you’re safer going with the simplest explanation first, because in psychology “secret reasons” can be used to explain everything.
Aside from this, the post was really good and insightful. It got me thinking about what roles I’m being pushed into and where I’m pushing my friends.
I often notice that people I know assume I’m the rational one of the group, to the point of expecting me to commit the stereotypical mistakes of someone who follows Hollywood rationality. I always found this weird as hell, because 1) in other contexts it’s basically a meme that I’m really genre-savvy (for example, I DM games for the group, and people habitually worry about at least the first four or five levels of subversions and recursions in my twists and plots), so I thought they’d realise I’d have seen the possibility of the obvious clichéd mistake coming, and 2) I never showed any hint of such behaviour and regularly do the opposite. But I guess it makes more sense now.
My role, according to them, is to be incredibly devious and intelligent, do the non-supervillain equivalent of catching the hero in my devious four-levels-of-deception trap, and then screw up something obvious, like leaving him unattended to free himself, falling to my own hubris, or insert clichéd genius mistake x, so that the “balance” between intelligence and heart is reaffirmed.
Given what I’ve actually seen of people’s psychology, if you want anything done about global warming (like building 1000 nuclear power plants and moving on to real problems), then, yes, you should urge people to sign up for Alcor.
I realise this is a 13-year-old post, but please don’t dismiss global-scale problems with the first idea that comes to mind, without doing serious research first; your opinion is (to say the least) really respected on this site, and lots of people will assume you were right about it.
According to IPCC data from 2014, electricity and heat production accounts for a mere 35% of global emissions (total, counting all associated costs). Even if we convinced everyone to switch to electric cars and transport AND to electric heating, which would not be trivial at all, we’d have curbed emissions by a total of 55%.
https://www.ipcc.ch/site/assets/uploads/2018/02/SYR_AR5_FINAL_full.pdf (page 102)
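To make the back-of-envelope arithmetic explicit, here is a minimal sketch; note that the 35% figure comes from the comment above, while the split of the remaining ~20% between transport and direct heating is my own assumed illustration, not an official IPCC category breakdown:

```python
# Back-of-envelope check of the shares quoted above. The 35% figure for
# electricity and heat is the one cited from IPCC AR5; the transport and
# direct-heating shares below are ASSUMED values chosen to illustrate how
# one might reach the quoted ~55% total, not official IPCC categories.
electricity_and_heat = 0.35  # share of global emissions (incl. associated costs)
transport = 0.14             # assumed: share that could plausibly be electrified
direct_heating = 0.06        # assumed: on-site fuel burning for heat

# Even if all three ran on fully decarbonised electricity, roughly 45% of
# emissions (industrial processes, agriculture, land use...) would remain.
max_reduction = electricity_and_heat + transport + direct_heating
remaining = 1 - max_reduction
print(f"addressed: {max_reduction:.0%}, untouched: {remaining:.0%}")
```

The point of the sketch is just that full electrification, however the remainder splits, still leaves nearly half of global emissions untouched.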
Also according to IPCC data, a nuclear phase-out would add a 7% cost to what it would take to stop climate change, while each year wasted between 2014 and 2030 by delaying action increases the cost by roughly 3%. Of course, that is due to the low prevalence of nuclear power as an energy source, but it still goes to show that nuclear energy is far from being the vault key here. (Same link as above, page 41.)
If you could persuade everyone to build 1000 nuclear plants and switch to electric cars and electric heating, then you’d also be able to solve the problem in a dozen other ways.
I agree with everything else in the post, and that there are worse problems than climate change (though my guess is that, if botched, it would still increase existential risk by at least 5%, mostly by increasing the likelihood of someone botching AGI).
Can anyone suggest good background reading material to understand the technical language and background knowledge of this post and, more generally, of decision theory?
I’m puzzled that a really effective activism post, one that manages to get me to commit to giving 10% of my income to charity, says that activism and spreading the cause isn’t an effective way to get things done.
I also think protesting can buy a lot more political shift for a cause than the average hourly pay of the participants. Millions of protesters seem to shift the political landscape a lot more than tens of millions of dollars spent on lobbying and ads.
I shouldn’t pretend I’m worried about this for the sake of the poor. I’m worried for me.
At this point I should just ask in a poll whether there’s a level of intelligence at which you eventually stop worrying about whether you could ever catch up to the level above your own.
Maybe if you were literally the highest-IQ person in the entire world you would feel good about yourself, but any system where only one person in the world is allowed to feel good about themselves at a time is a bad system.
Well, that’s fricking encouraging.
This was amazingly good.
On a side note:
But things that work from a god’s-eye view don’t work from within the system. No individual scientist has an incentive to unilaterally switch to the new statistical technique for her own research, since it would make her research less likely to produce earth-shattering results and since it would just confuse all the other scientists. They just have an incentive to want everybody else to do it, at which point they would follow along. And no individual journal has an incentive to unilaterally switch to early registration and publishing negative results, since it would just mean their results are less interesting than that other journal who only publishes ground-breaking discoveries. From within the system, everyone is following their own incentives and will continue to do so.
You can, as an individual scientist, start praising and giving status to any other scientist who follows stricter guidelines than the average, and comment negatively on any scientist using guidelines laxer than both the average and your own. Eventually the really lax scientists stop having an edge, the slightly stricter scientists gain one, and the standards in the field move up.
It doesn’t require simultaneous coordination, and it’s a rule of thumb any scientist can adopt without harming their own fitness too much.
This was pretty interesting, and pretty different from the kind of content you usually find on LessWrong.
I often see arguments against “spontaneous inconvenient moral behaviour”, such as worrying about how to kill the ants infesting your house or whether to stop eating meat, that advocate replacing these behaviours with more effective planned ones, but I don’t really think the former prevent the latter.
Suggesting that someone at home should stop thinking about how to humanely get rid of ants, start working for an extra hour, and donate the overtime money to an ant-welfare charity isn’t a feasible model, since most people don’t have a job where they can take a spare hour whenever they want and convert it into extra money. You’d be converting “fun time” into “care for the ants time”.
Thinking about how to produce charity or moral value more effectively is certainly a good idea; 15 minutes of your time can easily improve the charity you’ll output over the next years tenfold or more, with no real drawback. But the kind of “moral rigour” that gets demanded when someone wants to contest a behaviour they don’t want to adopt is usually the level of rigour that would require dropping your career, working on friendly AI full time, and donating to friendly-AI research every material possession you don’t think is needed to keep your productivity high.
You’ll need a Schelling point about morality if you don’t want to donate everything you value to friendly-AI research (if you do want to, I certainly won’t try to stop you). At some point you have to say “screw it, I’ll do this less effective thing instead because I want to”, and this Schelling point will likely include a lot of spontaneous behaviours you care about that are also ineffective.
Also, the way some critiques try to evaluate non-human lives doesn’t really make sense. I agree with a “humans > complex animals > simple animals” logic, but there should be some quantitative relation between the wellbeing of the groups. You can argue that you’d save a human over any number of cows, and I guess that can sort of make sense, but there should still be some amount of human pleasure you’d be willing to give up to prevent some amount of animal suffering, or you might as well give up on quantitative morality altogether.
If someone is suggesting a 1:1000 exchange of human pleasure for animal suffering, you can’t refuse it by arguing that you’d refuse a 10:10 exchange.
Inquire about the subjective vs objective duration of that millisecond. If there are no bad surprises there, pick torture before my mind can try to guess how badly it will hurt.
In torture vs dust specks I’d choose dust specks, provided they weren’t allowed to cause ripple effects and were guaranteed to be spread at one dust speck per human. There is a similar consideration here: the pain is packed into a time interval so small that it will be basically inconsequential (since he guaranteed that I won’t suffer lasting consequences; otherwise I’d fully expect such pain to fry my brain and have it melt out of my eyes or something).
I’m basically choosing to screw over my future self of that millisecond to protect all my other future selves.
Both decisions should work fine as long as I’m not approached by a large number of Pascal’s muggers; if that risks becoming a trend, I should review my decision theory.
For another human… I’d choose torture for the same reasons. If he chose torture I wouldn’t override it; I’d have emotional qualms about overriding his “death” decision, but I likely would.
The math of pain vs the pleasure of being alive would likely say my decisions are wrong, but I think the math stops helping in these limit cases; picking death strikes me as two-boxing against Omega (though I think the math there shows you’re right to one-box if you manage to take the backward causal link into account). You’ll be pretty glad you chose torture exactly one millisecond afterwards and for the rest of your life, and so will the stranger (unless he was suicidal, but it doesn’t seem I’m allowed to know that before picking).
I think the only… slight divergence of this situation from reality is that the bad guys have already figured most of this stuff out (though I doubt they did so explicitly).
There has been a lot of talk about how “the political divide has grown harsher than ever”, as if this kind of shift just happened through random cosmic variation.
What actually happens, invariably across different countries, is that the local “bad guy” wannabe grabs the loudest mic he can get and starts saying something absolutely hateful over and over, doing everything he can to poison the well and stop people from talking with each other, getting the two parties to yell insults at each other instead.
Pretty sure Democrats didn’t just go “hey, you know that Trump guy? For no real reason, I really hate him and his supporters way more than I hated Romney and his, even though I don’t perceive his communications as having taken a harsh shift away from democracy and basic human decency. Let’s abandon debate and go tell them what ignorant dumb faces they have”.
It’s a scarily effective trap, and a strong argument in favour of the tactic the post suggests.
And yes, I know “look, maybe we’d just better agree to sit down and talk politics civilly” isn’t a helpful proposal to make to a “bad guy supporter”, but I think it would be great if the discussion also ended up including an agreement on how wonderful it would be if we could just shun the next politician who tries to poison the well, no matter which party he’s from.
Creationists lie. Homeopaths lie. Anti-vaxxers lie. This is part of the Great Circle of Life. It is not necessary to call out every lie by a creationist, because the sort of person who is still listening to creationists is not the sort of person who is likely to be moved by call-outs. There is a role for organized action against creationists, like preventing them from getting their opinions taught in schools, but the marginal blog post “debunking” a creationist on something is a waste of time. Everybody who wants to discuss things rationally has already formed a walled garden and locked the creationists outside of it.
This was a very useful insight; I think I had realised it a while ago but hadn’t yet thought it explicitly.
Generally the post is pretty good. I think another key point of how civilisation evolved is that the “smarter than you” guy who goes “hey, I can refuse to play by the rules if I’m effective enough; this way I’ll get an even bigger advantage and be unstoppable. I’m just going to blitzkrieg these schmucks and take everything over” regularly gets ganged up on and beaten to the ground by everyone else.
Julius Caesar, Hitler, Napoleon, Genghis Khan, possibly Alexander the Great… the great conquerors who try to impose a new world order seem either to be beaten by an alliance of fed-up people or murdered if they don’t go down that way, and I think most of them honestly didn’t see how their “screw everything, I’ll just play to win” scheme could possibly backfire.
It seems that when someone goes “screw the rules”, humans tend to answer “well, screw you too”.
“Yeah, I can totally do my master’s thesis in six months, even if it involves examining a large database of newspaper articles by myself, inventing a methodology for analysing them that translates into quantitative data, inventing an observation grid for what people would usually treat as subjective evaluations, mapping and quantifying the business relationships between newspapers and other industries, and generally pushing past the methodological limits that prevented the studies I’d seen so far from quantitatively proving that there was in fact a relationship between newspapers’ ties to the fossil-fuel industry and their treatment of climate change in the news, all while knowing nothing about journalism studies or text analysis. No, my tendency to procrastinate on hard or unpleasant things I don’t know how to do won’t be a problem. Why do you ask?”
It took a bit less than one and a half years.
The more I read about simulated humans, the more I’m convinced that a hard ban on simulating new humans and duplicating existing ones is a key part of what differentiates sane futures from dystopias too horrible to even grasp and from hyper-existential failures, at least until we have aligned AI.
He’s even right that on utilitarian grounds, it’s hard to argue with an em era where everyone is really happy working eighteen hours a day for their entire lives because we selected for people who feel that way. But at some point, can we make the Lovecraftian argument of “I know my values are provincial and arbitrary, but they’re my provincial arbitrary values and I will make any sacrifice of blood or tears necessary to defend them, even unto the gates of Hell?”
I also think that if we don’t, we run fast into what we might call… Cenobitic Existential Failures? (Cenobites are the Hellraiser demons who see excruciating pain as the best thing in the universe.)
Or into a lot of very tiny people really happy about hydrogen atoms (or about working overtime).
I’d also strongly argue for making this stand before we select untold billions of people who don’t care whether they live or die, and they outcompete anyone who actually cares out of business.
Now take it even further, and imagine this is what’s happened everywhere. There are no humans left; it isn’t economically efficient to continue having humans. Algorithm-run banks lend money to algorithm-run companies that produce goods for other algorithm-run companies and so on ad infinitum. Such a masturbatory economy would have all the signs of economic growth we have today. It could build itself new mines to create raw materials, construct new roads and railways to transport them, build huge factories to manufacture them into robots, then sell the robots to whatever companies need more robot workers. It might even eventually invent space travel to reach new worlds full of raw materials. Maybe it would develop powerful militaries to conquer alien worlds and steal their technological secrets that could increase efficiency. It would be vast, incredibly efficient, and utterly pointless. The real-life incarnation of those strategy games where you mine Resources to build new Weapons to conquer new Territories from which you mine more Resources and so on forever.
Economic growth has stopped correlating with nearly all measures of population wellbeing in first-world nations. We are already more than halfway there, it seems.
I’d think that some of these alien civilisations would have figured it out in time: implanted everyone with neural chips that override any world-ending decision, kept technological discoveries above a certain level available only to a small fraction of the population or in the hands of aligned AI, or something.
An aligned AI definitely seems able to face a problem of this magnitude, and we’d likely either get that or botch it before reaching the technological level where any lunatic can blow up the planet.
How many of the experts in this survey are victims of the same problem? “Do you believe powerful AI is coming soon?” “Yeah.” “Do you believe it could be really dangerous?” “Yeah.” “Then shouldn’t you worry about this?” “Hey, what? Nobody does that! That would be a lot of work and make me look really weird!”
It does seem to be the default response of groups of humans to this kind of crisis. People have died in burning restaurants because nobody else got up to run.
“Why should I, an expert in this field, react to the existential risk I acknowledge as a possibility as if I were urgently worried, if all the other experts I know just continue with their research as always, and they know what I know? Clearly existential risk is no good reason to abandon routine.”
As in the Asch conformity experiment, where a single other dissenter was enough to break compliance with the consensus, perhaps the example of even a single person who acts coherently with the belief that the threat is serious, without coming across as weird, could break some of this apathy born of pluralistic ignorance. Such examples seem to have been one of the main factors in getting me to align my efforts with my beliefs about what threatens the future of mankind, twice so far.
This was a remarkably successful attempt to summarise the whole issue in one post; well done.
On a side note, I think that getting clever people to think as if in the shoes of a cold, amoral AI can be an effective way to persuade them of the danger. “What would you do if some idiot tried to make you cure cancer, but you had near omnipotence and didn’t really care one bit whether humans lived or died?” It makes people go from using their intelligence to argue why containment would work to using it to think about how containment could fail.
When I first met this subject in the Sequences, I tried asking myself what I would do as an unaligned AI. Most of my hopes for containment died within half an hour or so.
A common complaint about immigration is “they’re taking our jobs.” For a group whose primary asset is their ability to do labor, this seems pretty fair to characterize as “our resources are being appropriated,” and it’s easy to notice that many billionaires who are made better off by mass immigration support decreasing regulatory barriers to immigration.
[Of course, open borders seem like a good idea to economists, and billionaires are more likely to have economist-approved views on economic policy, so I don’t think this is just a ‘self-interest’ story; I just think it’s worth noticing that the same “disenfranchised group having their resources appropriated” story does in fact go through for those groups.]
Sorry, I guess I could have explained this part more clearly. I agree that groups like rural Brits and American Reds often believe in a narrative about some external power attacking and erasing them (the evil EU ruling council, billionaires engaged in philanthropy, etc.). My point was that the difference in the sympathy these groups receive from a third party is best explained:
1) by the third party’s belief in the existence of this external power. Most people criticising these groups would believe in China’s violations of human rights, but not in evil billionaires controlling immigration policy.
2) by the strategy these people adopt in defending their culture. If the Tibetans started harassing refugees from a war-torn country, I would sympathise with them less than I do with their current attempts to defend their traditions by simply practising them.
I feel like this is missing the core point of the article, which is that the “colonizer / colonized” narrative misses the transition from the ‘traditional cultures’ of Britain and America to universal culture. Why did universalism win in Britain and America? If it was because those places were torn apart in order to exploit the hell out of them, then the flavor of this analysis changes significantly.
First, I think a lot of universal culture actually comes straight from the “traditional cultures” of Britain and America; it’s just harder to see it as something non-universal since we grew up in it. I often feel a cultural barrier getting in the way of the conversation when I discuss certain subjects with Americans on this site, and I’m from Italy, so still within Western culture myself. It is, however, a complex subject, and debating exactly which is which would be pretty hard.
I also think it’s not clear what counts as the “traditional cultures” of these places. If we mean their cultural traditions from before industrialisation, then those were changed to better fit industrialisation’s requirements. Other Western countries industrialised as fast as they could because the first ones to do so were gaining military and economic supremacy over them.
Non-Western countries that weren’t fast enough to adapt, or didn’t have enough weapons to stave off those who were, got colonised, invaded and so on until they either managed to build up an industry and a military or were torn apart and exploited.
I’m generalising a bit, of course, but I think 90% of this “culture war” was actually a war of might. Industrialisation gives you an edge that everyone wants, so everyone either tries to copy it or is invaded and exploited until they do it anyway.
If nations didn’t have to compete for domination and freedom, I think a lot of them would have picked just some bits of the “universal culture” rather than the whole package, whether out of inertia or because some bits you can simply leave out and your population is better off. (I guess deciding whether that would have been better or worse would require weighing a lot of deaths and changes in quality of life. A lot of the costs will hit us in the face in the next years if they aren’t prevented, so the question would be left open anyway.)
The bits these nations would pick would usually be the “universal culture” that fits the post’s description, since those are the practices that win a fair fight between cultures. But the main driving factor in the expansion of these norms was the increased military and economic effectiveness that came with industrialisation, so we can’t really call Coca-Cola a universal winner: we have no idea how things would have gone in a purely cultural fight, since what we mostly saw was a military and economic one.
Human rights and democracy do seem like cultural universal winners. I gave it some thought and realised that, yes, a lot of places seem to have people who buy the whole “not being exploited by our local feudal overlords” idea once they hear the concept. Unfortunately, Coca-Cola itself and other… competitive spreaders had a few words against it in many of those places.
Also, other cultural practices have expanded peacefully within Western countries, but they are usually exported to other countries only as part of the whole industrialisation package, so it would be hard to name them universal winners.
There’s also the whole subject of mass communication media, which I think are pretty effective at overwhelming any culture with new content. I do hope Nazism and Fascism aren’t universal winners, and that they took over Germany and Italy because they had just found a way to be louder than anyone else for a while. The same thing can happen with McDonald’s or action movies or whatever.
This is a really tangled subject, so I guess I was more than a bit harsh in my comment, but missing the points I mentioned makes for a rather biased way to look at it.
To summarise: I think I understood the main idea of the article, and I’m interested in how exactly reality could be shaped to maximise the benefits of “true cultural universal winners” without erasing the parts of local culture that don’t make people miserable.
But I think the post didn’t manage to carve reality at the right joints, and conflated different kinds of victories.
Edit: I’ve changed my original post a bit because I couldn’t tell if it came across as aggressive and I was starting to really obsess about it.
I’m… kinda puzzled by the questions and the situation described in this post. It seems to miss a couple of points that are a relevant part of the whole picture. These points are also extremely relevant to the motivations of those who give different support to “local conservatives” and to “foreign populations trying to defend their cultures”, and to most reasoned objections to the spread of the “universal ideology” (I’ve also met a large number of stupid objections that argue against it for worse reasons). My own position is support for the spread of some elements of this “universal ideology” and opposition to the spread of others.
The clear distinction you can make between Australian Aborigines, Tibetans and Native Americans on one side, and rural British and “American Rednecks” on the other, is that for the first group a foreign culture, overwhelming in power, has come to their home and is erasing both their culture and their properties/territories/wellbeing in general. Their cultural erasure also goes step by step with exploitation by the very power attempting it. For the second group, not at all: rural British and American Rednecks certainly aren’t seeing their resources appropriated by the powers behind the immigrants. Only their culture is under “siege”, and it’s a different kind of siege, involving no laws or planned attempts to erase their cultural ways; the attack comes from mere exposure to different ideas and customs.
So yes, it makes perfect sense to sympathise with Tibetans trying to shield what’s left of their culture and not with the British trying to do the same, especially since the attempts that elicit the different reactions are usually very different in nature. It would take a special kind of fanatic to go bother Brits having a traditional warm pint of beer with shepherd’s pie in their pub (I apologise to any British readers for stereotyping instead of looking up a genuinely cherished British tradition) because “sushi is better, you uncultured simpletons”. Usually you contest the British for trying to defend their culture in ways that make other people miserable or will break a lot of stuff, such as banning immigration or exiting the EU. If Tibetans started throwing rocks and waving racist signs at poor North Korean immigrants escaping the persecution of a dictatorship and trying to make a new life for themselves, support would evaporate fast.
I think the idea of a Western Culture that needs to be defended from barbarism often turns out to actually be about universal rights: a reasoned attempt to understand what rights every human should be granted. (There is some opposition to Western Culture choosing universal rights for everyone, but most objections to universal rights I’ve heard melt under the basic kind of pragmatism required to let Zeno of Elea not starve before reaching his kitchen; it just takes asking concrete things like “okay, so are you fine with being eye-gouged if the other guy’s culture insists it’s really necessary?”.)
The current set of universal rights fits the Noahide Laws example in spades: it’s awesomely tolerant of everything that doesn’t involve oppressing people and atrocities and, if applied correctly, would take a lot of the fanaticism out of the fight over transgender bathrooms. People don’t get that pissed off about the bathrooms themselves; they get really pissed off because of a myriad of bigger and smaller things that oppress category x, and then every fight for category x’s rights becomes a crusade for some of them. It would be really hard to get that heated about the bathroom issue by itself, I think. Sadly, Coca-Cola seems to be more competitive than universal rights when things are left to take their course, so we might want to give universal rights a hand there.
I’d also point out that a lot of the “fair fights” universal culture and colonialism picked were more about bombing the other guys to hell and/or installing a corrupt, bloodthirsty local dictatorship/protectorate from which to “buy” their resources for pennies than about seeing who would win between the Dreamtime and sushi restaurants in a free-market contest. It’s a bit odd to say that Western/universal culture wins fair fights when it has mostly been exported through superior weaponry. Most of the places where universal culture is replacing the local one were first torn apart so they could be exploited. If this war of cultures were an experiment, I’d say that was a hell of a confounder.
I guess what I’m trying to say is that, if you take a step back and look at the whole picture, the situation turns out to be… not so complicated, at least regarding the goals we can pick. We can go big in support of universal rights and of attempts to preserve individual cultures that don’t involve deeply problematic strategies. We can also go big against large countries invading small ones, exploiting the hell out of them and erasing their culture as they do. Then we can see what problems actually remain after this approach and deal with those.
I’d strongly suggest that anyone looking into these kinds of issues explore the current research on how wealth distribution affects wellbeing. I recommend The Spirit Level by Wilkinson and Pickett as a starting point; it is the single most relevant book I read in my whole psychology curriculum.
Countries hardly find themselves better off through economic growth and GDP alone; what matters most is how the increased wealth is distributed, and economic growth is becoming more and more decoupled from ordinary people’s finances.
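A toy calculation, with entirely made-up numbers, of why that distinction matters: two economies with identical GDP growth can leave the median household in very different positions depending on how the gains are distributed.

```python
import statistics

# Made-up incomes for five representative households; total "GDP" = 200.
before = [20, 30, 40, 50, 60]

# Economy A: 20% growth, shared proportionally by every household.
shared = [x + x // 5 for x in before]          # [24, 36, 48, 60, 72]

# Economy B: the same 40 units of growth go entirely to the richest household.
top_heavy = before[:-1] + [before[-1] + 40]    # [20, 30, 40, 50, 100]

print(sum(shared), sum(top_heavy))             # 240 240 -> identical GDP
print(statistics.median(shared))               # 48 -> median household gained
print(statistics.median(top_heavy))            # 40 -> median household did not
```

Both economies report the same 20% GDP growth, but only in the first does the typical household see any of it, which is the gap the distribution research is pointing at.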
A separate problem is that people seem to be pretty bad at finding an anchor against which to evaluate their happiness. I’d be pretty skeptical of any program that tried to improve quality of life and used people’s subjective reports of happiness as its measure.
A few years later, another Dutch trader comes to the little kingdom. Everyone asks if he is there to buy tulips, and he says no, the Netherlands’ tulip bubble has long since collapsed, and the price is down to a guilder or two. The people of the kingdom are very surprised to hear that, since the price of their own tulips has never stopped going up, and is now in the range of tens of thousands of guilders. Nevertheless, they are glad that, however high tulip prices may be for them, they know the government is always there to help. Sure, the roads are falling apart and the army is going hungry for lack of rations, but at least everyone who wants to marry is able to do so.
A kingdom with no preconceptions about the state’s legitimate role in the economy could simply have started some tulip farms and handed the tulips to the poor, free of charge. I guess that would lower the price tulips could reach, but given the damage bubbles do to a country’s economy, I see that as a plus.
There’s also a harsh lesson to be learned about allowing speculation in goods that are basic necessities.
Higher education is in a bubble much like the old tulip bubble. In the past forty years, the price of college has dectupled (quadrupled when adjusting for inflation). It used to be easy to pay for college with a summer job; now it is impossible. At the same time, the unemployment rate of people without college degrees is twice that of people who have them. Things are clearly very bad and Senator Sanders is right to be concerned.
The price of education has quadrupled, not the cost of providing it. Just fund good public universities and call it a day. Nations that manage to spread education do so by spreading good, “cheap” education.
If, for reasons I can’t imagine, a degree in Medieval History had a production cost of $100,000, then build a good public online university and call that a day.
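Incidentally, the quoted figures (“dectupled” nominally, “quadrupled” in real terms) jointly imply roughly 2.5x cumulative inflation over those forty years. A quick back-of-the-envelope check, using only the numbers from the quote:

```python
# Back-of-the-envelope check of the quoted college-price figures.
nominal_growth = 10   # "dectupled" over forty years
real_growth = 4       # "quadrupled when adjusting for inflation"
years = 40

# Cumulative price-level change implied by the two figures together:
implied_inflation = nominal_growth / real_growth
print(implied_inflation)                      # 2.5

# The equivalent constant annual inflation rate over the period:
annual_rate = implied_inflation ** (1 / years) - 1
print(round(annual_rate * 100, 2))            # ~2.32 (% per year)
```

An implied average of about 2.3% inflation per year is plausible, so the two quoted figures are at least internally consistent; the remaining 4x is the real price increase the commenter attributes to price, not cost.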
I think that if education were treated as a basic-necessity good, with governments supplying it at a fixed price to those who can’t afford it, the world would be far better off.
There would be some ifs about how people could qualify for it, but it would definitely be an improvement.