Yes, I do think you should follow your vows to the letter even if your spouse is breaking them egregiously. I have strong feelings about this, but I'm not sure I have a good explanation as to why. It's my general feeling that you shouldn't let yourself consider any sort of exit plan for a marriage. Of course you definitely do need an exit plan, but it shouldn't be something you're aware of until it's necessary.
A marriage is different from a typical mutually beneficial contract. A marriage should partially realign the husband's and wife's utility functions, such that expected utility for one spouse counts as substantial expected utility for the other. So unless your spouse is behaving so egregiously that you're losing enough expected utility from the marriage to put you below your disagreement point, violating your vows shouldn't come into play. But of course at that point you would be considering divorce anyway, if you thought the situation couldn't be fixed while remaining in the marriage. I think that's the crux of it for me: if breaking your vows and divorce aren't on the table, you'll really try to fix whatever issues you have in the marriage (if there are issues) before you have to go nuclear.
As I've said, I don't quite understand my own position in a straightforward sense, so don't give it too much weight. I'm not sure whether my explanation is really rational or just a rationalization.
Thanks for the post and congratulations!
I think modeling yourselves as agents for the purpose of the vows is a good idea. It’ll both reinforce agent-like behavior and form a stronger commitment between you and your spouse.
I have a couple of minor quibbles. For the Vow of Honesty, I think you should keep the vow as it is in public, but privately commit to full honesty with your husband regardless of agreements with third parties. You should not be bound to keep a secret from your spouse even if it fits under the Vow of Concord and you were sworn to secrecy by a third party. If you are committed to honesty, it should be a full, absolute commitment rather than a commitment with a very-difficult-to-achieve exception. But third parties will be less likely to ever share information with you in confidence if you publicly commit to never keeping a secret from your spouse. Having separate public and private vows of honesty gives you the best of both worlds.
I have two corrections for this line: "These vows are completely sincere, literal, binding and irrevocable from the moment both of us take the Vows and as long as we both live, or until the marriage is dissolved or until my [spouse]'s unconscionably breaks [pronoun]'s own Vows..." First, I think the "'s" is a grammatical mistake and it should just read "...or until my [spouse] breaks..." instead.
Second, I think that clause should be removed even if it's made grammatically correct. Allowing yourself to cancel your vows because your spouse willfully stopped following theirs is a little dangerous. It leads to situations where you might justify your own breach of the vows by pointing to their breach instead of trying to make things right. This is an issue in contracts sometimes, where one side wants to be able to prove the other committed a material breach so they have the insurance policy of canceling the contract whenever they want. You would never want to be in a situation where you want your spouse to break their vows so you can feel OK breaking them yourself.
General intelligence doesn't require any ability for the intelligence to change its terminal goals. I honestly don't even know whether the ability to change one's terminal goals is possible or makes sense. I think the issue arises because your article does not distinguish between intermediary goals and terminal goals. Your argument is that humans are general intelligences and that humans change their terminal goals, therefore we can infer that general intelligences are capable of changing their terminal goals. But you only ever demonstrated that people change their intermediary goals.
As an example, you state that people could reflect on and revise "goals as bizarre … as sand-grain-counting or paperclip-maximizing" if they had been brought up to have them. The problem is that you assume that if a person is brought up to have a certain goal, then that goal is indeed their terminal goal. That is not the case.
For a person raised to maximize paperclips, the terminal goal could have been survival, and pleasing whoever raised them increased their chance of survival. Or maybe it was seeking pleasure, and the easiest path to pleasure was making paperclips to see mommy's happy face. All you can infer from a person's past unceasing manufacture of paperclips is that paperclip maximization was at least one of their intermediary goals. When that person learns new information or his circumstances change (e.g., "I no longer live under the thumb of my insane parents, so I don't need to bend pieces of metal to survive"), he changes his intermediary goal, but that's no evidence that his terminal goal has changed.
The simple fact that you consider paperclip maximization an inherently bizarre goal further hints at the underlying fact that terminal goals are not updatable. Human terminal goals are a result of brain structure which is the result of evolution and the environment. The process of evolution naturally results in creatures that try to survive and reproduce. Maybe that means that survival and reproduction are our terminal goals, maybe not. Human terminal goals are virtually unknowable without a better mapping of the human brain (a complete mapping may not be required). All we can do is infer what the goals are based on actions (revealed preferences), the mapping we have available already, and looking at the design program (evolution). I don’t think true terminal goals can be learned solely from observing behaviors.
If an AI agent has the ability to change its goals, that makes it more dangerous, not less. It would mean that even the ability to perfectly predict the AI's goal would not assure you that it is friendly. The AI might just reflect on its goal and change it to something unfriendly!
This paraphrased quote from Bostrom contributes partly to this issue. Bostrom specifically says, "synthetic minds can have utterly non-anthropomorphic goals—goals as bizarre by our lights as sand-grain-counting or paperclip-maximizing" (emphasis mine). The point is that paperclip maximizing is not inherently bizarre as a goal, but that it would be bizarre for a human to have that goal given the general circumstances of humanity. We shouldn't consider any goal bizarre in an AI designed free from the circumstances constraining humanity. ↩︎
It seems like there's something missing here, and I don't know how to add it. You make your childhood behavior of not getting upset over things sound bad through framing, but you don't offer many (or maybe any) examples of it being ineffective. You mention that more recently you've been experiencing a sense of general malaise on the weekends, but the extent of that problem isn't clear, nor is it obviously linked to the "fix it" mentality. Many people have malaise on the weekends, and sometimes that's just because they're tired from the week and need to recuperate. I don't think moving away from a major life strategy is a good response to experiencing weekend malaise unless you have a very good reason to believe they're connected.
I only make this comment because I too practice the "fix it or stop complaining about it" method and don't find many problems with it. I don't think the angry-parent-slapping-their-kids framing is accurate. "Stop complaining" doesn't mean mentally slap yourself every time a negative emotion comes up. It means OODA-loop a bit, and if you realize fixing the problem is going to be worse overall than not fixing it and suffering the consequences, suffer the consequences lightly, because complaining will make you feel worse. Kid comes up to their parent and says,
"I'm hungry!"
“Ok, well when was the last time you ate? Can you get a snack here?”
“No we’re in the car and I just ate our last snack.”
“Well would it be better for us to take a 15 minute detour and get some more snacks or suffer the hunger a little bit and eat a nice meal in 30 minutes at home?”
"You're right, I'll wait until we get home."
This framing is more in line with how I view “Fix it or stop complaining about it.”
I think this post would greatly benefit from explaining how “Fix it or stop complaining about it” didn’t work for you. Maybe you have in later writings, but I’m not quite sure how to find them because I don’t see any relevant pingbacks.
Mandated Gene Therapy
We're trending towards health and medical decisions being looked at from a societal perspective rather than on the individual level. People who use alternative medicine are increasingly shamed not only for the effect their choice has on their own health, but for the effect it has on the health of others and the financial burden it puts on the medical system.[^2] Medical interventions later on are more costly, so those 4 months you spent trying herbal remedies hurt everybody who has to pay for your medical treatment. Refusing a vaccine not only increases the burden the medical system will bear taking care of you, but increases the risk that others will also get infected.
Gene therapy, specifically editing the genes of newborns, is the archetypal preventative medical procedure. Parents who have a baby they know will more than likely have a genetic disease, and therefore be an extra burden on the medical system, will be shamed for that decision, and the proffered solution will be gene therapy.
That shame will be turned into laws. The natural extension of gene therapy laws aimed at preventing known, high-likelihood genetic diseases will be gene therapy to prevent speculative risk, and then merely possible risk.
Honestly, this doesn't even require improvements in nuclear tech. The only necessary ingredient is a couple of smart people joining a terrorist organization that wants to cause mass destruction and has the disposable resources of a small business. The design of nuclear bombs is freely available online; the actual engineering process is more arcane, but still learnable. The hardest part of the process is acquiring enough weapons-grade uranium or plutonium. But even those can be made from scratch with access to a mine (even though spy movies always focus on terrorists stealing their nuclear material). So my first lemma is that even though it hasn't happened yet, it's pretty easy for a small group to create a nuclear bomb.
What's been holding private nuke construction back is a lack of impetus and the general ineffectiveness of terrorists. But that's not a real bar to the end result. Over time there will likely be a statistical-outlier terrorist organization that has a few smart people and the desire to construct nuclear bombs. And for them it will be easy.
Taxpayer-funded healthcare is the norm. Politicians talk about the opioid crisis and blame doctors for over-prescribing; people protest drug companies for raising prices too high; a few national and international organizations have been setting the global policy on infectious disease handling for over a year now. ↩︎
The constant improvements in nuclear tech will lead to multiple small terrorist organizations possessing portable nuclear bombs. We’ll likely see at least a few major cities suffering drastic losses from terrorist threats.
Gene therapy will be strongly encouraged in some developed nations, at near the same level of encouragement that vaccines receive.
Pollution of the oceans will take over as the most popular pressing environmental issue.
I think many people view friendship as a form of alliance. Ally-friends perform favors for each other as a way to tie tighter bonds between them and signal that their goals are aligned. I want to bake you a cake for exactly $0 because baking a cake will help you, and I want what's best for you, so helping you directly helps me. So in the future, after I bake you your cake, you of course will drive me to the airport, because that would help me and you want what's best for me, right? It's not a direct scratch-my-back-and-I'll-scratch-yours exchange of favors; it's developing a strong alliance between our interests. We can then rely on that alliance for mutual assistance in the future. The two most common dangers ally-friends are on the lookout for are 1) over-reliance by their friend; and 2) mere burden-shifting by their friend.
Over-reliance is when Bob always asks his lawyer friend Alice for legal advice and for her opinion on complicated topics. Alice spends hours of her time (which she could otherwise use to bill $400/hour) on these favors, yet Bob doesn't provide her even half the value she gives him. Bob's reliance on Alice is still efficient, since it's much easier for her to do the legal research than for him, but Bob is not putting in enough to match what Alice is giving him. Alice will eventually grow resentful of Bob and stop doing favors for him entirely.
Burden-shifting is when Alice and Bob are friends of equal cooking ability, yet Alice still asks Bob to bake her cakes. The effort expended by either to make the cake is exactly the same, so Alice having Bob bake is no more efficient for the alliance than if she baked the cake herself. Bob notices this and asks why Alice doesn't bake the cake herself. If Alice can convince him that it is somehow more efficient for Bob to bake the cake, the alliance can continue. If Bob can't be convinced, he will stop baking cakes, because why the hell was he even baking them in the first place?
But attempting to pay an ally-friend for their favors is a whole other unexpected issue, one that can even seem like betrayal. Ally-friends would dislike your offering them money in exchange for a favor, because that would imply that when they seek a favor from you, you would expect money in return! Then, to them, there never was any alliance between you at all. From their perspective, offering them money in exchange for a favor is tantamount to admitting that you were actually just pretending to be their friend the whole time.
I’m glad you appreciate the advice. It seems to me that you’ve developed a very effective, structured way to improve your productivity and I’m going to try to emulate your strategy here with a few upcoming projects I have to work on and see how efficient I’m being.
I find this a severely lacking refutation of Gladwell's point. The main argument is that Ericsson, who collected the data Gladwell cites, disagrees with his point. Seeing that the average expert has 10,000 hours of practice in their field, a reasonable conclusion is that you should try to practice 10,000 hours if you want to become an expert. Just because Ericsson disagrees with that doesn't mean it's not a perfectly reasonable conclusion.
The first step that Anna points out is “Ask ourselves what we’re trying to achieve” or in other words, know your goal. Since you have a desire to be more strategic you probably already have a goal in mind and realized that being more strategic would be an effective subgoal. From the rest of your post I think you’ve substantially worked on some of the other steps as well.
If you're struggling to fulfill the rest of the steps Anna laid out, my recommendation is to just do things that may work towards achieving your goal and that are far outside your comfort zone. That will pull you out of your pre-existing habits and get you to start evaluating different strategies instead of continuing to follow the one you've already worked yourself into.
If you're a procrastinator, start working on a long-term goal immediately, for at least a few hours without breaks, even if you start to think it might not be effective. If you think it's not effective, that may just be akrasia taking over once you actually start working on it.
If you are fearful of offending people, go to an online or in-person marketplace and start low-balling people with ridiculous offers, continually pressing them to make a deal favorable to you. Make the situation uncomfortable enough and you'll realize you have the ability to deal with social awkwardness when you're working towards your goal.
This is Anna’s step e and I encourage working on this step because from your post it seems like you’ve already put good work into everything that comes before it.
My bad if this is more tactics than the strategy tips you were looking for.
This formulation of evidence completely disregards an important feature of Bayesian probability: new evidence incrementally updates your prior based on the predictive weight of the new information. New evidence doesn't completely erase the prior. Individual facts do not screen off demographic facts; they are supplementary facts that update our probability estimate further, possibly in a different direction.
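To make the incremental-update point concrete, here is a minimal sketch in odds form, with hypothetical numbers chosen purely for illustration: a demographic base rate serves as the prior, and an individual-specific fact enters as a likelihood ratio. The individual fact shifts the estimate but does not erase the prior.

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Return the posterior probability after one piece of evidence.

    likelihood_ratio = P(evidence | hypothesis) / P(evidence | not hypothesis)
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Demographic fact alone gives a base rate of 20%.
p = 0.20
# An individual fact with likelihood ratio 3 favors the hypothesis.
p = bayes_update(p, 3.0)
print(round(p, 3))  # 0.429 -- updated upward, but still anchored by the 20% prior
```

Note that the posterior lands at about 43%, not at whatever the individual fact alone would suggest: the demographic prior remains part of the calculation rather than being "screened off."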
Your point would be correct if the recent bans were about hate speech and calls to violence. But the claim that the recent bans were solely about hate speech and calls to violence is factually incorrect, and therefore your point is wrong. The most popular banned topic of discussion is the validity of the 2020 election, an epistemological question. Very nonviolent and non-hatey figures such as Ron Paul have been banned without any stated reasons.
Easier solution: wait until a person who is following Isusr’s strategy weeds you out and bam you have your equally extraordinary match. The only failure states are when Isusr’s strategy doesn’t manage to distinguish the extraordinary people they’re looking for from everyone else, or when you’re not extraordinary.
I think knowing about the actual object level problem here would help in crafting a suitable solution. My main question is why are you informing your friends that you’re at your limit?
Are you participating in some group activity (e.g., going to the gym) that you feel you have to drop out of? If so, I strongly recommend just working through the pain until what's stopping you is no longer pain winning over willpower but physical incapability to proceed. At that point you don't even need to tell your friends you're at your limit, because no matter what, you're going to flop to the ground unable to continue with the activity. You clearly want to do the group activity, because you haven't even posited quitting as an option, so rely on your decision to do it and trust that you're not going to cause any lasting harm to yourself by working through the pain.
If you're not participating in a group activity (e.g., you took a sick day from work and told your friends about it the next day), I see good reasons not to inform your friends that you're at your limit at all. You know what their expected response is, and you don't think that response is helpful. So you might as well skip the routine that will produce the bad response.
I don't understand your usage of the term "hanging a lampshade" in this context. I don't think either Steve's or Liron's behavior in the hypothetical is unrealistic or unreasonable; I have seen similar conversations before. Liron even stated that Steve was basically him from some time ago. I thought hanging a lampshade is when a fictional scenario is unrealistic or overly coincidental and the author wants to alleviate reader ire by acknowledging that he also thinks the situation is unlikely. Since the situation here isn't unrealistic, I don't see the relevance of hanging a lampshade.
If the article were amended to include pro-"Uber exploits drivers" arguments, it should also include contra arguments to maintain parity. Otherwise we have the exact same scenario in reverse, as including only pro-"Uber exploits drivers" arguments will "automatically [...] generate bad feelings in people who know better the better arguments". This is why getting into the object-level accuracy of Steve's claim has negative value: trying to do so will bloat the article and muddy the waters.
Making an unnecessary and possibly false object-level claim would only hurt the post. Whether Steve's claim is right or wrong is irrelevant to Liron's discussion, and getting sidetracked by its potential truthfulness would muddy the point.
Eliezer has written extensively on why death is bad for everyone and my understanding closely aligns with his.
This comment leads me to believe that you misunderstand the point of the example. Demonstrating that an arguer doesn't have a coherent understanding of their claim doesn't mean that the claim itself is incoherent. It just means that if you argue with that particular person about that particular claim, nobody is likely to gain anything from it. The validity of the example does not depend on whether "Uber exploits its drivers!" is true.
You agree with Steve in the example, and because the example shows Steve being unable to defend his point, you don't like it. You should strive to understand, however, that Steve's incoherent defense of his claim has nothing to do with your very coherent reasons for believing the same claim.
I think that the example is strengthened if Steve’s central claim is correct despite the fact that he can’t defend it coherently.
At least, that’s my take. I haven’t read the rest of this sequence yet so I don’t know if Liron explains what you gain out of discovering that somebody’s argument is incoherent. ↩︎
The death positivity movement seems to miss the point: the issue with death isn't some ancillary result, such as people not getting buried in exactly the way they desire, but that sapient human beings with thoughts, knowledge, memories, and emotions cease to exist forever! Now, if the DPM thinks there are issues in the way death is handled that cause solvable negative externalities (besides people dying), that's all well and good and probably true. The problem is that they seem to equate solving those minor negative externalities with solving the inherent problem of death itself.
The website's name, "Order of the Good Death," is oxymoronic. Death is bad. Even if people can die at age 90 in exactly the way they want, have their remains taken care of exactly how they want, and be assured that their decaying body won't negatively impact the environment, their death is still bad. The DPM implies a bizarro world where, if they can just solve all these minor issues related to death, the whole process will somehow become good. By that logic, if you could just take out all the fuss, dirtiness, and other minor negative externalities from torture, that practice could be made "good" as well.
I see no value in this movement, and actually quite a bit of harm, as it may successfully attract resources towards non-issues like overcrowded burial sites, resources that could otherwise be used to solve the fundamental problems of aging and death.
Additionally, tenets 5 and 6 are clear warning signs of intersectional nonsense: "Let's throw some anti-racist and anti-sexist talking points into our philosophy to latch onto those movements, and hopefully they'll throw some support our way." The rest of the website is littered with similar intersectional phrases as well. They're not there to solve any particular issue but to signal that the founders of this movement are Right-Minded Thinkers Who Should Be Supported By The Cause. Any movement not explicitly related to anti-racism or anti-sexism that wastes bandwidth signalling that its supporters are also anti-racists and anti-sexists isn't practicing effective altruism; it's virtue-signalling.
You've restated the moral in euphemistic terms. Some people do have the idea that they can trust others to give them a fair shake. That idea is wrong. You're right that the couple is behaving unfairly out of self-interest and because they can get away with it, but their actions are still unfair regardless.