This phenomenon is also why we have the term “role model.” Successful examples of people similar to us are extremely valuable, and it is in fact very difficult to succeed without such examples.
First, pain in the wrists is often due to muscle knots (“trigger points”) in the forearm muscles that you may not be aware are there until you go probing for them. There are many online resources for treating such knots if you find them. My advice: don’t overdo it, or you’ll bruise yourself and make it worse.
Second, the main thing you should not do if it is psychosomatic is take pain medication. Your brain and body can easily become psychologically and physiologically dependent on even “benign” drugs like NSAIDs, leading to situations where you’ll be in pain when you don’t take them.
I feel like the “diseases just naturally become not-dangerous” perspective neglects smallpox and other extremely deadly endemic viruses which we have used vaccination to control or eradicate.
I recommend checking the website and joining the Discord as a first step to get in contact with the group.
Between friends I usually wager a sandwich or a cup of coffee. Enough to make it clear that a specific bet is being articulated and agreed upon, but not enough to really hurt anyone’s feelings if they lose.
What is “instrumental intelligence?”
This is an interview I conducted with a college professor friend of mine about how to get the most out of education. I have provided a timecode link to the part where we start talking about college.
Edit: An incomplete tl;dr would be: Don’t go to a large university, go to a small PUI (Primarily Undergraduate Institution) where the focus of the professors will be teaching rather than research and grant-writing. Teaching a course well is a full-time job, but teaching is the third or fourth priority for university professors. The other answers on this post are probably adequate for deciding what to major in.
Yes, thanks, I’ll fix it.
Perhaps relevant, I wrote a post a while back (all the images are broken and irretrievable; sorry) about the idea that suffering and happiness (and possibly many other states) should be considered separate dimensions around which we intuitively try to navigate, and that compressing all these dimensions onto the single dimension of utility gives you some advantages (you can rank things more easily) but discards a tremendous amount of detail. Fundamentally, forcing yourself to use utility in all circumstances is like throwing away your detailed map in exchange for a single number representing how far you are from your destination. In theory you can still find your way home, but you’ll encounter obstacles you might have avoided otherwise.
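To make the compression point concrete, here is a minimal sketch, with made-up state names and numbers chosen purely for illustration, of how projecting separate dimensions onto a single utility scalar discards detail:

```python
# Two experiential states described along separate dimensions.
# The dimension names and values are hypothetical, for illustration only.
state_a = {"happiness": 8, "suffering": 6}
state_b = {"happiness": 2, "suffering": 0}

def utility(state):
    # Collapse all dimensions into one number (an arbitrary weighting).
    return state["happiness"] - state["suffering"]

# Both states compress to the same scalar, so they become
# indistinguishable under ranking-by-utility...
print(utility(state_a))  # 2
print(utility(state_b))  # 2

# ...even though one of them involves substantial suffering.
# That detail is exactly what the compression throws away.
```

Ranking is now trivial (just compare numbers), but any policy that cares about *which* dimension contributed to the score can no longer be expressed.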
There are already many good answers in this thread but your question as stated can be answered very simply. First I ask if you can accept that I have within me “a morality.” I have beliefs about right and wrong, and a system for judging actions and outcomes; that’s probably all that’s required to qualify. So that’s one morality that exists, namely mine. It’s not a cosmic, all-pervading, True morality, but it’s a morality, and that’s what you asked for.
I don’t know if the following will appeal to the gut or not, but here goes: people only very rarely make decisions based on morality, or on expected utility, or on any kind of explicit basis for distinguishing the better choice. Most choices have no obvious moral dimension. The only time someone invokes morality or utilitarianism is when two different parts of their mind want two different things and they need some kind of judgement call from the ref on which option is more in alignment with everything they want. The reason you don’t commit mass murder has nothing to do with morality. The reason you choose to reduce your meat consumption might have something to do with morality, or might not.
The reason I bring that up is that the existence or nonexistence of ultimate cosmic morality is probably not all that important, practically speaking. You won’t do things that you deeply feel are immoral for the same reason you won’t intentionally smash your hand with a hammer. If you find yourself frequently doing immoral things, you probably don’t really think they’re all that immoral, and/or “immoral” has become a dangerously meaningless symbol in your mind.
Pfizer. First shot caused pain at the injection site that lasted about 24 hours. Second shot led to about twelve hours of what I would call moderately severe flu symptoms. Chills, exhaustion, headache (though one should note that the slightest breeze will give me a headache), brainfog. I basically slept all day, and woke up feeling fine the next morning.
I think we only need to consider pragmatism.
It is useful to be able to ask a butcher for their fish selection and not be shown sea snail, crab, and dolphin.
It is useful to be able to say “I saw a fish!” and have the listener know you mean a fish and not a saltwater crocodile or sea snake.
It’s a useful category to keep distinct.
After reading this article and the Scott/Weyl exchanges, I’m left with the impression that one side is saying: “We should be building bicycles for the mind, not trying to replace human intellect.” And the other side is trying to point out: “There is no firm criterion by which we can label a given piece of technology a bicycle for the mind versus a replacement for human intellect.”
Perhaps uncharitably, it seems like Weyl is saying to us, “See, what you should be doing is working on bicycles for the mind, like this complicated mechanism design thing that I’ve made.” And Scott is sort of saying, “By what measure are you entitled to call that particular complicated piece of gadgetry a bicycle for the mind, while I am not allowed to call some sort of sci-fi exocortical AI assistant a bicycle for the mind?” And then Weyl, instead of really attempting to provide that distinction, simply lists a bunch of names of other people who had strong opinions about bicycles.
Parenthetically, I’m reminded of the idea from the Dune saga that it wasn’t just AI that was eliminated in the Butlerian Jihad, but rather, the enemy was considered to be the “machine attitude” itself. That is, the attitude that we should even be trying to reduce human labor through automation. The result of this process is a universe locked in feudal stagnation and tyranny for thousands of years. To this day I’m not sure if Herbert intended us to agree that the Butlerian Jihad was a good idea, or to notice that his universe of Mentats and Guild Navigators was also a nightmare dystopia. In any case, the Dune universe has lasguns, spaceships, and personal shields, but no bicycles that I can recall.
Gun ownership requires a license.
In some (many?) states owning a gun does not require a license, but carrying a loaded gun on your person does.
Short answer, yes you can get better at IQ tests by learning the common patterns and practicing the tests. Some people do so, and reach very high IQ scores. But there is essentially no reward for doing this, and thus almost no one bothers to do it. In the absence of practice, IQ is an empirically relatively stable metric which correlates with a number of other empirical outcomes, about as well as anything in the social sciences ever does.
I do this for movies, and formerly did it for books and TV shows, but people mainly try to just pay me to watch anime.
I think the way this could work, conceptually, is as follows. Maybe the Old Brain does have specific “detectors” for specific events like: are people smiling at me, glaring at me, shouting at me, hitting me; has something that was “mine” been stolen from me; is that cluster of sensations an “agent”; does this hurt, or feel good. These seem to be the kinds of events that small children, most mammals, and even some reptiles are able to understand.
The neocortex then constructs increasingly nuanced models based on these base-level events. It builds up fairly sophisticated cognitive behaviors, such as romantic jealousy, or the desire to win a game, or the perception that a specific person is a rival, or a long-term plan to get a college degree, by gradually linking up elements of its learned world model with internal imagined expectations of ending up in states that it natively perceives (with the Old Brain) as good or bad.
Obviously the neocortex isn’t just passively learning, it’s also constantly doing forward-modeling/prediction using its learned model to try to navigate toward desirable states. Imagined instances of burning your hand on a stove are linked with real memories of burning your hand on a stove, and thus imagined plans that would lead to burning your hand on the stove are perceived as undesirable, because the Old Brain knows instinctively (i.e. without needing to learn) that this is a bad outcome.
eta: Not wholly my original thought, but I think one of the main purposes of dreams is to provide large amounts of simulated data aimed at linking up the neocortical model of reality with the Old Brain. The sorts of things that happen in dreams tend to be very dramatic and scary. I think the sleeping brain is intentionally seeking out parts of the state space that agitate the Old Brain in order to link up the map of the outside world with the inner sense of innate goodness and badness.