What is “instrumental intelligence”?
This is an interview I conducted with a college professor friend of mine about how to get the most out of education. I have provided a timecode link to the part where we start talking about college.
Edit: An incomplete tl;dr would be: Don’t go to a large university, go to a small PUI (Primarily Undergraduate Institution) where the focus of the professors will be teaching rather than research and grant-writing. Teaching a course well is a full-time job, but teaching is the third or fourth priority for university professors. The other answers on this post are probably adequate for deciding what to major in.
Yes, thanks, I’ll fix it.
Perhaps relevant, I wrote a post a while back (all the images are broken and irretrievable; sorry) about the idea that suffering and happiness (and possibly many other states) should be considered separate dimensions around which we intuitively try to navigate, and that compressing all these dimensions onto the single dimension of utility gives you some advantages (you can rank things more easily) but discards a tremendous amount of detail. Fundamentally, forcing yourself to use utility in all circumstances is like throwing away your detailed map in exchange for a single number representing how far you are from your destination. In theory you can still find your way home, but you’ll encounter obstacles you might have avoided otherwise.
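As a toy illustration (the states, numbers, and the particular scoring function below are mine, invented purely for this comment, not anything from that post): once you compress the separate axes into one number, ranking becomes trivial, but everything that distinguished the states is gone.

```python
# Two states described on separate axes, then collapsed into one "utility".
state_a = {"suffering": 9, "happiness": 10}   # intense highs and lows
state_b = {"suffering": 0, "happiness": 1}    # quiet and uneventful

def utility(state):
    # One arbitrary compression: happiness minus suffering.
    return state["happiness"] - state["suffering"]

print(utility(state_a), utility(state_b))
# -> 1 1: the two states now tie, and everything that made them different
#    has been thrown away -- the map traded for a single distance reading.
```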
There are already many good answers in this thread but your question as stated can be answered very simply. First I ask if you can accept that I have within me “a morality.” I have beliefs about right and wrong, and a system for judging actions and outcomes; that’s probably all that’s required to qualify. So that’s one morality that exists, namely mine. It’s not a cosmic, all-pervading, True morality, but it’s a morality, and that’s what you asked for.
I don’t know if the following will appeal to the gut or not, but here goes: people only very rarely make decisions based on morality, or on expected utility, or on any kind of explicit basis for distinguishing the better choice. Most choices have no obvious moral dimension. The only time someone invokes morality or utilitarianism is when two different parts of their mind want two different things and they need some kind of judgement call from the ref on which option is more in alignment with all the stuff they want. The reason you don’t commit mass murder has nothing to do with morality. The reason you choose to reduce your meat consumption might have something to do with morality, or might not.
The reason I bring that up is that the existence or nonexistence of ultimate cosmic morality is probably not all that important, practically speaking. You won’t do things that you deeply feel are immoral for the same reason you won’t intentionally smash your hand with a hammer. If you find yourself frequently doing immoral things, you probably don’t really think they’re all that immoral, and/or “immoral” has become a dangerously meaningless symbol in your mind.
Pfizer. First shot caused pain at the injection site that lasted about 24 hours. Second shot led to about twelve hours of what I would call moderately severe flu symptoms. Chills, exhaustion, headache (though one should note that the slightest breeze will give me a headache), brainfog. I basically slept all day, and woke up feeling fine the next morning.
I think we only need to consider pragmatism.
It is useful to be able to ask a butcher for their fish selection and not be shown sea snail, crab, and dolphin.
It is useful to be able to say “I saw a fish!” and have the listener know you mean a fish and not a saltwater crocodile or sea snake.
It’s a useful category to keep distinct.
After reading this article and the Scott/Weyl exchanges, I’m left with the impression that one side is saying: “We should be building bicycles for the mind, not trying to replace human intellect.” And the other side is trying to point out: “There is no firm criterion by which we can label a given piece of technology a bicycle for the mind versus a replacement for human intellect.”
Perhaps uncharitably, it seems like Weyl is saying to us, “See, what you should be doing is working on bicycles for the mind, like this complicated mechanism design thing that I’ve made.” And Scott is sort of saying, “By what measure are you entitled to call that particular complicated piece of gadgetry a bicycle for the mind, while I am not allowed to call some sort of sci-fi exocortical AI assistant a bicycle for the mind?” And then Weyl, instead of really attempting to provide that distinction, simply lists a bunch of names of other people who had strong opinions about bicycles.
Parenthetically, I’m reminded of the idea from the Dune saga that it wasn’t just AI that was eliminated in the Butlerian Jihad, but rather, the enemy was considered to be the “machine attitude” itself. That is, the attitude that we should even be trying to reduce human labor through automation. The result of this process is a universe locked in feudal stagnation and tyranny for thousands of years. To this day I’m not sure if Herbert intended us to agree that the Butlerian Jihad was a good idea, or to notice that his universe of Mentats and Guild Navigators was also a nightmare dystopia. In any case, the Dune universe has lasguns, spaceships, and personal shields, but no bicycles that I can recall.
Gun ownership requires a license.
In some (many?) states owning a gun does not require a license, but carrying a loaded gun on your person does.
Short answer, yes you can get better at IQ tests by learning the common patterns and practicing the tests. Some people do so, and reach very high IQ scores. But there is essentially no reward for doing this, and thus almost no one bothers to do it. In the absence of practice, IQ is an empirically relatively stable metric which correlates with a number of other empirical outcomes, about as well as anything in the social sciences ever does.
I do this for movies, and formerly did it for books and TV shows, but people mainly try to just pay me to watch anime.
I think the way this could work, conceptually, is as follows. Maybe the Old Brain does have specific “detectors” for specific events like: are people smiling at me, glaring at me, shouting at me, hitting me; has something that was “mine” been stolen from me; is that cluster of sensations an “agent”; does this hurt, or feel good. These seem to be the kinds of events that small children, most mammals, and even some reptiles are able to understand.
The neocortex then constructs increasingly nuanced models based on these base-level events. It builds up fairly sophisticated cognitive behaviors, such as romantic jealousy, or the desire to win a game, or the perception that a specific person is a rival, or a long-term plan to get a college degree, by gradually linking up elements of its learned world model with internal imagined expectations of ending up in states that it natively perceives (with the Old Brain) as good or bad.
Obviously the neocortex isn’t just passively learning, it’s also constantly doing forward-modeling/prediction using its learned model to try to navigate toward desirable states. Imagined instances of burning your hand on a stove are linked with real memories of burning your hand on a stove, and thus imagined plans that would lead to burning your hand on the stove are perceived as undesirable, because the Old Brain knows instinctively (i.e. without needing to learn) that this is a bad outcome.
eta: Not wholly my original thought, but I think one of the main purposes of dreams is to provide large amounts of simulated data aimed at linking up the neocortical model of reality with the Old Brain. The sorts of things that happen in dreams are often very dramatic and scary. I think the sleeping brain is intentionally seeking out parts of the state space that agitate the Old Brain in order to link up the map of the outside world with the inner sense of innate goodness and badness.
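To make the picture concrete, here is a toy sketch in my own framing (not anything from the original post): the Old Brain is a hardwired valuation of a few primitive events, the neocortex is a learned world model, and planning means imagining each action’s outcome with the learned model and letting the Old Brain score the imagined end state.

```python
# Old Brain: innate, unlearned valuations of a few primitive events.
OLD_BRAIN_VALUE = {"pain": -10.0, "food": 5.0, "smile": 3.0, "neutral": 0.0}

# Neocortex: learned world model mapping (state, action) -> predicted event.
# In reality this would be learned from experience; here it's a lookup table.
world_model = {
    ("kitchen", "touch_stove"): "pain",
    ("kitchen", "open_fridge"): "food",
    ("kitchen", "wave_at_friend"): "smile",
}

def plan(state, actions):
    """Forward-model each action and pick the one whose imagined outcome
    the Old Brain scores highest -- no real stove-touching required."""
    imagined = {a: world_model.get((state, a), "neutral") for a in actions}
    return max(imagined, key=lambda a: OLD_BRAIN_VALUE[imagined[a]])

print(plan("kitchen", ["touch_stove", "open_fridge", "wave_at_friend"]))
# -> "open_fridge": the burn is avoided purely in imagination.
```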
This struck me as well.
gymnastics, soccer, dance, yoga, martial arts, running, weight lifting, swimming, cycling, hiking
Part of my brain reads this list as “Broken bones, busted knees, torn ankle ligaments, burst spinal and knee cushions.” I can associate many of my forays into fitness with a particular chronic injury. Basketball, ankle doesn’t work right anymore. Taekwondo, toes on right foot no longer support my weight.
I’m sure there are plenty of people who don’t accrue all these injuries when they exercise. A cursory Googling suggests that there are some important genetic factors relating to connective tissue strength/integrity and/or recovery speed.
As I’ve gotten older, I’ve chosen to simply focus on keeping my resting heart rate solidly within what is considered a healthy zone. This is one of those easily measurable knobs that can be intervened upon from a number of directions. If somebody suggested that I need to pack on muscle to be healthier, I think I could argue pretty persuasively that they are wrong.
I doubt that I understand this very well. I thought there was a chance I might help and also a chance that I would be so obviously wrong that I would learn something.
Epistemic status: Relating how this was explained to me in the hopes that somebody will either say “That’s right!” or “No, you’re still wrong, let me correct you!”
The way this was explained to me is that this is one of those things that is deceptively simple but always explained very poorly.
Knowing the position of the particle/excitation means reducing the width of Δx, which means summing more plane waves. Summing more plane waves means having less precision in the frequency/energy/momentum domain. Conversely, having less positional certainty (wider Δx) means you require fewer plane waves to describe the excitation, meaning you know the frequency decomposition (and therefore the energy/momentum description) very accurately, in a sense because the position is spread out.
The confusion enters because educators insist on talking about “knowing the position of a particle” when a particle literally is a wavelike excitation of a field and does not have a position in the sense that you think of a bowling ball having a position.
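If it helps, here is a quick numerical sketch of that tradeoff (my own, using a Gaussian wave packet and numpy’s FFT, so treat the details as illustrative rather than authoritative): the narrower the packet is in position, the wider the band of plane waves needed to build it, and the product of the two spreads stays pinned near the minimum.

```python
import numpy as np

x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
# Angular wavenumbers corresponding to the FFT bins.
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))

def spreads(sigma):
    """Return (Δx, Δk) for a Gaussian wave packet of width sigma."""
    psi = np.exp(-x**2 / (4 * sigma**2))          # wave packet in position space
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize
    phi = np.fft.fftshift(np.fft.fft(psi))        # its plane-wave decomposition
    p_x = np.abs(psi)**2 * dx                     # probability in each x bin
    p_k = np.abs(phi)**2
    p_k /= p_k.sum()                              # probability in each k bin
    return np.sqrt(np.sum(p_x * x**2)), np.sqrt(np.sum(p_k * k**2))

for sigma in (0.5, 1.0, 2.0):
    dX, dK = spreads(sigma)
    print(f"sigma={sigma}: Δx={dX:.2f}, Δk={dK:.2f}, product={dX*dK:.2f}")
# Narrow in x means wide in k and vice versa; the product stays near 0.5,
# the Gaussian minimum of Δx·Δk ≥ 1/2 (i.e. Δx·Δp ≥ ħ/2).
```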
I do not love passive voice either, but the nature of this document is:
We have collected feedback from stakeholders in the form of interviews, and consolidated that feedback in this document.
If there is ever a place for passive voice, it is a document whose purpose is to consolidate the opinions of multiple people while explicitly not implying definite consensus among those people.
It is fun to note that Metaculus is extremely uncertain about how many FLOPS will be required for AGI. The community’s 25th-percentile estimate is 3.9x10^15 FLOPS and the 75th-percentile estimate is 4.1x10^20 FLOPS, with very flattish tails extending well beyond these bounds. (The median is 6.2x10^17.)
I mention this mainly to point out that his estimate of 10^21 FLOPS simply reflects overconfidence in his particular model. There are simple objections that should reduce confidence in that kind of extremely high estimate at least somewhat.
For example, the human brain runs on 20 watts of glucose-derived power, and is optimized to fit through a birth canal. These design constraints alone suggest that much of its architectural weirdness arises from energy and size restrictions, not from optimization for intelligence. Actually optimizing for intelligence with no power or size restrictions will yield intelligent structures that look very different, so different that it is almost pointless to use brains as a reference object.
Again, I think a healthy stance to take here isn’t “Tim Dettmers is WRONG” but rather “Tim Dettmers is overconfident.”