This suggests that the interests of universities are not well aligned with the goal of spreading education.
Most obviously, there is no incentive to give education to people outside your university. Teaching 200 of your students is strictly better than teaching 190 of your students and 10 000 strangers.
The 32,000 people who signed up and gave up are not a problem per se, but if 10 of them are your students, then perhaps you are going to have a problem.
It’s like a university version of the “No Child Left Behind” problem. Preventing one child from being “left behind” is rewarded more than helping a hundred children get much further ahead.
Possible solution: A separation of education from the school system.
I don’t think I’m aware of anyone who identifies as a “moral realist” who believes this.
Some people seem to believe that about artificial intelligence. (Which will likely be more different from us than spiders are.)
The scarcity of people who can truly learn from what they’re given is why the massive open online courses of the early 2010s didn’t work out, with 95% of enrolled students failing to complete even a single course, and year-on-year student retention rates below 10%.
I am not sure this supports your article’s point. The problem with MOOCs is that most students ignore them. Like, 50% didn’t even start them, and most of the remaining ones just started doing them too close to the deadline, so they obviously didn’t have enough time to complete them. In other words, the problem of studying “at your own pace” is that most people will procrastinate until it’s too late. The traditional university fights procrastination by having you attend the lessons in person at predefined times.
The analogy would be if the main problem with teaching metalworking were that no one actually opens the metalworking textbook. Whereas your point, if I understand it correctly, is that things such as metalworking are difficult to learn even for those people who actually open the textbook and give it enough time and effort.
Radical transparency doesn’t make any suggestions about what you should say, only that everyone in the organization should be privy to things everyone says. This makes it exceedingly hard for sociopaths to maintain multiple realities.
Seems like it could work, but I wonder what other effects it could have. For example, if someone makes a mistake, you can’t tell them discreetly; the only way to provide feedback on a minor mistake is to announce it to the entire company.
By the way, are you going to enforce this rule after working hours? What prevents two bad actors from meeting in private and agreeing to pretend to have some deniable bias in order to further their selfish goals? Like, some things are measurable, but some things are a matter of subjective judgment, and two people could agree to always have the subjective judgment colored in each other’s favor, and against their mutual enemy. In such a way that even if other people notice, you could still insist that what X does simply feels right to you, and what Y does rubs you the wrong way even if you can’t explain why.
Also, people in the company would be exposed to each other, and perhaps the vulnerability would cancel out. But then someone leaves, is no longer part of the company, but still has all the info on the remaining members. Could this info be used against the former colleagues? The former colleagues still have info on the one who left, but not on his new colleagues. Also, if someone strategically joins only for a while, he could take care not to expose himself too much, while everyone else would be exposed to him.
the CEO should be willing to take feedback from the new mail clerk.
This assumes the new mail clerk will be a reasonable person. Someone who doesn’t understand the CEO’s situation or loves to create drama could use this opportunity to give the CEO tons of useless feedback. And then complain about hypocrisy when others tell him to slow down.
I suppose “locally objective” would be how I see morality.
Like, there are things you would hypothetically consider morally correct under sufficient reflection, but perhaps you didn’t do the reflection, or maybe you aren’t even good enough at doing reflection. But there is a sense in which you can be objectively wrong about what the morally right choice is. (Sometimes the wrong choice becomes apparent later, when you regret your actions. But this is simply reflection being made easier by seeing the actual consequences instead of having to derive them by thinking.)
But ultimately, morality is a consequence of values, and values exist in brains shaped by evolution and personal history. Other species, or non-biological intelligences, could have dramatically different values.
Now we could play a verbal game about whether to define “values/morality” as “whatever a given species desires”, and then conclude that other species would most likely have a different morality; or define “values/morality” as “whatever neurotypical humans desire, on sufficient reflection”, and then conclude that other species most likely wouldn’t have any morality. But that would be a debate about the map, not the territory.
The question is, how good are people at introspection: what if the strategies they report are not the strategies they actually use? For example, because they omit the parts that seem unimportant, but that actually make the difference. (Maybe positive or negative thinking is irrelevant, but imagining blue things is crucial.)
Or what if “the thing that brings success” causes the narrative of the cognitive strategy, but merely changing the cognitive strategy will not cause “the thing that brings success”? (People imagining blue things will be driven to succeed in love, and also to think a lot about fluffy kittens. However, thinking about fluffy kittens will not make you imagine blue things, and therefore will not bring you success in love. Even if all people successful in love report thinking about fluffy kittens a lot.)
If there is anything that anyone should in fact do, then I would say that meets the standards of “realism.”
Does “anyone” refer to any human, or any possible being?
Because if it refers to humans, we could argue that humans have many things in common. For example, maybe any (non-psychopathic) human should donate at least a little to effective altruism, because effective altruism brings the change they would wish to happen.
But from the perspective of a hypothetical superintelligent spider living on Mars, donating to projects that effectively help humans is utterly pointless. (Assuming that spiders, even superintelligent ones, have zero empathy.)
I understand “moral realism” as the claim that there is a sequence of clever words that would convince the superintelligent spider that reducing human suffering is a good thing. Not merely because humans might reciprocate, or because it would mean more food for the spider once the space train to Mars is built, but because that is simply the right thing to do. Such a thing, I believe, does not exist.
Please note that even things written in 1620 can be under copyright. Not the original thing, but the translation, if it is recent. Generally, every time a book is modified, the clock starts ticking anew… for the modified version. If you use a sufficiently old translation, or translate a sufficiently old text yourself, then it’s okay (even if a newer translation exists, if you didn’t use it).
These days my reason for not using my full name is mostly this: I want to keep my professional and private lives separate. And I have to use my real name at my job, therefore I don’t use it online.
What I probably should have done many years ago is make up a new, plausible-sounding full name (perhaps keep my first name and just make up a new surname?), and use it consistently online. Maybe it’s still not too late; I just don’t have any surname ideas that feel right.
If it’s at all possible for consciousness to transfer between worlds
I suppose it’s not.
Physics doesn’t say how consciousness works.
It exists in brains, brains are made of atoms, and physics has a story or two about the atoms.
I read the first link, and to me it seems that the author actually stumbles upon the right answer in the middle of the paper, only to dismiss it immediately with “we have no good way to justify it” and proceed towards things that make less sense. I am talking about what he calls the “intensity rule” in the paper.
Assuming a non-collapse interpretation, the entire idea is that literally everything happens all the time, because every particle has a non-zero amplitude at every place, but it all adds up to normality anyway, because what matters is the actual value of the amplitude, not just whether it is zero or non-zero. (Theoretically, epsilon is not zero. Practically, the difference between zero and epsilon is epsilon.) Outcomes with larger amplitudes are the normal ones; the ones we should expect more. Outcomes with epsilon amplitudes are the ones we should only pay epsilon attention to.
Is it possible that the furniture in my room will, due to some very unlikely synchronized quantum tunneling, transform into a hungry tiger? Yes, it is theoretically possible. (Both in the Copenhagen and many-worlds interpretations, by the way.) How much time should I spend contemplating such a possibility? Just by mentioning it, I already spent many orders of magnitude more than would be appropriate.
The paper makes some automatic assumptions about time, which I am going to ignore for the moment. Let’s assume that, because of quantum immortality, you will be alive 1,000,000 years from now. Which path is most likely to get you from “here” to “there”?
In any case, some kind of miracle is going to happen. But we should still expect the smallest necessary miracle. In absolute numbers, the chances of “one miracle” and “dozen miracles” are both pretty close to zero, but if we are going to assume that some miracle happened, and normalize the probabilities accordingly, “one miracle” is almost certainly what happened, and the probability of “dozen miracles” remains pretty close to zero even after the normalization. (Assuming the miracles are of comparable size, mutually independent, et cetera.)
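To put some numbers on the normalization argument, here is a minimal sketch; the per-miracle probability is completely made up by me, not anything from the paper:

```python
# Toy illustration with an assumed per-miracle probability; the point is
# only the normalization, not the specific numbers.
p = 1e-6                    # assumed probability of a single independent miracle

p_one = p                   # path requiring one miracle
p_dozen = p ** 12           # path requiring twelve independent miracles

total = p_one + p_dozen     # condition on "some miracle happened"
print(p_one / total)        # ~1.0: almost certainly the single-miracle path
print(p_dozen / total)      # ~1e-66: still negligible after normalization
```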
Comparing likelihoods of different miracles is, by definition, outside of our usual experience, so I may be wrong here. But it seems to me that the horror scenario envisioned by the author requires too many miracles. (In other words, it seems optimized for shock value, not relative probability.) Suppose that in 10 years you get hit by a train, and by a miracle, a horribly disfigured fragment of you survives in agony beyond imagination. Okay, technically possible. So, what is going to happen during the following 999,990 years? It seems that further surviving in this state would require more miracles than further surviving as a healthy person. (The closer to death you are, the more unlikely it is for you to survive another day, or year.) And both these paths seem to require more miracles than being frozen now, and later resurrected and made forever young using advanced futuristic technology. Even just dying now, and being resurrected 1,000,000 years later, would require only one miracle, albeit a large one. If you are going to be alive in 1,000,000 years, you are most likely to get there by the least miraculous path available. I am not sure what exactly that is, but being constantly on the verge of death and surviving anyway seems too unlikely (and being frozen and later unfrozen, or uploaded to a computer, seems almost ordinary in comparison).
Now, let’s take a bit more timeless perspective here. Let’s look at the universe in its entirety. According to quantum immortality, there are you-moments in the arbitrarily distant future. Yes; but most of them are extremely thin. Most of the mass of the you-moments is here, plus or minus a few decades. (Unless there is a lawful process, such as cryonics, that would stretch a part of the mass into the future enough to change the distribution significantly. Still not as far as quantum immortality, which can probably overcome even the heat death of the universe and get so far that time itself stops making sense.) So, according to the anthropic principle, whenever you find yourself existing, you most likely find yourself in the now—I mean, in your ordinary human lifespan. (Which is, coincidentally, where you happen to find yourself right now, isn’t it?) There are a few you-moments in very exotic places, but most of them are here. Most of your life happens before your death; most instances of you experiencing yourself are the boring human experience.
From a certain perspective, “more models” becomes one model anyway, because you still have to choose which of the models you are going to use at a specific moment. Especially when multiple models, all of them “false but useful”, would each suggest taking a different action.
As an analogy, it’s like saying that your artificial intelligence will be an artificial meta-intelligence, because instead of following one algorithm, as other artificial intelligences do, it will choose between multiple algorithms. At the end of the day, “if P1 then A1 else if P2 then A2 else A3” still remains one algorithm. So the actual question is not whether one algorithm or many algorithms is better, but whether having a big if-switch at the top level is the optimal architecture. (Dunno, maybe it is, but from this perspective it suddenly feels much less “meta” than advertised.)
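A minimal sketch of what I mean (all names and conditions here are hypothetical, just to illustrate the if-switch point):

```python
# Hypothetical sketch: an agent that "chooses between multiple models" is,
# seen from the outside, still a single decision procedure.

def model_a(x):
    return "action suggested by model A"

def model_b(x):
    return "action suggested by model B"

def model_c(x):
    return "action suggested by model C"

def meta_agent(x):
    # The big if-switch at the top level; the conditions stand for P1 and P2.
    if x < 0:
        return model_a(x)
    elif x < 10:
        return model_b(x)
    else:
        return model_c(x)

# From the caller's perspective this is just one algorithm:
# "if P1 then A1 else if P2 then A2 else A3".
print(meta_agent(5))
```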
I recently started playing an online game I saw advertised. I know how addictive these things are, but I decided to “play with fire” anyway.
As a precaution, I decided to not make a browser bookmark of this game, ever. I registered using a throwaway e-mail address. Also, I never told anyone that I was playing it. That way, when I decided to quit, nothing would pull me back—it would only require one decision, not repeated temptations and decisions. And… I played for a few weeks and then I quit. And after a few days of not playing, I don’t feel like starting it again anymore, so I guess my strategy worked.
I will not mention the name of the game here. Anyway, it was the type of game where you build stuff, collect resources, and research new stuff; with many things to unlock. In the game there were three important resources, let’s call them X, Y, and Z. By making better or worse decisions, you could make more or less of the resources X and Y; and I spent some time optimizing for that.
With resource Z, however, the basic way to get it was to play the game regularly. If you logged in at least N times a day, you got M points of resource Z per day; you couldn’t get more for playing longer, but you would get less for taking breaks longer than 1/N of the day. In addition to this, there were also some other ways to get resource Z, but this extra amount was always smaller than the amount you got for merely playing the game regularly. There was no smart strategy to at least double the income of Z. So, whether you did smart or stupid things had a visible impact on X and Y, but almost no impact on Z.
Of course, resource Z was the one that actually mattered in the long term. Your progress on the tech tree sometimes required X and Y, but always required Z. And, of course, the higher steps on the almost-linear tech tree required more of resource Z.
So, regardless of whether you did smart or stupid things, you advanced in the game at a pre-programmed speed, which gradually got slower the longer you played. In other words, pre-programmed fun at the beginning (unlocking a lot of stuff during the first day, trying various things), pre-programmed increasing boredom later. Completely unsurprisingly, resource Z was the one you could also buy for real money. But even if you decided to spend a certain amount of money every week, you would still get the same boredom curve as a result, as the constant income of resource Z would have diminishing returns the further you progressed on the tech tree. The only way to keep a constant level of fun (assuming that unlocking new things on the tech tree counts as fun, even if they are mostly the same stuff only with different numbers and pictures) would be to pay ever increasing amounts of money.
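Here is a toy model of that mechanic, with numbers I made up (the game stays unnamed): a constant daily income of Z and tech-tree steps that each cost more than the last, so the time between unlocks keeps growing no matter how cleverly you play.

```python
# Toy model with assumed numbers: fixed daily income of resource Z,
# geometrically growing tech-tree costs, hence ever longer waits between unlocks.
daily_z = 100        # assumed Z per day for logging in regularly
cost = 50            # assumed cost of the first tech-tree step
growth = 1.5         # each step costs 50% more than the previous one

day, z = 0, 0
for step in range(1, 9):
    while z < cost:
        day += 1
        z += daily_z
    z -= cost
    print(f"step {step} unlocked on day {day}")
    cost *= growth   # the boredom curve: unlocks arrive more and more slowly
```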
After realizing all this, I still kept playing for a few days before I finally stopped. (I never paid anything, of course.)
Seems to me that modern life is full of distractions. As a smart person, you probably have work that requires thinking (not just moving your muscles in a repetitive way). In your free time there is the internet, with all the websites optimized for addictiveness. Plus all the other things you want to do (books to read, movies to see, friends to visit). Electricity can turn your late night into day; you can take a book or a smartphone everywhere.
So, unless you choose them consciously, there are no silent moments to get in contact with yourself… or whatever higher power you imagine there to be, talking to you.
I wonder what the effect ratio is between meditation and simply taking a break and wondering about stuff. Maybe it’s our productivity-focused thinking saying that meditating (doing some hard work in order to gain supernatural powers) is a worthy endeavor, while goofing off is a sin.
In the real world, people usually forget what you said 10 years ago. And even if they don’t, saying “Matthew said this 10 years ago” doesn’t have the same power as you saying the thing now.
But the internet remembers forever, and your words from 10 years ago can be retweeted and come alive as if you had said them now.
A possible solution would be to use a nickname… and whenever you notice you have grown so much that you no longer identify with the words of your nickname, pick a new one. Also new accounts on social networks, and re-friend only those people you still consider worthy. Well, in this case the abrupt change would be the unnatural thing, but perhaps you could still keep using your previous account for some time, mostly passively. As your real-life new self would have different opinions, different hobbies, and different friends than your self from 10 years ago, so would your online self.
Unfortunately, this solution goes against the “terms of service” of almost all major websites. On the advertisement-driven web, advertisers want to know your history, and they are the real customers… you are only the product.
Is “knowledge transference” a real thing, or one of those thousand things that didn’t replicate? There are many myths in education; I wonder if this is one of them.
(I tried Wikipedia, but it only has an article on “knowledge transfer”, which is about sharing information between people within an organization, i.e. something completely different.)
Bryan Caplan in The Case Against Education writes:
[Teachers say:] A history class can teach critical thinking; a science class can teach logic. Thinking—all thinking—builds mental muscles. The bigger students’ mental muscles, the better they’ll be at whatever job they eventually land.
[Is it true?] For the most part, no. Educational psychologists who specialize in “transfer of learning” have measured the hidden intellectual benefits of education for over a century. Their chief discovery: education is narrow. As a rule, students learn only the material you specifically teach them . . . if you’re lucky. In the words of educational psychologists Perkins and Salomon, “Besides just plain forgetting, people commonly fail to marshal what they know effectively in situations outside the classroom or in other classes in different disciplines. The bridge from school to beyond or from this subject to that other is a bridge too far.”
Many experiments study transfer of learning under seemingly ideal conditions. Researchers teach subjects how to answer Question A. Then they immediately ask their subjects Question B, which can be handily solved using the same approach as Question A. Unless A and B look alike on the surface, or subjects get a heavy-handed hint to apply the same approach, learning how to solve Question A rarely helps subjects answer Question B.
[In an experiment when subjects are told a military puzzle and its solution, and then a medical puzzle which can be solved analogically,] A typical success rate is 30%. Since about 10% of subjects who don’t hear the military problem offer the convergence solution, only one in five subjects transferred what they learned. To reach a high (roughly 75%) success rate, you need to teach subjects the first story, then bluntly tell them to use the first story to solve the second.
To repeat, such experiments measure how humans “learn how to think” under ideal conditions: teach A, immediately ask B, then see if subjects use A to solve B. Researchers are leading the witness. As psychologist Douglas Detterman remarks: “Teaching the principle in close association with testing transfer is not very different from telling subjects that they should use the principle just taught. Telling subjects to use a principle is not transfer. It is following instructions.”
Under less promising conditions, transfer is predictably even worse. Making the surface features of A and B less similar impedes transfer. Adding a time delay between teaching A and testing B impedes transfer. Teaching A, then teaching an irrelevant distracter problem, then testing B, impedes transfer. Teaching A in a classroom, then testing B in the real world impedes transfer. Having one person teach A and another person test B impedes transfer.
[...] No wonder even transfer optimists like Robert Haskell lament: “Despite the importance of transfer of learning, research findings over the past nine decades clearly show that as individuals, and as educational institutions, we have failed to achieve transfer of learning on any significant level.”
[...] Counterexamples do exist, but compared to teachers’ high hopes, effects are modest, narrow, and often only in one direction. One experiment randomly taught one of two structurally equivalent topics: (a) the algebra of arithmetic progression, or (b) the physics of constant acceleration. Researchers then asked algebra students to solve the physics problems, and physics students to solve the algebra problems. Only 10% of the physics students used what they learned to solve the algebra problems. But a remarkable 72% of the algebra students used what they learned to solve the physics problems. Applying abstract math to concrete physics comes much more naturally than generalizing from concrete physics to abstract math.
[...] Each major sharply improved on precisely one subtest. Social science and psychology majors became much better at statistical reasoning—the ability to apply “the law of large numbers and the regression or base rate principles” to both “scientific and everyday-life contexts.” Natural science and humanities majors became much better at conditional reasoning—the ability to correctly analyze “if . . . then” and “if and only if” problems. On remaining subtests, however, gains after three and half years of college were modest or nonexistent.
[...] Transfer researchers usually begin their careers as idealists. Before studying educational psychology, they take their power to “teach students how to think” for granted. When they discover the professional consensus against transfer, they think they can overturn it. Eventually, though, young researchers grow sadder and wiser. The scientific evidence wears them down—and their firsthand experience as educators finishes the job.
Intuitively, it seems to me that having a good model of the world, trained on some subjects, should provide some advantage at other subjects. But either it is an obvious prerequisite (such as: understanding chemistry helps you understand biochemistry) or the benefits are likely to be small (e.g. from physics I could learn that the universe follows relatively simple impersonal laws; but that alone does not tell me which laws are followed in sociology or computer science). Having good general knowledge can inoculate one against some fake theories (e.g. physics and chemistry against homeopathy), but after removing the fake frameworks there is still much to learn. Also, the transferred knowledge (e.g. “there is no supernatural; nature follows impersonal laws”) is the same for all natural sciences, so the “X%” you get from physics is the same as the “X%” you get from chemistry; you do not get “2X%” after learning both of them.
Generally, if you want to go outside of your comfort zone, you might as well do something useful (either for yourself, or for others).
For example, if you try “rejection therapy” (approaching random people, getting rejected, and thus teaching your System 1 that being rejected doesn’t actually hurt you), you could approach people with something specific, like giving them fliers, or trying to sell something. You may make some money as a side effect, and in addition to expanding your comfort zone also get some potentially useful job experience. If you travel across difficult terrain, you could also transport some cargo and get paid for it. If you volunteer for an organization, you will get some advice and support (the goal is to do something unusual and uncomfortable, not to optimize for failure), and you will get interesting contacts (your LinkedIn profile will be like: “endorsed for skills: C++, object-oriented development, brain surgery, fire extinguishing, assassination, cooking for homeless”).
You could start by obtaining a list of non-governmental organizations in your neighborhood, calling them, and asking whether they need a temporary volunteer. (Depending on your current comfort zone, this first step may already be outside of it.)
If someone tried to implement this in real life, I would expect it to get implemented exactly halfway. I would expect to find out that my life became perfectly transparent to anyone who cares, but there would be some nice-sounding reason why the people at the top of the food chain would retain their privacy. (National security. Or there are a few private islands in the ocean where surveillance is allegedly economically/technically impossible to install, and by sheer coincidence, the truly important people live there.) I would also expect this asymmetry to be abused against people who try to organize to remove it.
You know, just like those cops wearing body cams that mysteriously stop functioning exactly at the moment the recording could be used against them. That, but on a planetary scale.
From the opposite perspective, many people would immediately think about counter-measures. Secret languages, so that you can listen to me talking to my friends but still have no idea what the topic was. This wouldn’t scale well, but some powerful and well-organized groups would use it.
People would learn to be more indirect in their speech, to allow everyone to pretend that anything was a coincidence or misunderstanding. There would be a lot of guessing, and people on the autism spectrum would be at a serious disadvantage.
How would the observed data be evaluated? People are hypocrites; just because you are doing the same thing many other people are doing, and everyone can see it, doesn’t necessarily prevent the outcome where you get punished and those other people are not. People are really good at being dumb when you provide them evidence they don’t want to see. Not understanding things you can clearly see would become an even more important social skill. There would still be taboos, and you would not be able to talk about them; not even in private, because that wouldn’t exist anymore.
But for the people who believe this would be great… I would recommend trying the experiment on a smaller scale. Create a community of volunteers who would install surveillance throughout their commune, accessible to all members of the commune. What would happen next?
How specifically would you do better than status quo?
I could easily dismiss some charities for causes I don’t care about, or where I think they do more harm than good. Now there are still many charities left whose cause I approve of, and that seem to me like they could help. How do I choose among these? They publish some reports, but are the numbers there the important ones, or just the ones that are easiest to calculate?
For example, I don’t care if your “administrative overhead” is 40%, if that allows you to spend the remaining 60% ten times more effectively than a comparable charity with smaller overhead. Unfortunately, the administrative overhead will most likely be included in the report, with two decimal places; but the achieved results will be either something nebulous (e.g. “we make the world a better place” or “we help kids become smarter”), or they will describe the costs, not the outcomes (e.g. “we spent 10 million to save the rainforest” or “we spent 5 million to teach kids the importance of critical thinking”).
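A back-of-the-envelope illustration, with numbers I made up: what I actually want to compare is impact per donated dollar, which the overhead percentage alone does not tell me.

```python
# Made-up numbers: the overhead percentage alone says little about impact per dollar.
overhead_a, impact_per_program_dollar_a = 0.40, 10.0   # big overhead, very effective programs
overhead_b, impact_per_program_dollar_b = 0.05, 1.0    # lean, but less effective programs

# impact per donated dollar = (share that reaches programs) * (impact per program dollar)
print((1 - overhead_a) * impact_per_program_dollar_a)  # 6.0
print((1 - overhead_b) * impact_per_program_dollar_b)  # 0.95
```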
Now, I don’t have time and skills to become a full-time charity researcher. So if I want to donate well, I need someone who does the research for me, and whose integrity and sanity I can trust.
What kills you doesn’t make you stronger. You want to get out of your comfort zone, not out of your survival zone.