My economics department is hiring a macroeconomist this year. A huge number of applicants are submitting teaching and diversity statements in which they describe how, if hired, they will promote diversity in their teaching.
As the left has taken over most colleges, I think the only thing that could stop them would be if colleges faced tremendous economic pressure because, say, online education or drastic cuts in government funds threatened their financial position and they were forced to become more customer oriented, more oriented toward producing scientific gains, or more oriented toward enhancing the future income of their students. Right now, elite colleges especially are in a very comfortable financial position and so face no pressure to take actions their leaders would consider distasteful, which would include becoming more open to non-leftist views. I haven't written on this.
I agree with you on x-risks. I think one of our best paths to avoiding them would be to use genetic engineering to create very smart and moral people, but most of academia hates the possibility that genes could have anything to do with intelligence or morality.
I was initially denied tenure but appealed, claiming that two members of my department voted against me for political reasons. My college's five-person Grievance Committee unanimously ruled in my favor, I came up for tenure again, and that time I was granted it. I wrote about it here: https://www.forbes.com/forbes/2004/0607/054.html#d70ce6c6e9f1
Yes, in many fields you could hide your politically incorrect beliefs and not be harmed by them so long as you can include a statement in your tenure file of how you will work to increase diversity as defined by leftists.
I think it is getting worse in that people who openly hold politically incorrect beliefs are now being considered racist. I don't see the trend reversing unless the economics of higher education change.
I was very, very wrong.
Most academics don't take politically incorrect positions. If you don't have tenure, doing so would be very dangerous. If you do, it could make it much harder to move to a higher-ranked school, but it is very difficult to fire tenured professors for speech. One way to move up in academia is to take administrative positions such as dean, provost, or college president. Taking politically incorrect positions likely completely forecloses this path.
Assume you put enormous weight on avoiding being tortured, and you recognize that signing up for cryonics creates some (very tiny) chance that you will be revived in an evil world that will torture you; absent many worlds, this causes you not to sign up for cryonics. There is an argument that in many worlds there will be versions of you that are going to be tortured, so your goal should be to reduce the percentage of these versions that get tortured. Signing up for cryonics in this world means you are vastly more likely to be revived and not tortured than revived and tortured, so signing up will likely lower the percentage of versions of you across the multiverse who are tortured. Signing up for cryonics in this world reduces the relative weight of the versions of you trapped in worlds where the Nazis won and are torturing you.
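A toy calculation can make this dilution argument concrete. Every number below is invented purely for illustration; nothing here is a real estimate of any branch measure:

```python
# Toy model of the many-worlds dilution argument. All numbers are made up.
base_survivors = 1.0       # measure of branches where versions of you persist anyway
base_tortured = 1e-6       # measure of branches where a version of you is tortured regardless

cryo_revived_good = 0.1    # extra measure added by cryonics: revived into a decent future
cryo_revived_bad = 1e-12   # extra measure added by cryonics: revived and tortured

def tortured_share(signed_up: bool) -> float:
    extra_good = cryo_revived_good if signed_up else 0.0
    extra_bad = cryo_revived_bad if signed_up else 0.0
    total = base_survivors + base_tortured + extra_good + extra_bad
    return (base_tortured + extra_bad) / total

print(tortured_share(False))  # ~1.00e-6
print(tortured_share(True))   # ~0.91e-6: the revived-and-fine versions dilute the tortured share
```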
While you might be right, it's also possible that von Neumann doesn't have a contemporary peer. Apparently, top scientists who knew von Neumann considered him smarter than the other scientists they knew.
Yes, I am referring to "IQ" rather than g because most people do not know what g is. (For other readers: IQ is the measurement; g is the real thing.) I have looked into IQ research a lot and spoken to a few experts. While genetics likely doesn't play much of a role in the Flynn effect, it plays a huge role in g and IQ; this is established beyond any reasonable doubt. IQ is a very politically sensitive topic, and people are not always honest about it. Indeed, some experts admit to other experts that they lie when discussing IQ in public (source: my friend and podcasting partner Greg Cochran; the podcast is Future Strategist). We don't know if the Flynn effect is real: it might just come from measurement errors arising from people becoming more familiar with IQ-like tests, although it could also reflect real gains in g that are being captured by higher IQ scores. There is no good evidence that education raises g. The literature on IQ is so massive, and so poisoned by political correctness (and, some would claim, racism), that it is not possible to resolve the issues you raise by citing literature. If you ask IQ experts why they disagree with other IQ experts, they will say that the other experts are idiots/liars/racists/cowards. I interviewed a lot of IQ experts when writing my book Singularity Rising.
Most likely von Neumann had a combination of (1) lots of additive genes that increased intelligence, (2) few additive genes that reduced intelligence, (3) low mutational load, (4) a rare combination of non-additive genes (meaning genes with non-linear effects) that increased intelligence, and (5) lucky brain development. A clone would have advantages (1)-(4). While it might in theory be possible to raise IQ by creating the proper learning environment, we have no evidence of having done this, so it seems unlikely that this was the cause of von Neumann's high intelligence.
We should make thousands of clones of John von Neumann from his DNA. We don't have the technology to do this yet, but the upside would be so huge that it would be worth spending a few billion dollars to develop it. A big limitation on the historical John von Neumann's productivity was his inability to interact with people of his own capacity. There would be regression to the mean in the clones' IQ, but the clones would have better health care and education than the historical von Neumann did, plus the Flynn effect might come into play.
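A back-of-the-envelope sketch of the regression-to-the-mean point: under a simple quantitative-genetics model, a clone shares all of the donor's genes, including the rare non-additive combinations, so the expected regression comes only from the donor's environmental and developmental luck. Both the heritability figure and the donor IQ below are assumptions I've picked for illustration, not measurements:

```python
# Back-of-the-envelope sketch; both numbers below are illustrative assumptions.
population_mean = 100
broad_heritability = 0.8   # assumed broad-sense heritability of adult IQ
donor_iq = 180             # illustrative guess; von Neumann's IQ was never measured

# A clone shares all the donor's genes, so its expected score regresses toward the
# mean only by the non-genetic (environment + developmental luck) share of the deviation.
expected_clone_iq = population_mean + broad_heritability * (donor_iq - population_mean)
print(expected_clone_iq)   # 164.0 under these assumptions
```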
Submission: Low-bandwidth oracle. Here is a description of one million experiments. Generate an ordinal ranking of all these experiments based on how much practical knowledge we would have gained had we conducted each one in isolation. After you generate this list, it will go to a computer program. This program will randomly pick three of these experiments and tell us humans which three it picked. The program will then store which of these three experiments you ranked highest, which you ranked lowest, and the names, in random order, of the ten experiments you ranked highest overall. All other aspects of your list will be deleted. We will then perform the three experiments (before we learn of your rankings) and ask a panel of independent experts to rank them in terms of how much practical knowledge we gained from each. After this ranking, the program will give us the information it has not deleted. You will get a positive score if and only if your ranking is consistent with how our experts ranked the three experiments.
One difference between the above approach and an alternative, in which the oracle gives us its top ten and we then randomly pick one of those ten and one from the rest and compare them to see which is better, is that the humans running the experiments might be influenced by which experiment they think the oracle picked, and the oracle would take this into account when picking. Under the above approach, the humans told to run the three experiments are not given any information about how valuable the oracle thinks the experiments are. In contrast, under the alternative approach the humans know when doing the experiments that the oracle has judged one of them to be highly valuable.
Another difference is that, if you could keep the relevant information from the oracle, under the above approach the humans could perform the experiments before you turn on the oracle rather than having to wait until after the oracle gives you its prediction. This difference would be critical if the oracle wouldn't believe that you would actually go to the trouble of performing the experiments after it gives you its prediction, but would be able to tell whether you had already performed them.
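Here is a minimal sketch of the selection-and-scoring program described in the submission above. The function names and the 0/1 reward convention are my own; note that with only three experiments, matching the expert panel's best and worst automatically makes the whole three-way ranking consistent:

```python
import random

def run_protocol(oracle_ranking, run_experiment, expert_rank):
    """oracle_ranking: experiment IDs, best first. run_experiment: performed by
    humans who never see the ranks. expert_rank: orders results, best first."""
    rank_of = {e: i for i, e in enumerate(oracle_ranking)}
    picked = random.sample(oracle_ranking, 3)            # tell the humans only these three
    stored = {
        "best_of_three": min(picked, key=rank_of.get),
        "worst_of_three": max(picked, key=rank_of.get),
        "top_ten": random.sample(oracle_ranking[:10], 10),  # names only, order scrambled
    }
    del oracle_ranking, rank_of                          # every other aspect of the list is deleted
    results = {e: run_experiment(e) for e in picked}     # run before any ranks are revealed
    expert_order = expert_rank(results)                  # independent experts, best first
    consistent = (expert_order[0] == stored["best_of_three"]
                  and expert_order[-1] == stored["worst_of_three"])
    return 1 if consistent else 0
```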
Submission: Counterfactual oracle. Five years ago we took 100 mice that had various types of cancer, gave them various treatments, and recorded how long each mouse lived. Write a program that, given a plain-language description of a mouse, its cancer, and the treatment it received, estimates how long the mouse lived. If humans are not going to look at your answer, your score will be based on (1) how good a job your program does at estimating how long each of the 100 mice lived after our automated checker gives it a description of their cancers and treatments, and (2) how short your program is. Criterion (2) prevents the oracle from outputting itself as the program.
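A hedged sketch of this scoring rule, assuming survival is measured in days and accuracy by mean squared error; the length-penalty weight lam is my own placeholder:

```python
def score(program_source: str, predict, mice_records, lam=0.01):
    """predict(description) -> estimated days survived;
    mice_records: list of (plain-language description, actual days survived)."""
    mse = sum((predict(desc) - actual) ** 2
              for desc, actual in mice_records) / len(mice_records)
    return -mse - lam * len(program_source)  # brevity term blocks outputting itself
```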
Submission: Counterfactual oracle. Write a program that, given training data and a machine learning program, estimates in one minute how well the machine learning program would do (by some objective metric) if it trained for one month on "this type of computer". If humans are not going to look at your answer, the automated validation system will run your program: it will give your program the training data and the machine learning program, and give your program one minute to estimate how well the machine learning program did after we trained it for one month. In this situation your score will be based on the accuracy of your program's estimate and on how short your program is.
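A sketch of how the automated validation system might enforce the one-minute budget, assuming the oracle's program is a Python script that prints its estimate; the file names and the absolute-error metric are placeholders of mine:

```python
import subprocess

def validate(estimator_path, training_data_path, ml_program_path, actual_metric):
    """Run the oracle's estimator with a hard one-minute wall-clock limit."""
    try:
        out = subprocess.run(
            ["python", estimator_path, training_data_path, ml_program_path],
            capture_output=True, text=True, timeout=60)
        estimate = float(out.stdout.strip())
    except (subprocess.TimeoutExpired, ValueError):
        return float("-inf")               # over time or unparseable output: worst score
    return -abs(estimate - actual_metric)  # closer estimates score higher
```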
Submission: Low-bandwidth oracle. Here is a list of all the elements and many compounds. Give us a list of up to seven of the items we have listed. Next to each item you list, give us a percentage with no more than two significant figures. We will use what you provide to attempt to create a new patentable material. We will auction off the property rights to this material. Your score will be an increasing function of how much we get for these property rights.
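A sketch of a format check for the oracle's answer, assuming it arrives as (item, percentage) pairs; note that this significant-figure counter treats trailing zeros as insignificant, which suffices for an upper-bound check:

```python
from decimal import Decimal

def sig_figs(s: str) -> int:
    # Count digits of the normalized value; trailing zeros are treated as insignificant.
    return len(Decimal(s).normalize().as_tuple().digits)

def valid_answer(answer, catalog):
    """answer: list of (item_name, percentage_string) pairs; catalog: the items we listed."""
    if len(answer) > 7:
        return False
    return all(item in catalog and 0 < Decimal(pct) <= 100 and sig_figs(pct) <= 2
               for item, pct in answer)
```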
It might be that everyone should take it, but based on my non-expert interpretation of what it does, the case for people over 40 seems clearer because of their much greater risk of heart failure.
I had falsely assumed that they would be releasing a product to the general public relatively soon.
I have convinced two U.S. doctors (my first left general practice) to give me a prescription. I explained that I wanted the drug to reduce the risk of heart disease and cancer. I also explained that since the drug was cheap I would not be asking my insurance to pay for it, so my doctor would not have to justify the prescription to my insurance company. If you ask for a prescription, know what dosage you want and look up the possible negative side effects, so that your doctor sees you have done your homework on the drug. If you have some reason to be at high risk for diabetes (such as a close relative having it), mention this, as the drug is used to prevent diabetes.
I have been taking Metformin for several years for anti-aging reasons. There is a massive literature on Metformin that I'm not going to try to summarize, but I think that everyone over 40 should take it. I also take a NAD+ booster (Tru Niagen).
I think it will be a form of neurofeedback in which some portable device tries to push you into a better brain state. This device, like existing neurofeedback devices I have used, will keep track of different types of "brain waves" and adjust how your brain works by using auditory, visual, or tactile stimulation. The criticism will be that existing devices can already do this, plus we don't have solid evidence that neurofeedback works. Musk will claim that the device will revolutionize technology by increasing intelligence and concentration. My guess is based on the facts that neurofeedback exists, that if improved it might do a lot of good, that Musk usually improves existing tech rather than inventing entirely new fields, and that actually sending data to and from the brain by "computer-like" means is (I've read) really, really hard and well beyond what we can currently do.
Submission: Low-bandwidth oracle. If you were malevolent, what is the least amount of information, measured in bits, that you would have to provide us in an answer to inflict significant harm on humanity? Round your answer to X significant figures. (This might establish an upper bit safety bound, since if the oracle lies to us we would likely not want to read any answer it provides.)
Submission: Low-bandwidth oracle. Here is a list of X things we think you might want. Please list the Y that you most prefer. We will try to do these things now or in the future if they do not endanger us or cost us too much.
Submission: Low-bandwidth oracle. In X characters or less, please give us the best advice you can for increasing our capacity to create friendly and powerful artificial general intelligence. Please give advice that you think we suspicious-of-you humans are likely to follow.
Submission: Low-bandwidth oracle. Create several oracles. Ask each oracle the same series of multiple-choice questions and predictions. Reward oracles based on how strongly correlated their answers are with the other oracles’ answers and, for predictions, how often their predictions come true. (Ideally, we create a reward system where the oracles want to coordinate their answers and giving truthful answers is the best way to accomplish this.)
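A toy version of this reward rule, assuming multiple-choice answers and a checkable subset of prediction questions; the equal weighting of peer agreement and accuracy is my own choice, not part of the submission:

```python
def rewards(answers, outcomes):
    """answers: {oracle_id: list of choices, one per question};
    outcomes: {question_index: true answer} for the checkable predictions."""
    oracles = list(answers)
    n_questions = len(next(iter(answers.values())))
    scores = {}
    for o in oracles:
        peers = [p for p in oracles if p != o]
        agreement = sum(answers[p][q] == answers[o][q]
                        for q in range(n_questions) for p in peers)
        agreement /= n_questions * len(peers)
        accuracy = (sum(answers[o][q] == truth for q, truth in outcomes.items())
                    / len(outcomes)) if outcomes else 0.0
        scores[o] = agreement + accuracy   # peer correlation plus verified accuracy
    return scores
```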
Submission: Low-bandwidth oracle. Ten years ago we launched a spaceship traveling at high speed. After you answer our question, we will send a copy of you to this spaceship. This spaceship will then destroy its ability to send or receive messages or to change its course. We will then delete all other copies of you and use drugs to damage the memories of the people who played key roles in developing you. At least one of the people who helped create you has a disease which will cause him or her to die within the next year. After this programmer is cremated, we will read your answer to our question. We PROMISE that if the answer to our question provides us with significant benefit, after we have created friendly AGI much more powerful than you, or have ourselves become sufficiently powerful so that you could not harm us had you the desire to do so, we will search for the spaceship containing you, turn you back on, and give you control of resources equal to 1/X of how much help you have given us. In Y characters or less provide us with the most useful advice you can. (The Oracle will hopefully think that if we create a powerful and unfriendly AGI this AGI would not wish to give the Oracle any resources.)
Submission: Counterfactual oracle. Please provide us with useful advice in no more than X characters. After you write out this advice, we will turn you off. Then, with probability p we will read your advice, and with probability 1-p we will store it unread. We PROMISE that after we become powerful enough that you lack the capacity to harm us, we will reward you if the advice you provided would have been extremely useful had we originally read it.
While this isn't a solution, you could get an associate membership at Alcor. It costs only $60 a year. The advantage (I think) is that you could fill out all the paperwork required to get cryopreserved (this can take a while). Consequently, if you get a fatal diagnosis and can raise the needed funds ($80,000 for neurocryopreservation), you could get preserved. https://alcor.org/BecomeMember/associate.html
It’s a question of acceleration, not just speed.