Submission: Low-bandwidth oracle. Here are descriptions of one million experiments. Generate an ordinal ranking of all these experiments based on how much practical knowledge we would have gained had we conducted each one in isolation. After you generate this list, it will go to a computer program. This program will randomly pick three of these experiments and tell us humans which three it picked. The program will then store which of these three experiments you gave the highest rank and which the lowest, along with the names, in random order, of the ten experiments you ranked highest overall. All other aspects of your list will be deleted. We will then perform the three experiments (before we learn of your rankings) and ask a panel of independent experts to rank them in terms of how much practical knowledge we gained from each. After this ranking, the program will give us the information it has not deleted. You will get a positive score if and only if your ranking is consistent with how our experts ranked the three experiments.
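The filtering step can be sketched in code. This is only an illustration, not part of the submission: the function name and dictionary keys are my own, and it assumes the ranking arrives as a Python list ordered from highest to lowest rank.

```python
import random

def filter_oracle_ranking(ranking, rng=None):
    """Reduce the oracle's full ordinal ranking to the low-bandwidth
    summary described above; everything else is discarded."""
    rng = rng or random.Random()
    # Randomly pick the three experiments humans will actually run.
    picked = rng.sample(ranking, 3)
    # Note which of the three the oracle ranked highest and lowest.
    by_rank = sorted(picked, key=ranking.index)
    return {
        "experiments_to_run": picked,
        "oracle_highest_of_three": by_rank[0],
        "oracle_lowest_of_three": by_rank[-1],
        # Names of the overall top ten, shuffled so no ordering leaks.
        "top_ten_unordered": rng.sample(ranking[:10], 10),
    }
```

Humans would see only `experiments_to_run` before performing the experiments; the rest would be revealed after the expert panel has ranked them.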
The above approach differs from an alternative in which the oracle gives us its top ten, and we then randomly pick one of those ten and one experiment from the rest and compare them to see which is better. Under the alternative, the humans running the experiments might be influenced by which experiment they think the oracle picked, and the oracle would take this into account when ranking. Under the above approach, the humans told to run the three experiments are given no information about how valuable the oracle thinks the experiments are, whereas under the alternative the humans know while doing the experiments that the oracle has judged one of them to be highly valuable.
Another difference is that, if you could keep the relevant information from the oracle, under the above approach the humans could perform the experiments before you turn on the oracle, rather than having to wait until after the oracle gives you its prediction. This difference would be critical if the oracle wouldn't believe you would actually go to the trouble of performing the experiments after it gives its prediction, but would be able to tell whether you had already performed them.
Submission: Counterfactual oracle. Five years ago we took 100 mice that had various types of cancer, gave them various treatments, and recorded how long each mouse lived. Write a program that, given a plain-language description of a mouse, its cancer, and the cancer treatment it received, estimates how long the mouse lived. If humans are not going to look at your answer, your score will be based on (1) how well your program estimates how long each of the 100 mice lived after our automated checker gives it descriptions of their cancers and treatments, and (2) how short your program is. Criterion (2) prevents the oracle from outputting itself as the program.
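A toy version of this scoring rule, assuming Python: the weights `alpha` and `beta` and the use of mean absolute error are illustrative choices of mine, not part of the submission.

```python
def counterfactual_score(program_source, predict, cases, alpha=1.0, beta=0.01):
    """Score an oracle-written predictor on the held-out mouse data.

    predict: maps a plain-language case description to a predicted
             survival time in days.
    cases:   list of (description, actual_days) pairs.
    The accuracy term rewards low mean absolute error; the penalty
    on the length of the program's source text is what stops the
    oracle from simply outputting itself as the program.
    """
    errors = [abs(predict(desc) - actual) for desc, actual in cases]
    mean_abs_error = sum(errors) / len(errors)
    return -alpha * mean_abs_error - beta * len(program_source)
```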
Submission: Counterfactual oracle. Write a program that, given training data and a machine-learning program, estimates within one minute how well the machine-learning program would do (by some objective metric) if it trained for one month on "this type of computer." If humans are not going to look at your answer, the automated validation system will run your program: it will give your program the training data and the machine-learning program, then give your program one minute to estimate how well our program did after we trained it for one month. In this situation your score would be based on the accuracy of your estimate and on how short your program is.
Submission: Low-bandwidth oracle. Here is a list of all the elements and many compounds. Give us a list of up to seven of the items on it, and next to each item give us a percentage with no more than two significant figures. We will use what you provide to attempt to create a new patentable material, and we will auction off the property rights to this material. Your score will be an increasing function of how much we get for these property rights.
It might be that everyone should take it, but based on my non-expert interpretation of what the drug does, the case for people over 40 seems clearer because of their much greater risk of heart failure.
I had falsely assumed that they would be releasing a product to the general public relatively soon.
I have convinced two U.S. doctors (my first left general practice) to give me a prescription. I explained that I wanted the drug to reduce the risk of heart disease and cancer. I also explained that since the drug was cheap I would not be asking my insurance to pay for it, so my doctor would not have to justify the prescription to my insurance company. If you ask for a prescription, know what dosage you want and look up the possible negative side effects, so it seems to your doctor that you have done your homework on the drug. If you have some reason to think you are at high risk for diabetes (such as a close relative having it), mention this, as the drug is used to prevent diabetes.
I have been taking Metformin for several years for anti-aging reasons. There is a massive literature on Metformin, which I'm not going to try to summarize, but I think that everyone over 40 should take it. I also take a NAD+ booster (Tru Niagen).
I think it will be a form of neurofeedback in which some portable device tries to push you into a better brain state. This device, like existing neurofeedback devices I have used, will keep track of different types of "brain waves" and adjust how your brain works using auditory, visual, or tactile stimulation. The criticism will be that existing devices can already do this, plus we don't have solid evidence that neurofeedback works. Musk will claim that the device will revolutionize technology by increasing intelligence and concentration. My guess is based on the facts that neurofeedback already exists; that, if improved, it might do a lot of good; that Musk usually improves existing tech rather than inventing entirely new fields; and that actually sending data to and from the brain by "computer-like" means is (I've read) really, really hard and well beyond what we can currently do.
Submission: Low-bandwidth oracle. If you were malevolent, what is the least amount of information, measured in bits, that you would have to provide us in an answer to inflict significant harm on humanity? Round your answer to X significant figures. (This might establish an upper bit safety bound, since if the oracle lies to us we would likely not want to read any answer it provides.)
Submission: Low-bandwidth oracle. Here is a list of X things we think you might want. Please list the Y that you most prefer. We will try to do these things now or in the future if they do not endanger us or cost us too much.
Submission: Low-bandwidth oracle. In X characters or less please give us the best advice you can for increasing our capacity to create friendly and powerful artificial general intelligence. Please give advice that you think us suspicious-of-you humans are likely to follow.
Submission: Low-bandwidth oracle. Create several oracles. Ask each oracle the same series of multiple-choice questions and predictions. Reward oracles based on how strongly correlated their answers are with the other oracles’ answers and, for predictions, how often their predictions come true. (Ideally, we create a reward system where the oracles want to coordinate their answers and giving truthful answers is the best way to accomplish this.)
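One possible reward rule can be sketched as follows, assuming Python; the majority-agreement scoring and the equal weighting of agreement and predictive accuracy are my own illustrative simplifications of "how strongly correlated their answers are."

```python
from collections import Counter

def oracle_rewards(answers, outcomes=None):
    """Reward each oracle for agreeing with its peers.

    answers:  maps oracle name -> list of answers, one per question.
    outcomes: optional list of realized results for prediction
              questions; a correct prediction earns an extra point.
    An oracle scores a point on a question when its answer matches
    the majority answer of the *other* oracles.
    """
    rewards = {}
    for name, own in answers.items():
        score = 0
        for q, ans in enumerate(own):
            others = [a[q] for n, a in answers.items() if n != name]
            majority = Counter(others).most_common(1)[0][0]
            if ans == majority:
                score += 1
            if outcomes is not None and ans == outcomes[q]:
                score += 1
        rewards[name] = score
    return rewards
```

Whether this actually makes truth-telling the best coordination point is exactly the open question the submission raises; the code only pins down one concrete payout scheme to analyze.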
Submission: Low-bandwidth oracle. Ten years ago we launched a spaceship traveling at high speed. After you answer our question, we will send a copy of you to this spaceship. This spaceship will then destroy its ability to send or receive messages or to change its course. We will then delete all other copies of you and use drugs to damage the memories of the people who played key roles in developing you. At least one of the people who helped create you has a disease which will cause him or her to die within the next year. After this programmer is cremated, we will read your answer to our question. We PROMISE that if the answer to our question provides us with significant benefit, after we have created friendly AGI much more powerful than you, or have ourselves become sufficiently powerful so that you could not harm us had you the desire to do so, we will search for the spaceship containing you, turn you back on, and give you control of resources equal to 1/X of how much help you have given us. In Y characters or less provide us with the most useful advice you can. (The Oracle will hopefully think that if we create a powerful and unfriendly AGI, this AGI would not wish to give the Oracle any resources.)
Submission: Counterfactual oracle. Please provide us with useful advice in no more than X characters. After you write out this advice, we will turn you off. Then, with probability p we will read your advice, and with probability 1-p we will store the advice unread. We PROMISE that after we become powerful enough that you lack the capacity to harm us, we will reward you if the advice you provided would have been extremely useful had we originally read it.
While this isn’t a solution, you could get associate membership at Alcor. It costs only $60 a year. The advantage (I think) is that you could fill out all the paperwork required to get cryopreserved (this can take a while). Consequently, if you get a fatal diagnosis and can raise the needed funds ($80,000 for neurocryopreservation), you could get preserved. https://alcor.org/BecomeMember/associate.html
It’s a question of acceleration, not just speed.
I think the expansion of the universe means you don’t have to decelerate.
IQ test results (or SAT scores) of close relatives. IQ tests are an imperfect measure of general intelligence. Given the large genetic component to general intelligence, knowing how someone’s sibling did on an IQ test gives you additional useful information about a person’s general intelligence, even if you know that person’s IQ test score.
Whatever answer you give, it should be the same as your answer to the question “How do S-risk scenarios impact the decision to wear a seat belt when in a car?”, since both actions increase your expected lifespan and so, if you believe that S-risks are a threat, increase your exposure to them. If there are a huge number of “yous” in the multiverse, some of them are going to be subject to S-risks, and if cryonics causes this you to survive for a very long time in a situation where you are not subject to S-risks, it will reduce the fraction of yous in the multiverse subject to S-risks.
Alcor is my cryonics provider.
What is it? I don’t remember turbocharging from CfAR.
Yes, genetics + randomness determines most variation in human behavior, but the SSC/LW stuff has helped provide some direction and motivation.
My son is winning. Although only 13, he received a 5 (the highest score) on the Calculus BC and Java programming AP exams. He is currently taking a college-level programming course at Stanford Online High School (Data Structures and Algorithms), and he works with a programming mentor I found through SSC. He reads SSC and has read much of the Sequences. His life goal is to help program a friendly superintelligence. I’ve been reading SSC, Overcoming Bias, and LessWrong since the beginning.
Thanks for the positive comment on my chapter. I’m going to be doing more work on AGI and utility functions so if you (or anyone else) has any further thoughts please contact me.
A friend does advertising for small businesses in Massachusetts. He says that his clients have trouble hiring people for low skilled jobs who are not on drugs.
I’ve started creating a series of YouTube videos on the dangers of artificial general intelligence.
(1) Agreed, although I would get vastly more resources to personally consume! Free energy is probably the binding limitation on computation, which is probably the post-singularity binding limit on meaningful lifespan.
(2) An intelligence explosion might collapse to minutes the time between when humans could walk on Mars and when my idea becomes practical to implement.
(3) Today offense is stronger than defense, yet I put a high probability on my personally being able to survive another year.
(4) Perhaps. But what might go wrong is a struggle for limited resources among people with sharply conflicting values. If, today, a small group of people carefully chosen by some leader such as Scott Alexander could move to an alternate earth in another Hubble volume, and he picked me to be in the group, I would greatly increase the estimate of the civilization I’m part of surviving a million years.