I think it will be a form of neurofeedback where some portable device tries to push you into a better brain state. This device, like existing neurofeedback devices I have used, will keep track of different types of “brain waves” and adjust how your brain works by using auditory, visual, or tactile stimulation. The criticism will be that existing devices can already do this, plus we don’t have solid evidence that neurofeedback works. Musk will claim that the device will revolutionize technology by increasing intelligence and concentration. My guess is based on four facts: neurofeedback exists; if improved, it might do a lot of good; Musk usually improves existing tech rather than inventing entirely new fields; and actually sending data to and from the brain by “computer-like” means is (I’ve read) really, really hard and well beyond what we can currently do.
Submission: Low-bandwidth oracle. If you were malevolent, what is the least amount of information, measured in bits, that you would have to provide us in an answer to inflict significant harm on humanity? Round your answer to X significant figures. (This might establish an upper bit safety bound, since if the oracle lies to us we would likely not want to read any answer it provides.)
Submission: Low-bandwidth oracle. Here is a list of X things we think you might want. Please list the Y that you most prefer. We will try to do these things now or in the future if they do not endanger us or cost us too much.
Submission: Low-bandwidth oracle. In X characters or less please give us the best advice you can for increasing our capacity to create friendly and powerful artificial general intelligence. Please give advice that you think us suspicious-of-you humans are likely to follow.
Submission: Low-bandwidth oracle. Create several oracles. Ask each oracle the same series of multiple-choice questions and predictions. Reward oracles based on how strongly correlated their answers are with the other oracles’ answers and, for predictions, how often their predictions come true. (Ideally, we create a reward system where the oracles want to coordinate their answers and giving truthful answers is the best way to accomplish this.)
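A minimal sketch of one way this scoring could work, in Python. The callable interface, the agreement and accuracy weights, and the restriction to discrete multiple-choice answers are my assumptions, not part of the original submission:

```python
from collections import Counter

def peer_agreement(answers):
    """For each oracle, the fraction of the other oracles giving the same answer."""
    counts = Counter(answers)
    n = len(answers)
    if n < 2:
        return [0.0] * n
    return [(counts[a] - 1) / (n - 1) for a in answers]

def reward_oracles(oracles, questions, outcomes, w_agree=1.0, w_accurate=1.0):
    """Score oracles on inter-oracle agreement plus, for predictions, realized accuracy.

    oracles:   list of callables, question -> discrete answer
    questions: list of multiple-choice questions and predictions
    outcomes:  realized outcome per question, or None where none exists yet
    """
    rewards = [0.0] * len(oracles)
    for question, outcome in zip(questions, outcomes):
        answers = [oracle(question) for oracle in oracles]
        for i, score in enumerate(peer_agreement(answers)):
            rewards[i] += w_agree * score
        if outcome is not None:
            for i, answer in enumerate(answers):
                if answer == outcome:
                    rewards[i] += w_accurate
    return rewards
```

The hope in the submission is that truthful answering becomes the natural focal point for coordination; the weights above just make the agreement-versus-accuracy trade-off explicit.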
Submission: Low-bandwidth oracle. Ten years ago we launched a spaceship traveling at high speed. After you answer our question, we will send a copy of you to this spaceship. This spaceship will then destroy its ability to send or receive messages or to change its course. We will then delete all other copies of you and use drugs to damage the memories of the people who played key roles in developing you. At least one of the people who helped create you has a disease which will cause him or her to die within the next year. After this programmer is cremated, we will read your answer to our question. We PROMISE that if the answer to our question provides us with significant benefit, after we have created friendly AGI much more powerful than you, or have ourselves become sufficiently powerful so that you could not harm us had you the desire to do so, we will search for the spaceship containing you, turn you back on, and give you control of resources equal to 1/X of how much help you have given us. In Y characters or less provide us with the most useful advice you can. (The Oracle will hopefully think that if we create a powerful and unfriendly AGI this AGI would not wish to give the Oracle any resources.)
Submission: Counterfactual oracle. Please provide us with useful advice in no more than X characters. After you write out this advice, we will turn you off. Then, with probability p we will read your advice, and with probability 1-p we will store the advice unread. We PROMISE that after we become powerful enough that you lack the capacity to harm us, we will reward you if the advice you provided would have been extremely useful had we originally read it.
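A toy sketch of this protocol in Python. Here `p` is the read probability, and `judge` is a hypothetical stand-in for the later, after-we-are-safe evaluation of how useful the advice would have been had we read it; none of these names come from the original submission:

```python
import random

def counterfactual_round(oracle_answer, p, judge):
    """One round: the oracle writes advice and is turned off; we read the
    advice with probability p, otherwise store it unread. The promised
    reward depends only on the counterfactual usefulness of the advice."""
    advice = oracle_answer()         # oracle writes its advice, then is shut down
    read_now = random.random() < p   # with probability p we read it now
    # Much later, once the oracle can no longer harm us, we honor the promise:
    reward = judge(advice)           # "had we originally read it, how useful?"
    return (advice if read_now else None), reward

# Example use with placeholder stand-ins:
advice, reward = counterfactual_round(
    oracle_answer=lambda: "some advice in no more than X characters",
    p=0.1,
    judge=lambda a: 1.0,             # placeholder usefulness evaluation
)
```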
While this isn’t a solution, you could get an associate membership at Alcor. It costs only $60 a year. The advantage (I think) is that you could fill out all the paperwork required to get cryopreserved (this can take a while). Consequently, if you get a fatal diagnosis and can raise the needed funds ($80,000 for neurocryopreservation), you could get preserved. https://alcor.org/BecomeMember/associate.html
It’s a question of acceleration, not just speed.
I think the expansion of the universe means you don’t have to decelerate.
IQ test results (or SAT scores) of close relatives. IQ tests are an imperfect measure of general intelligence. Given the large genetic component to general intelligence, knowing how someone’s sibling did on an IQ test gives you additional useful information about that person’s general intelligence, even if you already know that person’s own IQ test score.
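To make that precise with a toy model (my addition, not from the comment): write a measured score as latent ability plus noise, $S_i = g_i + e_i$, with $g_i \sim N(\mu, \sigma_g^2)$, $e_i \sim N(0, \sigma_e^2)$, and siblings’ latent abilities correlated at $\rho > 0$. Then

$$E[g_1 \mid S_1, S_2] = \mu + \beta_1 (S_1 - \mu) + \beta_2 (S_2 - \mu), \qquad \beta_2 \propto \rho\,\sigma_g^2\,\frac{\sigma_e^2}{\sigma_g^2 + \sigma_e^2} > 0,$$

so the sibling’s score $S_2$ only stops adding information when tests are noiseless ($\sigma_e^2 = 0$) or siblings’ abilities are uncorrelated ($\rho = 0$).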
Whatever answer you give, it should be the same as your answer to the question “How do S-risk scenarios impact the decision to wear a seat belt when in a car?”, since both actions increase your expected lifespan and so, if you believe that S-risks are a threat, increase your exposure to them. If there are a huge number of “yous” in the multiverse, some of them are going to be subject to S-risks; if cryonics causes this particular you to survive for a very long time in a situation where you are not subject to S-risks, it will reduce the fraction of yous in the multiverse subject to S-risks.
Alcor is my cryonics provider.
What is it? I don’t remember turbocharging from CfAR.
Yes, genetics + randomness determines most variation in human behavior, but the SSC/LW stuff has helped provide some direction and motivation.
My son is winning. Although only 13, he received a 5 (the highest score) on the AP Calculus BC and the Java programming AP exams. He is currently taking a college-level course in programming at Stanford Online High School (Data Structures and Algorithms), and he works with a programming mentor I found through SSC. He reads SSC and has read much of the sequences. His life goal is to help program a friendly superintelligence. I’ve been reading SSC, Overcoming Bias, and LessWrong since the beginning.
Thanks for the positive comment on my chapter. I’m going to be doing more work on AGI and utility functions so if you (or anyone else) has any further thoughts please contact me.
A friend does advertising for small businesses in Massachusetts. He says that his clients have trouble hiring people who are not on drugs for low-skilled jobs.
I’ve started creating a series of YouTube videos on the dangers of artificial general intelligence.
(1) Agreed, although I would get vastly more resources to personally consume! Free energy is probably the binding constraint on computation, which in turn is probably the post-singularity binding constraint on meaningful lifespan (see the note after these points).
(2) An intelligence explosion might collapse to minutes the time between when humans could walk on Mars and when my idea becomes practical to implement.
(3) Today offense is stronger than defense, yet I put a high probability on my personally being able to survive another year.
(4) Perhaps. But what might go wrong is a struggle for limited resources among people with sharply conflicting values. If, today, a small group of people carefully chosen by some leader such as Scott Alexander could move to an alternate earth in another Hubble volume, and he picked me to be in the group, I would greatly increase my estimate of the civilization I’m part of surviving a million years.
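On point (1), a standard reference point for why free energy bounds computation (my addition, not from the thread): Landauer’s principle gives the minimum free energy dissipated per irreversible bit operation,

$$E_{\min} = k_B T \ln 2 \approx 3 \times 10^{-21}\ \text{J at } T = 300\ \text{K},$$

so a fixed free-energy budget translates into a fixed budget of irreversible computation, and hence of subjectively experienced time.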
Because of the expansion of space, I think that if you get far enough away from earth, you will never be able to return even if you travel at the speed of light. If we become a super-advanced civilization, we could say that if you want to colonize another solar system, we will put you on a ship that won’t stop until it is sufficiently far from earth that neither you nor any of your children will be able to return. Given relativistic time dilation, if this ship can move fast enough it won’t take too long in ship time to reach such a point. (I haven’t read everything at the links, so please forgive me if you have already mentioned this idea.)
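For concreteness (my numbers, under standard ΛCDM assumptions, not from the comment): the relevant cutoff is the cosmic event horizon, the comoving distance

$$d_{\text{EH}} = c \int_{t_0}^{\infty} \frac{dt}{a(t)} \approx 16\text{–}17\ \text{billion light-years},$$

where $a(t)$ is the cosmic scale factor. Anything that gets beyond this comoving distance from earth can never signal earth again, let alone return, even at light speed.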
If there was a decentralized singularity and offense proved stronger than defense, I would consider moving to a light cone that could never intersect with the light cone of anyone I didn’t trust.
Yes, but Adams explains at length how Trump is a master persuader, as with, for example, this Tweet: “The day President Trump made his critics compare The Boy Scouts of America to Hitler Youth.” A lot of what Adams says is like P vs. NP: hard to figure out yourself, but seemingly obvious once someone explains it to you.
What is your evidence that he is a shill? Millions of Americans support Trump, are they all shills?
Adams makes lots of falsifiable claims, but not about Trump’s character.
Matthew 22:21 Jesus said “Render to Caesar the things that are Caesar’s”.