Hello! I’m here because...well, I’ve read all of HPMOR, and I’m looking for people who can help me find the truth and become more powerful. I work as an engineer and read textbooks for fun, so hopefully I can offer some small insights in return.
I’m not comfortable with death. I’ve signed up for cryonics, but I still perceive that option as risky. As a rough estimate, current medical research spending is about 3% of GDP and extends lifespans by about 2 years per decade. I guess that if medical research spending were increased to 30% of GDP, lifespan gains would outpace the passage of time and most of us would live forever while feeling increasingly healthy. Unfortunately, raising taxes to achieve this is not realistic—doubling taxes for an uncertain return is a hard sell, and I have been unable to find research quantifying the link between public research spending and improvements in healthcare technology.

Another approach is to invent a technology that grows the overall economy 10x: a practical self-replicating robot. This is possible in principle (as demonstrated by Hod Lipson in 2006, and by FANUC’s robot-arm factories daily), but I am not currently a good enough programmer to design and build a fully automated RepRap assembly system in a reasonable amount of time. Also, there are many smart and innovative people at Willow Garage, FANUC, and other similar organizations, and it seems unlikely I could exceed the slow, incremental progress of those groups.

A third option, trying to create superhuman AI to make self-replicating robots for me, is even more difficult and unlikely. A fourth option, not taking heroic responsibility, would make me uncomfortable because I’m not that optimistic about the future. As it is, having dropped out of a PhD program, I’m not confident in my ability to complete such a large project. Any practical help would be appreciated, as I would prefer not to rely on the untestable promises of quantum immortality, or on faith that life is a computer game.
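The arithmetic behind that guess, as a minimal sketch—assuming (my assumption, not an established fact) that lifespan gains scale linearly with research spending, and taking the 3% and 2-years figures above as rough estimates:

```python
# Back-of-envelope check: if 3% of GDP buys 2 years of lifespan per
# decade, does 30% of GDP buy more than 10 years per decade (i.e.,
# does life expectancy rise faster than calendar time passes)?
# Linear scaling is an assumption, not a demonstrated fact.
current_share = 0.03    # medical research as fraction of GDP (rough estimate)
current_gain = 2.0      # years of lifespan gained per decade at that level
proposed_share = 0.30

scaled_gain = current_gain * (proposed_share / current_share)
print(scaled_gain)        # years gained per decade under linear scaling
print(scaled_gain > 10.0) # True would mean lifespan outruns calendar time
```

Of course, returns to research spending are almost certainly sublinear at some point, so this is an upper-bound style of argument, not a prediction.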
Thanks for the thoughtful reply!
Possible experiments could include:
Simulate Prisoner’s Dilemma agents that can run each other’s code. Add features to the competition (e.g. group identification, resource gathering, paying a cost to improve intelligence) to better model a mix of humans and AIs in a society. Try to simulate what happens when some agents gain much more processing power than others, and what conditions make this a winning strategy. If possible, match results to real-world examples (e.g. competition between people with different educational backgrounds). Based on these results, make a prediction of the returns to increasing intelligence for AIs.
Create an algorithm for a person to follow recommendations from information systems—in other words, write a flowchart that would guide a person’s daily life, including steps for looking up new information on the Internet and adding to the flowchart. Try using it. Compare the effectiveness of this approach with a similar approach using information systems from 10 years ago, and from 100 years ago (e.g. books). Based on these results, make a prediction for how quickly machine intelligence will become more powerful over time.
Identify currently-used measures of machine intelligence, including tests normally used to measure humans. Use Moore’s Law and other data to predict the rate of intelligence increase using these measures. Make a prediction for how machine intelligence changes with time.
Write an expert system for making philosophical statements about itself.
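The first experiment above could start from a minimal sketch like this one, where "running each other’s code" is crudely modeled by passing each agent its opponent’s strategy function, which it may call. The agent names and payoff values here are my own illustrative choices, not anything from a standard library:

```python
# Minimal one-shot Prisoner's Dilemma tournament where agents can
# inspect (call) their opponent's strategy. Standard PD payoffs.
C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def always_defect(opponent):
    return D

def always_cooperate(opponent):
    return C

def mirror(opponent):
    # Cooperate only if the opponent would cooperate against a cooperator.
    # (Self-play is excluded below, which avoids infinite recursion when
    # mirror would otherwise simulate itself.)
    return C if opponent(always_cooperate) == C else D

def play(a, b):
    move_a, move_b = a(b), b(a)
    return PAYOFF[(move_a, move_b)]

agents = [always_defect, always_cooperate, mirror]
scores = {f.__name__: 0 for f in agents}
for a in agents:
    for b in agents:
        if a is not b:
            sa, sb = play(a, b)
            scores[a.__name__] += sa
            scores[b.__name__] += sb
print(scores)
```

From here, the proposed extensions (resource gathering, paying to improve intelligence, asymmetric processing power) would amount to giving some agents a larger budget of opponent-simulation calls than others and tracking accumulated payoffs as resources.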
In general, when presenting a new method or applied theory, it is good practice to provide the most convincing data possible—ideally experimental data or at least simulation data of a simple application.
You’re right—I am worried about the future, and I want to make accurate predictions, but it’s a hard problem, which is no excuse. I hope you succeed in predicting the future. I assume your goal is to make a general prediction theory that accurately assigns probabilities to future events, e.g. a totalitarian AI appearing. I’m trying to say that your theory will need to accurately model past false predictions as well as past true predictions.
I agree that is a possible outcome. I expect multiple AIs of comparable strength to appear at around the same time, because I imagine the power of an AI depends primarily on its technology level and its access to resources. I expect multiple AIs (or a mix of AIs and humans) will cooperate to prevent any one agent from obtaining a monopoly and destroying all others, as human societies have often done (especially recently, but not always). I also expect AIs will stay at similar technology levels, because it’s much easier to steal a technology than to discover it in the first place.