For this effect to work, it needs a coherence time of at least 100 microseconds, which is long relative to what you would expect in a warm and wet environment, but short compared to the time scales humans usually operate on.
Jeffrey Heninger
Notes on an Experiment with Markets
Against a General Factor of Doom
Whole Bird Emulation requires Quantum Mechanics
Adding a compass is unlikely to also make the emulated bird disoriented when it is exposed to a weak magnetic field oscillating at the right frequency, which means that the emulated bird will not behave like the real bird in this scenario.
You could add this phenomenon in by hand: attach some detector to your compass and have it turn the compass off when such fields are present.
More generally, adding in these features ad hoc will likely work for the things that you know about ahead of time, but the emulation is very unlikely to behave like the bird outside of its training distribution. If you have a model of the bird that includes the relevant physics for this phenomenon, it is much more likely to work outside of its training distribution.
You Can’t Predict a Game of Pinball
Unfortunately, decisions about units are made by a bunch of unaccountable bureaucrats. They would rather define the second in terms that only the techno-aristocracy can understand, rather than using a definition that everyone can understand. It’s time to turn control over our systems of measurement back to the people!
#DemocratizeUnits
Superintelligence Is Not Omniscience
It seems like your comment is saying something like:
These restrictions are more relevant to an Oracle than to other kinds of AI.
In practice, smoothness interacts with measurement: we can usually measure the higher-order bits without measuring lower-order bits, but we can’t easily measure the lower-order bits without the higher-order bits. Imagine, for instance, trying to design a thermometer which measures the fifth bit of temperature but not the four highest-order bits. Probably we’d build a thermometer which measured them all, and then threw away the first four bits! Fundamentally, it’s because of the informational asymmetry: higher-order bits affect everything, but lower-order bits mostly don’t affect higher-order bits much, so long as our functions are smooth. So, measurement in general will favor higher-order bits.
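The thermometer analogy in the quoted passage can be sketched in code. This is a toy model of my own (the 8-bit resolution and the function name are assumptions, not anything from the discussion): to report the fifth-highest-order bit, the "instrument" has to quantize the whole reading first and then discard the top four bits.

```python
def fifth_bit(temperature, full_scale, bits=8):
    """Toy 'thermometer': quantize a reading to `bits` bits of full
    scale, then return the fifth-highest-order bit.

    Note that we had to measure the whole quantized value first and
    only then throw the top four bits away, as the quoted passage
    predicts.
    """
    q = int(temperature / full_scale * (1 << bits))  # measure all bits
    return (q >> (bits - 5)) & 1                     # keep only bit 5

fifth_bit(0.5, 1.0)      # reading exactly at half scale: bit 5 is 0
fifth_bit(0.53125, 1.0)  # slightly warmer: bit 5 flips to 1
```

The asymmetry shows up in the code itself: there is no way to compute `q >> (bits - 5)` without having already computed the higher-order bits of `q`.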
There are examples of measuring lower-order bits without measuring higher-order bits. If something is valuable to measure, there’s a good chance that someone has figured out a way to measure it. Here is the most common example of this that I am familiar with:
When dealing with lasers, it is often useful to pass the laser through a beam splitter, so that part of the beam travels along one path and part along another. These two beams are often brought back together later. The combination might have either constructive or destructive interference: constructive if the difference in path lengths is an integer multiple of the wavelength, and destructive if it is a half-integer multiple. This allows you to measure changes in the difference in path lengths without knowing how many wavelengths long either path is.
One place this is used is in LIGO. LIGO is an interferometer with two multi-kilometer-long arms. It measures extremely small ($10^{-19}$ m) changes in the difference between the two arm lengths, caused by passing gravitational waves.
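A minimal sketch of the interference arithmetic (the $\cos^2$ form assumes two equal-amplitude beams; the function name is mine, and the 1064 nm figure, LIGO's laser wavelength, is used here only for illustration):

```python
import math

def interference_intensity(delta, wavelength):
    """Relative output intensity of a two-arm interferometer with
    equal-amplitude beams and path-length difference `delta`.

    Constructive (intensity 1) when delta is an integer multiple of
    the wavelength; destructive (intensity 0) at half-integer
    multiples.
    """
    return math.cos(math.pi * delta / wavelength) ** 2

lam = 1064e-9  # m; LIGO's Nd:YAG laser wavelength

interference_intensity(0.0, lam)         # constructive: 1.0
interference_intensity(lam / 2, lam)     # destructive: ~0.0
interference_intensity(1000 * lam, lam)  # constructive again: 1.0
```

The last two calls show the point in the text: the reading depends only on the path difference modulo one wavelength, so it reveals the lower-order bits of the length difference while telling you nothing about how many whole wavelengths long either arm is.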
The Broader Fossil Fuel Community
A TAI which kills all humans might also doom itself
Horizontal and Vertical Integration
What The Lord of the Rings Teaches Us About AI Alignment
The Lord of the Rings tells us that the hobbits’ simple notion of goodness is more effective at resisting the influence of a hostile artificial intelligence than the more complicated ethical systems of the Wise.
The miscellaneous quotes at the end are not directly connected to the thesis statement.
One of the tactics listed on RationalWiki’s description of the AI-box experiment is:
Jump out of character, keep reminding yourself that money is on the line (if there actually is money on the line), and keep saying “no” over and over
From Yudkowsky’s description of the AI-Box Experiment:
The Gatekeeper party may resist the AI party’s arguments by any means chosen – logic, illogic, simple refusal to be convinced, even dropping out of character – as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.
My Current Thoughts on the AI Strategic Landscape
I don’t believe that “current AI is at human intelligence in most areas”. I think that it is superhuman in a few areas, within the human range in some areas, and subhuman in many areas, especially areas where the things you’re trying to do are not well-specified tasks.
I’m not sure how to weight people who think most about how to build AGI vs. more general AI researchers (median says HLAI in 2059, p(Doom) 5–10%) vs. forecasters more generally. There’s a difference in how much people have thought about it, but also selection bias: most people who are skeptical that AGI is coming soon are not going to work in alignment circles or at an AGI lab. The relevant reference class is not the Wright Brothers, since hindsight tells us that they were the ones who succeeded. One relevant reference class is the Society for the Encouragement of Aerial Locomotion by means of Heavier-than-Air Machines, founded in 1863, although I don’t know what their predictions were. It might also make sense to include many groups of futurists focusing on many potential technologies, rather than just on one technology that we know worked out.
I find the idea that intelligence is less useful for sufficiently complex systems or sufficiently long time frames interesting. Or at least the kind of intelligence that helps you make predictions. My intuition is that there is something there, although it’s not quite the thing you’re describing.
I agree that the optimal predictability of the future decays as you try to predict farther into the future. If the thing you’re trying to predict is chaotic in the technical sense, you can make this into a precise statement.
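As a concrete illustration of that decay, here is a toy chaotic system, the logistic map at $r = 4$ (the parameters and starting point are my own choices, not from the discussion):

```python
def logistic_gap(x0, eps=1e-10, r=4.0, steps=60):
    """Track two trajectories of the chaotic logistic map
    x -> r * x * (1 - x) that start a distance eps apart, and return
    the gap between them at each step."""
    a, b, gaps = x0, x0 + eps, []
    for _ in range(steps):
        a, b = r * a * (1 - a), r * b * (1 - b)
        gaps.append(abs(a - b))
    return gaps

gaps = logistic_gap(0.3)
# The gap grows roughly exponentially (the Lyapunov exponent is ln 2
# for r = 4) until it saturates at order 1, after which knowing the
# initial condition to ten decimal places tells you essentially
# nothing about where the trajectory is.
```

The horizon at which the gap saturates is the precise sense in which optimal predictability decays with prediction time.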
I disagree that the skill needed to match this optimum typically has a peak. Even for extremely chaotic systems, it is typically possible to find some structure that is not immediately obvious. Heuristics are sometimes more useful than precise calculations, but building good heuristics and knowing how to use them is itself a skill that improves with intelligence. I suspect that the skill needed to reach the optimum usually increases monotonically with longer prediction times or more complexity.
Instead, the peak appears in the marginal benefit of additional intelligence. Consider the difference in prediction ability between two different intelligences. At small times / low complexity, there is little difference because both of them are very good at making predictions. At large times / complexity, the difference is again small because, even though neither is at the optimum, the small size of the optimum limits how far apart they can be. The biggest difference appears at intermediate scales, where there are still good predictions to be made, but they are hard to make.
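A toy model of this peak (the exponential decay of prediction skill, and the particular intelligence values, are my own simplifying assumptions):

```python
import math

def skill(intelligence, horizon):
    """Toy model: prediction skill decays exponentially with the
    prediction horizon, and more intelligence slows the decay."""
    return math.exp(-horizon / intelligence)

smart, dumb = 4.0, 2.0
horizons = [x * 0.25 for x in range(81)]
gap = [(t, skill(smart, t) - skill(dumb, t)) for t in horizons]
peak_t, peak_gap = max(gap, key=lambda p: p[1])
# The gap is ~0 at t = 0 (both predict well) and at large t (neither
# can predict), and it peaks at an intermediate horizon.
```

In this model the peak sits at $t = 4\ln 2 \approx 2.77$, with a maximum gap of $0.25$: exactly the intermediate-scale bump described above, even though the skill of each predictor decays monotonically.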
A picture of how I think this works, similar to Figure 1, is linked here: https://drive.google.com/file/d/1-1xfsBWxX7VDs0ErEAc716TdypRUdgt-/view?usp=sharing
As long as there are some other skills relevant for most jobs that intelligence trades off against, we would expect the strongest incentives for intelligence to occur in the jobs where the marginal benefit of additional intelligence is the largest.