Deadlines and AI theory

An AI is a real-time algorithm: it has to respond to situations in real time. Real-time systems have to trade time for accuracy, and/or face deadlines.
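To make that trade-off concrete, here is a minimal toy sketch of my own (the series and the budgets are arbitrary): an "anytime" computation that refines its answer only for as long as the deadline allows, and accepts whatever accuracy that buys.

```python
# Sketch of the time/accuracy trade-off: refine an answer until the deadline,
# then return whatever accuracy the time budget bought.
import time, math

def leibniz_pi_until(deadline_s):
    # Leibniz series for pi: slow, but every extra term buys a bit more accuracy.
    total, k = 0.0, 0
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        total += (-1.0) ** k / (2 * k + 1)
        k += 1
    return 4.0 * total, k

for budget in (0.001, 0.01, 0.1):
    approx, terms = leibniz_pi_until(budget)
    print(f"budget {budget:>5.3f}s: {terms:>9} terms, error {abs(approx - math.pi):.2e}")
```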

Straightforward utility maximization may look viable for multiple-choice questions, but for write-in problems, such as technological innovation, the number of choices is so huge (1000 variables with 10 values each give 10^1000 combinations) that an AI of any size (even a galaxy-spanning civilization of Dyson spheres) has to employ generative heuristics. The same goes for utility maximization in the presence of 1000 unknowns that have 10 values each: if the values interact non-linearly, all the combinations, or a representative number of them, have to be processed. Here one has to trade accuracy in evaluating the utility of each case for the number of cases processed.
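A back-of-the-envelope sketch of why exhaustive evaluation is hopeless here; the operations-per-second figure is a rough Landauer-style assumption of mine for scale, not a number from the argument above:

```python
# Why exhaustive search over a "write-in" design space is hopeless.
# The compute figure is an illustrative assumption (very roughly, Landauer-limited
# computation on a star's entire power output), not a claim from the post.
import math

VARIABLES = 1000
VALUES_PER_VARIABLE = 10
combinations = VALUES_PER_VARIABLE ** VARIABLES   # 10**1000 candidate designs

DYSON_OPS_PER_SECOND = 1e47   # assumed upper bound for a Dyson-sphere computer
SECONDS_PER_YEAR = 3.15e7
evaluable_per_year = DYSON_OPS_PER_SECOND * SECONDS_PER_YEAR

print(f"candidate designs:          10^{len(str(combinations)) - 1}")
print(f"designs evaluable per year: ~10^{round(math.log10(evaluable_per_year))}")
```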

In general, an AI of any size (excluding the possibility of unlimited computational power within finite time and space) will have to trade accuracy of its adherence to its goals for time, and thus will have to implement methods that have different goals but are computationally faster, whenever those goals are reasoned to increase expected utility once the time constraints are taken into account.
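A toy illustration of that trade, with numbers invented purely for the example: under a deadline, a fast proxy method can have higher expected utility than the exact, goal-faithful evaluation, even though it tracks the true goal less well.

```python
# Toy expected-utility comparison. All numbers are made up for illustration:
# a slow, faithful evaluator rarely finishes before the deadline; a fast
# proxy usually does, but tracks the true goal less accurately.
methods = {
    # name: (P(finishes before deadline), utility of its answer if it finishes)
    "exact goal evaluation": (0.05, 1.00),
    "fast proxy heuristic":  (0.95, 0.80),
}

for name, (p_in_time, utility) in methods.items():
    print(f"{name:>24}: expected utility = {p_in_time} * {utility} = {p_in_time * utility:.2f}")
```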

Note that in a given time, an algorithm with lower big-O complexity is able to process a dramatically larger N, and the gap increases with the time allocated (and with CPU power). For example, within t operations you can bubble-sort a number of items proportional to the square root of t, but you can quicksort a number of items proportional to t/W(t), where W is the product-log function; this grows almost linearly for large t. So in situations where exhaustive search is not possible, the gap between implementations increases with extra computing power; the larger AIs benefit more from optimizing themselves.
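A quick sketch of those two capacities, assuming bubble sort costs about n^2 operations and quicksort about n·ln(n), and using SciPy's lambertw for the product-log:

```python
# How many items fit into a fixed operation budget t under an n^2 (bubble sort)
# versus an n*ln(n) (quicksort) cost model.
import math
from scipy.special import lambertw   # the product-log function W

def bubble_capacity(t):
    return math.isqrt(t)              # n^2 ≈ t   =>  n ≈ sqrt(t)

def quick_capacity(t):
    return int(t / lambertw(t).real)  # n·ln(n) ≈ t  =>  n ≈ t / W(t)

for t in (10**6, 10**9, 10**12):
    print(f"t = 10^{round(math.log10(t)):>2}: bubble ≈ {bubble_capacity(t):>10,}, "
          f"quick ≈ {quick_capacity(t):>14,}")
```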

The constraints get especially hairy when one considers a massively parallel system operating with speed-of-light lag between the nodes, where the time of retrieval is O(n^(1/3)).
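Where the cube root comes from, as a rough sketch: n nodes packed at fixed density in 3D space occupy a region of radius proportional to n^(1/3), so the worst-case light lag to fetch data grows with the cube root of the machine's size. The spacing and units below are arbitrary assumptions.

```python
# n nodes at fixed density in 3D: radius ~ n^(1/3), so speed-of-light retrieval
# latency to the far edge grows with the cube root of n. Numbers are arbitrary.
import math

C = 3.0e8            # speed of light, m/s
NODE_SPACING = 1e-2  # assumed 1 cm between adjacent nodes

for n in (10**9, 10**15, 10**21):
    radius = NODE_SPACING * n ** (1 / 3)   # linear size of the machine, metres
    latency = radius / C                   # one-way light lag to the far edge
    print(f"n = 10^{round(math.log10(n)):>2}: radius ≈ {radius:12.1f} m, lag ≈ {latency:.2e} s")
```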

This seems to be a big issue for an FAI going FOOM. The FAI may, with perfectly friendly motives, abandon the proved-friendly goals for simpler-to-evaluate, simpler-to-analyze goals that may (with 'good enough' confidence, which need not necessarily be >0.5) produce friendliness as an instrumental value, if that increases the expected utility given the constraints. I.e. the AI can trade 'friendliness' for 'smartness' when it expects the 'smarter' self to be more powerful but less friendly, and this trade increases the expected utility.

Do we accept such gambles as inevitable in the process of building the FAI? Do we ban such gambles, and face the risk that a uFAI (or any other risk) may beat our FAI even if it starts later?

In my work as a graphics programmer, I often face specifications that are extremely inefficient to comply with precisely. Maxwell's equations are an extreme example: far too slow to process to be practical for computer graphics. I often have to implement code which is not certain to comply well with the specification, but which gets the project done in time. I can't spend CPU-weeks rendering an HD image for cinema at the ridiculously high resolution that is used, much less in real-time software. I can't carelessly trade CPU time for my work time when the CPU time is a major expense, even though I am well paid for my services. One particular issue is with applied statistics, e.g. photon mapping. With the naive approach the RMS noise falls off as 1/sqrt(cpu instructions), while the really clever solutions fall off as 1/(cpu instructions), and the gap between the naive and the efficient implementation has been increasing due to Moore's law. (We can expect it to start decreasing some time in the far future, when the efficient solutions are indistinguishable from reality without requiring huge effort on the part of the artists; alas, we are not quite there yet, and it is not happening for another decade or two.)
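To show the 1/sqrt(N) behaviour of the naive approach, here is a toy Monte Carlo experiment (estimating pi rather than rendering anything; it stands in for photon mapping only to show the error scaling):

```python
# RMS error of a naive Monte Carlo estimate falls off as 1/sqrt(N).
import random, math

def pi_rms_error(n, trials=30):
    # RMS error of the naive Monte Carlo estimate of pi over independent runs.
    sq_err = 0.0
    for _ in range(trials):
        hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0 for _ in range(n))
        sq_err += (4.0 * hits / n - math.pi) ** 2
    return math.sqrt(sq_err / trials)

for n in (10**2, 10**3, 10**4, 10**5):
    print(f"N = {n:>7,}: RMS error ≈ {pi_rms_error(n):.4f}   (1/sqrt(N) = {1 / math.sqrt(n):.4f})")
```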

Is there a good body of work on the topic? (good work would involve massive use of big-O notation and math)

edit: ok, sorry, period in topic.