Can we model the technological singularity as a phase transition?


The technological singularity is quite similar to what happens to a system near a phase transition. If this is indeed the case, and the mechanisms underlying the singularity admit the same mathematical description as the mechanisms underlying a phase transition, we can potentially use this knowledge to estimate when we should expect the singularity.
This post is organized as follows. First, I remind those who are far from physics what a phase transition is. Second, I discuss why it resembles the singularity. Third, I suggest how we can make a quantitative prediction based on this analogy. Fourth, I finally tell you why all this is important.

What is a phase transition

A phase transition is a transition from one phase to another. (Duh!) The simplest examples are the solid-liquid and liquid-gas transitions. A little more complicated is the transition from ferromagnet to paramagnet (at the Curie point). The transition from one phase to another happens at a set of critical parameters. For simplicity, we will just talk about the temperature here (like the temperature of boiling or of freezing). When the temperature $T$ approaches the critical temperature $T_c$ of the transition from one phase to another, many quantities (susceptibility to the magnetic field, for example) demonstrate power-law behavior, i.e., a quantity $Q$ depends on the temperature as

$$Q(T) \propto |T - T_c|^{-\gamma},$$

where $\gamma$ controls this power-law behavior and is called the critical exponent. If $\gamma > 0$, the quantity diverges, i.e., approaches infinity, as $T$ approaches $T_c$. It does not mean the quantity actually becomes infinite (since the size of the system is finite, that is impossible), but close to the phase transition such growth is a very good description. One such quantity, common to practically all systems, is the correlation length, which basically tells how far apart two points in the medium (a bucket of water, or a magnet, for example) can be and still influence each other in a noticeable way. This quantity always diverges near the phase transition, i.e., basically any two points become interdependent (again, limited by the size of the system). The distance between two points no longer matters in this case.
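To make the power law concrete, here is a minimal Python sketch on synthetic data. The exponent value 0.63 is the well-known correlation-length exponent of the 3D Ising model; the amplitude and the temperature grid are arbitrary choices for illustration. The point is only that a power law is a straight line on a log-log plot, so a linear fit recovers the critical exponent:

```python
import numpy as np

# Correlation length near a critical point: xi ~ |T - Tc|^(-nu).
# nu = 0.63 is the known exponent of the 3D Ising model; the amplitude A
# and the temperature grid are arbitrary illustration choices.
Tc, nu, A = 1.0, 0.63, 2.0
T = np.linspace(1.001, 1.1, 50)    # approach Tc from above
xi = A * (T - Tc) ** (-nu)         # synthetic "measurements"

# A power law is a straight line on a log-log plot, so a linear fit of
# log(xi) against log(T - Tc) recovers the exponent as minus the slope.
slope, _ = np.polyfit(np.log(T - Tc), np.log(xi), 1)
print(f"recovered critical exponent: {-slope:.3f}")
```

The same log-log fit is what one would apply to real measurements, where the recovered exponent would of course carry noise.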

Does it look like the technological singularity?
First of all, of course, we don’t have a temperature. Instead, we have time $t$ that approaches the moment of singularity $t_c$. As $t$ approaches $t_c$, the following quantities should explode:
- knowledge about the world (call it information, or whatever);
- technology level (to make it more distinct from knowledge, let’s say that it is an extensive quantity, so producing copies of existing devices increases it);
- interdependence between remote agents (people separated geographically).
Of course, it does not mean that any of these quantities will actually reach infinity: just as with a standard phase transition, we have finite-size effects.
The interdependence between people is an obvious analog of the correlation length, which is common to all systems. The other quantities are specific to the system at hand (humanity).
The qualitative analogy kind of makes sense (well, at least to me). Let us see what can be done to make it quantitative rather than qualitative.

Suggestions for a stricter analysis

The way to make this statement more quantitative would be to construct measurable quantities corresponding to knowledge, technology, and interdependence. This is not trivial. Should we just look at all the information on the Internet? Or all papers published? Or all code written? Or maybe we should look at how much information we can produce and consider computation power? Or maybe some combination? Since we have a finite amount of data for each of those parameters, and we can make up even more of them, we can easily get a power law simply by cherry-picking. Finally, even if we are lucky and get nice power-law behavior for a single very reasonable parameter, there is no guarantee that this parameter will not saturate soon, terminating the power law.

To avoid such mistakes, let us imagine the following situation: our ancestors performed the procedure described above many years (or centuries) ago and made a prediction based on it. Now we can see how wrong they were.
So, if someone had looked for a power law before the 1960s, assuming they had the necessary data, they would likely have found the power law that is very well known today: the hyperbolic growth of the Earth’s population. Before the era of computers, the number of people could serve as a good estimate of computation power, and thus, in some sense, of the derivative of information (more computation power means more information gain per unit time). Our ancient singularity scientist could then have predicted when the singularity is supposed to happen, obtaining 2026. This was actually done here.
The growth of the population stopped being hyperbolic around 1960-1970; at the same time, however, the structure of information gain changed: in those same decades, computers became widespread. Since then, the number of people has been a bad measure of computation power.
However, how wrong was the predicted date? Current predictions for the singularity date vary from as early as 2030 to as late as 2100. A prediction of 2026 made in 1960 placed the singularity 66 years in the future (from the “now” of the prediction), while the correct answer would be somewhere between 70 and 140 years. Even in the worst case the prediction is just slightly more than a factor of two off, and in the best case it is almost exact.
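As a sanity check, the 1960s-style estimate can be reproduced in a few lines. The world-population figures below are rough, commonly quoted estimates (in billions), and the fit is a crude linearization of the hyperbolic law $N(t) = C/(t_c - t)$: since $1/N$ is then linear in $t$, the singularity year is where the fitted line crosses zero. This unweighted fit overweights the early, least reliable data points, so it lands a few decades later than the 2026 mentioned above; it is a sketch of the method, not a careful reconstruction:

```python
import numpy as np

# Rough world-population estimates (year, billions): approximate values
# commonly quoted in textbooks, used here only for illustration.
years = np.array([1000, 1500, 1800, 1900, 1950], dtype=float)
pop   = np.array([0.30, 0.50, 0.98, 1.65, 2.52])

# Hyperbolic growth N(t) = C / (t_c - t) implies 1/N is linear in t:
# 1/N = t_c/C - t/C, so t_c is the year where the fitted line hits zero.
slope, intercept = np.polyfit(years, 1.0 / pop, 1)
t_c = -intercept / slope
print(f"extrapolated singularity year: {t_c:.0f}")
```

A weighted fit, or a slightly different exponent in the growth law, shifts the extracted date by decades, which is exactly the kind of sensitivity the cherry-picking warning above is about.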
This history lesson teaches us two things. First, the quantity we suggest will very likely stop exhibiting power-law behavior before the singularity. Second, it is quite possible that we will still get the right order of magnitude for the date (at the very least, such a prediction might be more reasonable and data-based than current predictions built on the intuition of forecasters).
So, the question is now, what should be our quantities?

What quantities would work for the singularity?

The first idea, based on the discussion above, is total computation power. It makes sense to try adding humans and computers together. Obviously, the computation power of the brain is far greater than that of a typical PC, but most of it does not lead to knowledge increments. Thus, it might be that on average one person is equivalent to a smartphone, with the coefficient of proportionality determined from the data fit. That is the first thing to check. If it works, cool. If not, maybe we should think harder. Maybe we should not take all computation power into account, but only the part used for scientific projects? How much computer power goes into them, and how many human-hours? Maybe something else?
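As a proof of concept for this fitting procedure, here is a sketch on purely synthetic data. Total "compute" is assumed to follow the hyperbolic law $Q(t) = C/(t_c - t)$ and is split between computers and people with a time-varying share; a grid search over the person-to-device coefficient, combined with a straight-line fit of $1/Q$ against $t$, then recovers both the coefficient and the singularity date. All parameter values below (a true date of 2045, a coefficient of 1) are invented; only the recovery procedure matters:

```python
import numpy as np

# SYNTHETIC demonstration of the proposed fit: total compute
# Q(t) = computers(t) + alpha * people(t) is assumed to follow the
# hyperbolic law Q(t) = C / (t_c - t). The "true" parameters below
# are invented; the point is that the procedure recovers them.
t = np.linspace(1950, 2020, 36)
t_c_true, alpha_true, C = 2045.0, 1.0, 500.0
Q_true = C / (t_c_true - t)
share = np.linspace(0.1, 0.8, t.size)      # computers' growing share of compute
computers = Q_true * share
people = Q_true * (1.0 - share) / alpha_true

# Grid search over the person<->device coefficient alpha. For the correct
# alpha, the hyperbolic law makes 1/Q exactly linear in t, so the residual
# of a straight-line fit measures how well each alpha works.
best = None
for alpha in np.linspace(0.5, 1.5, 21):
    y = 1.0 / (computers + alpha * people)
    slope, intercept = np.polyfit(t, y, 1)
    resid = np.sum((y - (slope * t + intercept)) ** 2)
    if best is None or resid < best[0]:
        best = (resid, alpha, -intercept / slope)

_, alpha_fit, t_c_fit = best
print(f"alpha = {alpha_fit:.2f}, singularity year = {t_c_fit:.1f}")
```

With real, noisy data the residual landscape would be much flatter, and the confidence interval on the date would be the interesting output rather than a single year.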
I think these ideas are at least worth checking. Unfortunately, my googling abilities are sufficiently below average that I was not able to retrieve the data I needed to check the idea. Here I will need your help.
Why is it worth doing? First, I personally think it is a very nice scientific problem, and if the singularity can indeed be described as a phase transition, that is beautiful.
Second, and most important: if we predict the time of the singularity more precisely, we can prepare for it better. Imagine the data fit clearly shows that the singularity will happen in 6 years. It then suddenly becomes more important than any global problem that bothers humanity now. Even more important than many of my personal problems. If that is the case, the thing I will regret most in my life is that I procrastinated on writing this text for half a year.