More than 5 mins, because it’s fun, but:
For convenience I shall gloss this “12 orders of magnitude” thing as “suddenly impossibly fast”.
Are there energy and thermal implications here? If we did 12 orders of magnitude more computation at today’s energy cost per operation, we could probably only do it underwater at a hydroelectric dam. Things, both our devices and eventually the rest of the atmosphere, would get much warmer.
Disk and memory are now our bottlenecks for everything that used to be compute-intensive. We probably set algorithms to designing future iterations which can be built on existing machinery, and end up with things that work great but are even further into incomprehensibility than current iterations. Adrian Thompson’s evolved-hardware experiment is the classic example: a genetic algorithm designed a tone-discriminator circuit on an FPGA, and the result included an isolated patch of “inoperative” circuitry which the design nonetheless failed to work without. Expect a whole lot more of this kind of magic.
Socially, RIP all encryption from before the compute fairy arrived, and most from shortly afterwards. It’ll be nice to be able to easily break back into old diaries that I encrypted and lost the keys to, but that’s about the most ethical possible application and it’s still not great.
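A back-of-envelope check (my addition, not from the comment above) on what the compute fairy actually buys a brute-forcer: a 10^12 speedup is worth just under 40 bits of symmetric key strength, which instantly finishes off the short keys and weak passphrases of older systems (56-bit DES and the like), even though a well-chosen modern 128-bit key would still leave around 88 bits of work.

```python
import math

# How many bits of key strength does a 10^12 speedup buy a brute-forcer?
speedup = 10**12
bits_gained = math.log2(speedup)
print(f"{bits_gained:.1f} bits")  # → 39.9 bits

# 56-bit DES keys, already marginal, become trivial; a 128-bit key
# still leaves ~88 bits of search, so "RIP" lands hardest on the
# encryption of old diaries and legacy systems.
remaining_for_128 = 128 - bits_gained
print(f"{remaining_for_128:.1f} bits of work remain for a 128-bit key")
```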
Socio-politically, imagine if deep fakes had just suddenly appeared in, like, 2000. Their reception would be very different from what it is today, because today we have a sort of memetic herd immunity—there exists a “we” for “we know things at or below a certain quality or plausibility could be fakes”. ‘We’ train our collective System 1 on a pattern of reacting to probable-fakes a certain way, even though the fakes can absolutely fool someone without those patterns. Letting a technology grow up under pressure both from those who want to use it for “evil” and from those who want to prevent that shapes the tech; a sudden massive influx of computing power would give it no time to be shaped that way.
Programming small things by writing tests and then having your compiler search for a program that passes all the tests gets a lot more feasible. Thus, “small” computers (phone, watch, microwave, newish refrigerator, recent washing machine) can be tweaked into “doing more things”, which in practice will probably mean observing us and trying to do something useful with that data. Useful to whom? Well, that depends. Computers still do what humans ask them to, and I am unconvinced that we can articulate what we collectively mean by “general intelligence” well enough to ask even an impossibly fast machine for it. We could hook it up to some biofeedback and say “make the system which makes me the most excited/frightened/interested”, but isn’t that the faster version of what video games already are?
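A minimal sketch of that test-driven search, with every name invented for illustration: enumerate tiny arithmetic expressions over one input and return the first that passes all the tests. `eval` here is a toy shortcut for a real interpreter; the point is that 12 more orders of magnitude stretch this brute-force idea to much-less-tiny programs.

```python
from itertools import product

def candidates(depth):
    """Yield expression strings over one input x, built from small
    constants and +, -, *. Shallower expressions come out first."""
    atoms = ["x", "1", "2", "3"]
    if depth == 0:
        yield from atoms
        return
    yield from candidates(depth - 1)
    for left, op, right in product(candidates(depth - 1), "+-*",
                                   candidates(depth - 1)):
        yield f"({left} {op} {right})"

def synthesize(tests, max_depth=2):
    """Return the first enumerated expression that passes every
    (input, expected_output) test, or None if none fits in budget."""
    for depth in range(max_depth + 1):
        for expr in candidates(depth):
            if all(eval(expr, {"x": x}) == y for x, y in tests):
                return expr
    return None

# "Write the tests, let the machine find the program": target is 2*x + 1.
tests = [(1, 3), (2, 5), (5, 11)]
print(synthesize(tests))
```

The search returns whatever shallowest expression happens to satisfy the tests, not necessarily the one you had in mind, which is exactly the “works great but incomprehensible” failure mode from the FPGA story.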
Thank you for writing this up. Would you be willing to share what your household’s (you make it sound like a group of more than 2 adults making decisions as peers) stated group goals were around December 2019, March 2020, and December 2020? Your post sounds to me like it describes the experience of falling into several nested loops of optimizing for a proxy or aspect of a goal rather than the goal itself (microcovid measurement and isolation as a proxy for avoiding contracting or spreading a pathogen, absolute minimization of exposure chance as a proxy for general health and well-being, etc.). This makes me wonder what kind of house agreement, or what instantiation of “let’s keep everyone safe and healthy” type goals, could have been sufficient to catch a rationalist in this type of spiral earlier.