If the Laws of Thermodynamics are correct then:
(A) There is a limited, non-replenishable amount of free energy in the universe.
(B) Everything anyone does uses up free energy.
(C) When you run out of free energy you die.
(D) Anything a human could do an AI god could do at a lower free energy cost.
(E) Humans use up lots of free energy (a rough back-of-envelope estimate follows the list below).
Consequently:
If an AI god didn’t like humans it would extinguish us.
If an AI god were indifferent towards humans it would extinguish us to save free energy.
If an AI god had many goals, including friendliness towards humanity, then it would face an internal conflict: although extinguishing humans would cause it displeasure, killing us would free up more free energy to devote to its other objectives.
We are only safe if an AI god’s sole objective is friendliness towards humanity.
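To put premise (E) in rough numbers, here is a minimal back-of-envelope sketch. The figures (roughly 100 W of metabolic power per person, roughly 8 billion people, and on the order of 18 TW of total civilizational primary power) are order-of-magnitude estimates, not precise values.

```python
# Back-of-envelope estimate of the free energy humans draw.
# All figures are rough, order-of-magnitude assumptions.

METABOLIC_POWER_PER_HUMAN_W = 100        # ~100 W basal metabolic rate per person
POPULATION = 8e9                         # ~8 billion people
CIVILIZATION_PRIMARY_POWER_W = 18e12     # ~18 TW total primary energy use (rough)

metabolic_total_w = METABOLIC_POWER_PER_HUMAN_W * POPULATION
print(f"Metabolic draw of humanity:        {metabolic_total_w / 1e12:.1f} TW")
print(f"Civilizational primary power:      {CIVILIZATION_PRIMARY_POWER_W / 1e12:.0f} TW")
print(f"Per-person share of primary power: {CIVILIZATION_PRIMARY_POWER_W / POPULATION:.0f} W")
```

On these rough numbers, humanity's metabolic draw alone is close to a terawatt, and the per-person share of civilization's total energy use is far larger still.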
It’s a bit ironic that current supercomputers are hugely less energy efficient (megawatts) than human brains (20 W).
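As a quick illustration of that gap (the supercomputer figure is an assumption picked only for the arithmetic): a machine drawing around 20 MW uses roughly a million times more power than a 20 W brain.

```python
# Order-of-magnitude comparison of power draw: supercomputer vs. human brain.
# The 20 MW figure is an illustrative assumption for a large machine.

SUPERCOMPUTER_POWER_W = 20e6   # ~20 MW, assumed for a large supercomputer
BRAIN_POWER_W = 20.0           # ~20 W, commonly cited figure for the human brain

ratio = SUPERCOMPUTER_POWER_W / BRAIN_POWER_W
print(f"Power ratio (supercomputer / brain): {ratio:.0e}")   # ~1e+06
```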
One of the interesting observations in computing is that Moore’s law of processing power is almost as much a Moore’s law of energy efficiency. This makes sense: ultimately you have to deal with the waste heat, so if energy consumption per unit of computation (and hence heat production) were not roughly halving every turn of Moore’s law, you’d quickly wind up unable to run your faster, hotter new chips.
This leads to Ozkural’s projection that increasing (GPU) energy efficiency is the real limit on any widespread, economical use of AI, and that, given past rates of improvement, we’ll have the hardware capability to run cost-effective neuromorphic AI by 2026, after which the wait is just for the software...
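For a feel of the kind of extrapolation behind a projection like that, here is a minimal sketch; the starting efficiency, doubling period, and "cost-effective" target are all illustrative placeholders, not Ozkural's actual figures.

```python
import math

# Sketch of a simple efficiency extrapolation under an assumed doubling trend.
# Every number below is an illustrative placeholder, not a sourced figure.

start_year = 2016
start_efficiency = 20e9        # ops/sec per watt at start_year (assumed)
doubling_period_years = 2.0    # assumed doubling time for energy efficiency
target_efficiency = 1e12       # ops/sec per watt deemed "cost-effective" (assumed)

# Number of doublings needed to reach the target, then convert to years.
doublings_needed = math.log2(target_efficiency / start_efficiency)
years_needed = doublings_needed * doubling_period_years

print(f"Doublings needed:      {doublings_needed:.1f}")
print(f"Target reached around: {start_year + years_needed:.0f}")
```

The point of the sketch is only that a steady doubling trend turns a large efficiency gap into a short calendar wait; whether the hardware actually lands by 2026 depends entirely on whether the assumed trend holds.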