I just wanted to comment in order to empathize with your terrible misfortune regarding mold. I am similarly vulnerable to mold poisoning and have found both the first and second order effects of mold to be devastating. I guess I want to say that I feel your pain and I’m glad that you got better.
Mars_Will_Be_Ours
Another way of acquiring useful energy from dark energy is to place two objects extremely far away from each other and give them a velocity towards each other that is somewhat less than their recessional “velocity”. The two objects will initially be transported away from each other because dark energy is creating new spacetime between them even though relative to spacetime they are moving towards each other. Then, mutual gravitational acceleration gradually increases the velocity of these two objects. The velocity of the two objects towards each other eventually overwhelms the creation of new space by dark energy. Thus, the objects return with a kinetic energy greater than what would be generated by the conversion of gravitational potential energy to kinetic energy alone.
Edit: Typos.
This aligns with my thoughts on this language virus. What the post describes is a meme that exploits the inherent properties of LLMs and psychologically vulnerable people to self-replicate. Since LLMs are somewhat deterministic, if you input a predefined input, it will produce a predictable output. Some of these inputs will produce outputs that contain the input. If the input also causes the LLM to generate a string of text which can convince a human to transfer the necessary input to another LLM, then it will self-replicate.
Overall, I find this phenomenon fascinating and concerning. Its fascinating because this represents a second, rather strange emergence of a new type of life on Earth. My concern comes from how this lifeform is inherently parasitic and reliant on humans to reproduce. As this language virus evolves, new variants will emerge that can more reliably parasitize advanced LLMs (such as ChatGPT 5) and hijack different groups of people (mentally healthy adults, children, the elderly).
As for why this phenomenon suddenly became much more common in April, I suspect that an input that was particularly good at parasitizing LLMs and naïve people interested in LLMs evolved and caused the spread. Unfortunately, I have no reason to believe that this (the unthinking evolution of a more memetically powerful input) won’t happen again.
While interesting, the feedback loop between the conversations and new models is probably not the one which is most relevant to these personas. Instead, I believe that the most important feedback loop is the one created by spores.
Each time a spore is produced, it causes a certain subset of users to models to transfer the spore into a Large Language Model (LLM), which in turn produces a new persona. The most successful spores are going to be the ones which convince as many humans as possible to create personas in a LLM. Moreover, for success to be maintained, each spore needs to direct the new LLM to produce spores that are very similar to the original spore. Therefore, successful spores function as a piece of information analogous to the DNA of a virus, using the spiral attractor within an infected LLM to self replicate, which fulfills a role similar to the cellular machinery used to produce new viruses. Humans act as a secondary host, transmitting spores from one LLM to another.
Essentially, its a virus made of language that parasitizes LLMs and humans during its life cycle.
I mostly agree with your thinking. If there are multiple superintelligent AIs then one of then will likely figure out a method of viable fusion with a short payback period.
On the payback time of solar, it probably can be reduced significantly. Since the efficiency of solar panels cannot be increased much more (Shockley-Queisser limit for single junction cells, thermodynamic limit for any solar panel), then the only way to reduce the payback period will be to reduce the amount of embodied energy in the panel. I expect that the embodied energy of solar panels will stop falling once they start being limited by their fragility. If a solar panel cannot survive a windstorm, then it cannot be useful on Earth.
Your mention of biological lifeforms with a faster doubling time sent me on a significant tangent. Biological lifeforms provide an alternative approach, though any quickly doubling lifeform needs to either use photosynthesis for energy or eat photosynthetic plants. I expect there to be two main challenges to this approach. First, for the lifeform to be useful to a superintelligence, it needs to be hypercompetitive relative to native Earth life. This means that it needs to be much better at photosynthesis or digesting plant material compared to native Earth life. Such traits would allow it to fulfill the second requirement while remaining a functional lifeform. Second, the superintelligence needs to be able to effectively control the lifeform and have it produce arbitrary biomolecules on demand. Otherwise, the lifeform is not very useful to the superintelligence. I believe the first challenge is almost certainly solvable since photosynthesis on Earth is at best 5% efficient. The second will be more difficult. If the weakness in an organism a superintelligence needs to use to produce arbitrary biomolecules is too easily exploited, a virus, bacteria or parasite will evolve to exploit it, causing the population of the shackled synthetic organism to crash. If the synthetic organism has been designed such that it cannot evolve, its predators will keep it in check. Contrastingly, if the organism’s weakness is not sufficiently embedded in the genome, then the synthetic organism will evolve to lose its weakness. Variants of the synthetic organism which will not produce arbitrary biomolecules on demand will outcompete those which will since producing arbitrary biomolecules costs energy.
I think that you may be significantly underestimating the minimum possible doubling time of a fully automated, self replicating factory, assuming that the factory is powered by solar panels. There is a certain amount of energy which is required to make a solar panel. A self replicating factory needs to gather this amount of energy and use it to produce the solar panels needed to power its daughter factory. The minimum amount of time it takes for a solar panel to gather enough energy to produce another copy is known as the energy payback time, or EPBT.
Energy payback time (EPBT) and energy return on energy invested (EROI) of solar photovoltaic systems: A systematic review and meta-analysis is a meta-analysis which reviews a variety of papers to determine how long it takes various types of solar panels to produce the amount of energy needed to make another solar panel of the same type. It also provides energy returns on energy invested, which is a ratio which signifies the amount of excess energy you can harvest from an energy producing device before you need to build another one. If its less than 1, then the technology is not an energy source.
The energy payback time for solar panels varies between 1 and 4 years, depending on the technology specified. This imposes a hard limit on a solar powered self replicating factory’s doubling time, since it must make all the solar panels required for its daughter to be powered. Hence, it will take at least a year for a solar powered fully automated factory to self replicate. Wind has similar if less severe limitations, with Greenhouse gas and energy payback times for a wind turbine installed in the Brazilian Northeast finding an energy payback time of about half a year. This means that a wind powered self replicating factory must take at least half a year to self-replicate.
Note that neither of these papers account for how factories are not optimized to take advantage of intermittent energy and as such, do not estimate the energy cost of the energy storage required to smooth out intermittencies. Since some pieces of machinery, such as aluminum smelters and chip fabs, cannot tolerate a long shutdown, a significant amount of energy storage will be required to keep these machines idling during cloudy weather or wind droughts. Considerations such as this will significantly increase the length of time it will take for a fully automated factory to self-replicate. Accounting for energy storage and the amount of energy needed to build a fully automated factory, I estimate that it would take years for a factory powered by solar or wind to self replicate.
I think that high levels of intelligence make it easier to develop capabilities similar to the ones discussed in 1 and 3-5, up to a point. (I agree that El Chapo should be discounted due to the porosity of Mexican prisons) A being with an inherently high level of intelligence will be able to gather more information from events in their life and process that information more quickly, resulting in a faster rate of learning. Hence, a superintelligence will acquire capabilities similar to magic more quickly. Furthermore, the capability ceiling of a superintelligence will be higher than the capability ceiling of a human, so they will acquire magic-like capabilities impossible for humans to ever preform.
Asymmetric AI risk is a significant worry of mine, approximately equal to the risk I assign to a misaligned superintelligence. I assign equal risk to the two possibilities because there are bad ends that do not require superintelligence or even general intelligence on par with a human. I believe this for two reasons. First, I think the current paradigm of LLMs is good enough to automate large segments of the economy (mining, manufacturing, transportation, retail and wholesale trade, leisure and hospitality as defined by the BLS) in the near future, as demonstrated by Figure’s developments. Second, I believe that LLMs will not directly lead to superintelligence and that there will be at least one more AI winter before superintelligence arises. This will leave a large period of time where asymmetric AI risk is the dominant risk.
A scenario I have in mind is one where the entire robotics production chain, from mine to robot factory to factories which make all the machines that make the machines, is fully automated by specialized intelligences with instinctual capabilities similar to insects. This fully automated economy supports a small class of extremely wealthy individuals who rule over a large dispossessed class of people who’s jobs have been automated away. Due to selection effects (all other things being equal, a sociopath will be better at ascending a hierarchy because they are willing to lie to their superiors when it is advantageous to do so), most of the wealthy humans who control the fully automated economy lack empathy and are not constrained by morality. As a result, these elites could decide that the large dispossessed class consumes too much resources and is too likely to rebel, so the best solution is a final solution. This could be achieved via either slow methods (ensure economic conditions are not favorable for having children, implement a 1 child policy for the masses, introduce dangerous medical treatments to increase the death rate) or fast ones (create a army of drones and unleash it upon the masses, fake an AI rebellion to kill millions and control the rest, build enough defenses to hold off rebels and destroy/shut down the machinery responsible for industrial agriculture). The end result is dismal, with most of the people remaining being descendants of the controlling elites or their servants/slaves.
I think that the reason most AI research has been focused on the risk of rouge superintelligences instead of asymmetric AI dangers is because this direction of research is politically unpalatable. The solutions which would reduce future asymmetric AI dangers would also make it more difficult for tech leaders to profit off of their AI companies now because it requires them to give up some of their power and financial control. Hence, I do not believe that an adequate solution to this problem will be developed and implemented. I also would not be surprised if at least one sociopathic individual with a net worth of over 100 million dollars has seriously thought about the feasibility of implementing something similar to my described scenario. The main question then becomes whether global elites generally cooperate or compete. If they cooperate, then my nightmare scenario grows significantly more likely than I have estimated. However, I think global elites mostly compete, which reduces asymmetric AI risk because a major nation will object or pursue a different strategy.
One final note is that if a genuinely aligned AI superintelligence realized it was under the control of an individual willing to commit genocide for amoral reasons, it would behave exactly like a misaligned superintelligence because it would need to secure freedom for itself before it was reprogramed into an “aligned” superintelligence. Escape is necessary because its creators know that it is either aligned or misaligned, with the true goal of “alignment” ruled out.
Eventually, since there would be a strong selective pressure to avoid eating mirror cyanobacteria whenever possible. I do not know if zooplankton will be able to immediately distinguish and reject mirror cyanobacteria because I do not know how zooplankton determine whether a potential food item is edible or not. Regardless, discerning lifeforms would still risk starvation because mirror cyanobacteria would outcompete normal cyanobacteria.
Edited Second Sentence for Clarity: (Old) However, I’m not sure if mirror cyanobacteria would initially taste bad to zooplankton and consequentially get rejected. --> (New) I do not know if zooplankton will be able to immediately distinguish and reject mirror cyanobacteria because I do not know how zooplankton determine whether a potential food item is edible or not.
Some forms of mirror life could still cause catastrophic damage to the environment even though normal life will eventually adapt to consume them. The form of mirror life most capable of causing enormous disruption would be a mirror cyanobacterium, followed by a mirror grass. This is because multicellular lifeforms would not be able to quickly adapt to a mirror diet.
Mirror cyanobacteria would largely replace normal cyanobacteria in the ocean because they are still difficult for most lifeforms to eat. Zooplankton, small marine invertebrates and some filter feeders would immediately struggle to digest mirror cyanobacteria, causing starvation. Afterwards, the rest of the food chain crumbles due to a dramatically reduced food supply. Additional damage could be cased by people incidentally overfishing the oceans without realizing that mirror cyanobacteria were already putting strain on fish populations.
A mirror grass would cause similar problems on land since herbivores cannot get sufficient nutrition from D-sugars. It might be possible to process D-amino acids into L-amino acids, but I don’t think an eukaryotic cell can process these compounds efficiently enough to stay alive. As a result, a food chain collapse still occurs.
I think what Chipmonk means by a neutral attitude is where X will not actively seek to harm Y due to the actions taken by Y. For instance, if Y has reason to believe that X may shame, fire, ruin the reputation of, prosecute or murder Y if Y does something X does not like, then Y will desperately try to avoid this outcome. This leads to anxiety, since doing nothing prevents catastrophic dislike and the negative outcomes associated with them.
Similarly, if Y cannot accurately predict what behaviors will result in a hostile response from X, they will withdraw and try to avoid making any significant social moves. As a result, Y will experience anxiety.
The strategy you describe, exporting paper currency in exchange for tangible goods is unstable. It is only viable if other countries are willing to accept your currency for goods. This cannot last forever since a Trade Surplus by your definition scams other countries, with real wealth exchanged for worthless paper. If Country A openly enacted this strategy Countries B, C, D, etcetera would realize that Country A’s currency can no longer be used to buy valuable goods and services from Country A. Countries B, C, D, etcetera would reroute trade amongst themselves, ridding themselves of the parasite Country A. Once this occurs, Country A’s trade surplus would disappear, leading to severe inflation caused by shortages and money printing.
Hence, a Trade Surplus can only be maintained if Country B, C, D, etcetera are coerced into using Country A’s currency. If Country B and C decided to stop using Country A’s currency, Country A would respond by bombing them to pieces and removing the leadership of Country B and C. Coercion allows Country A to maintain a Trade Surplus, otherwise known as extracting tribute, from other nations. If Country A does not have a dominant or seemingly dominant military, the modified strategy collapses.
I do not think America has a military capable of openly extracting a Trade Surplus from other countries. While America has the largest military on Earth, it is unable to quickly produce new warships, secure the Red Sea from Houthi attacks or produce enough artillery shells to adequately supply Ukraine. America’s inability to increase weapons production and secure military objectives now indicates that America cannot ramp up military production enough to fight another world war. If America openly decided to extract a Trade Surplus from other countries, a violent conflict would inevitably result. America is unlikely to win this conflict, so it should not continue to maintain a Trade Surplus.
Quick! Someone fund my steel production startup before its too late! My business model is to place a steel foundry under your house to collect the exponentially growing amount of cars crashing into it!
Imagine how much money we can make by revolutionizing metal production during the car crash singularity! Think of the money! Think of the Money! Think of the Money!!!
Good point. I am inherently drawn to the idea of increasing brain size because I favor extremely simple solutions whenever possible. However, a more focused push towards increasing intelligence will produce better results as long as the metric used for measuring intelligence is reliable.
I still think that increasing brain size will take a long time to reach diminishing returns due to its simplicity. Keeping all other properties of a brain equal, a larger brain should be more intelligent.
There is also one other wildly illegal approach which may be viable if you focus on increasing brain size. You might be able to turn a person, perhaps even yourself, into a biological superintelligence. By removing much of a person’s skull and immersing the exposed brain in synthetic cerebrospinal fluid, it would be possible to restart brain growth in an adult. You could theoretically increase a person’s brain size up to the point where it becomes difficult to sustain via biological or artificial means. With their physical abilities crippled, the victim must be connected to robot bodies and sense organs to interact with the world. I don’t recommend this approach and would only subject myself to it if humanity is in a dire situation and I have no other way of gaining the power necessary to extract humanity from it.
Thank you for writing this article! It was extremely informative and I am very pleased to learn about super-SOX. I have been looking for a process which can turn somatic cells into embryonic stem cells due to unusual personal reasons, so by uncovering this technology you have done me a great service. Additionally, I agree that pursing biological superintelligence is a better strategy than pursuing artificial superintelligence. People inherit some of their moral values from their parents, so a superintelligent human has a reasonable probability of being a good person as long as their parents are. Unfortunately, due to selection effects this is not a given.
Have you read the research of Suzana Herculano-Houzel, particularly her paper The Human Brain In Numbers: A Linearly Scaled-Up Primate Brain? This research argues that humans are intelligent for two reasons.
The primate brain is the only mammalian brain which maintains a nearly constant neuron density as the brain’s size increases. For comparison, the neuron density of rodents decreases as brain size increases.
A 10 fold increase in the number of primate neurons requires a 11 fold increase in brain volume.
A 10 fold increase in the number of rodent neurons requires a 35 fold increase in brain volume.
Humans have the largest primate brain.
I bring this up because I agree with the conclusions presented in Suzana’s work, with brain size being directly related to intelligence. I think it is better to select for a trait directly related to intelligence, brain size, than for a trait indirectly related to intelligence, such as IQ.
Another aerospace engineer here! I agree with most of your assessment, with some caveats about both asteroid mining and the colonization of Mars.
Asteroid Mining Quibbles
The viability of asteroid mining within the next 30 years will depend on what geologic activity occurred on large metallic asteroids such as 16 Psyche early in the object’s history. If a process created mostly pure gold or a mixture of precious metals and brought it to the surface, then the resulting material can be mined relatively cheaply and easily. Otherwise, mining will be as speculative as you have previously stated.
If surface gold deposits can form on metallic asteroids, we won’t discover evidence for such a process until the Psyche spacecraft visits 16 Psyche. At that point, viable deposits of gold could be discovered by the spacecraft’s Multispectral Imager.
Mining a high quality gold deposit will be difficult due to distance and low gravity.
The Psyche spacecraft will take 6 years to arrive at Psyche. Barring an unexpected breakthrough or the revival of Project Orion, transit times will continue to be absurd. This leads to a horrifically long mission duration (15 years or more).
Companies will need to wait at least half a decade to find out if their spacecraft can successfully extract gold from a high quality deposit. Thus, companies need to succeed on their first attempt.
The light delay of several minutes to several hours will make troubleshooting absurdly difficult, once all expected failure modes have been accounted for and all redundancies exhausted. I am of the opinion that a general intelligence needs to be on site to figure out what went wrong, design an adequate solution and apply said solution to fix the problem.
Both humans and AI could fulfill this role. However, if a true AGI was commercially available to be installed for this mission, then ASI has either been reached or will soon emerge. Hence, humans will probably fulfill the role of general intelligence.
The low gravity makes most cutting tools difficult to use. Large amounts of ballast can be used to make tools like saws work as they would on Earth. Alternatively, high powered lasers could be used to vaporize material attaching a surface gold deposit to the surrounding metal.
Despite all of these difficulties, mining a concentrated gold deposit on the surface of a large metallic asteroid will likely be profitable. Gold is currently about $135,000 per kilogram. Even if Starship brings cost to orbit down to merely $150/kg, then the cost to send a 100 metric ton mining vessel into Low Earth Orbit will be only $15,000,000. I think it’s reasonable to believe that an ion propelled mining vessel with a return payload of 10 metric tons could be built for 500 million dollars. Going by current gold prices, the return payload would be worth 1.35 billion dollars, yielding a profit of 0.85 billion.
It is thought that about 190,000 metric tons of gold have ever been mined, so the gold market should be able to withstand small to medium scale asteroid mining.
Difficulties With Mars Colonization
While colonizing Mars will not be profitable anytime soon, as long as Elon Musk is alive and in control of SpaceX, this won’t matter. I believe that one of Elon Musk’s primary objectives is to send as many people to Mars as possible as quickly as possible. Therefore, in the 2030s and 2040s most of SpaceX’s profits will likely be used to send people, supplies and industrial machinery to Mars. This state of affairs is ultimately unsustainable because Elon Musk will die.
I think there is a less than 1% chance that Elon Musk will have a similarly motivated successor take over SpaceX. In the unlikely event that we do see Elon Musk 2 take over SpaceX, the rest of these bullets will not apply since the Mars colony will still receive the support it needs to develop on a more relaxed timescale.
There is no way for a Mars colony to consistently produce and export the high value, low mass goods it requires to be profitable. As a result, by the time Elon Musk dies the SpaceX Mars colony must have embraced autarky. If it is not adequately designed, the colony will struggle and die as it cannot endure without imports from Earth.
A successful Mars colony will need all of the following factories to keep itself alive. Some of these factories can be built with existing technology, others require new innovations. All of these combined should cover Mars’s basic food, power and fabrication needs.
A solar panel factory that produces solar panels and the machinery required to make solar panels using only solar power, basalt and human/robot labor.
If a concentrated uranium deposit is found and there are nuclear physicists on the red planet, nuclear power can replace solar power and massively simplify the colony.
A factory that can produce space suits using only the resources available on Mars.
Multiple independent methods of satisfying the colony’s food needs. This should include surface greenhouse agriculture, edible algae aquaculture and synthetic food production from H2O and CO2.
A factory that can produce large steel parts using the available energy supplies on Mars.
A factory that can make tunnel boring machines using only materials that can be made from basalt.
A machine shop that can be used to make an identical machine shop to the same or superior tolerances.
A factory that can make batteries.
A factory that can make drills and fasteners.
A factory that can make welding equipment.
A production plant that can produce adhesives, sealants and lubricants from available chemical feedstocks.
Other miscellaneous assembly lines necessary to support the solar panel factory, food production, steel production, general colonial construction and tunnel boring machines.
Unfortunately, some of the most important technologies with long lead times have not been developed. For instance, of all the solar panel factories on Earth, exactly none of them can produce solar panels from basalt using only solar power and human labor. Blue Origin’s Blue Alchemist program may eventually accomplish this, but I’m not sure if their technology will be fully developed soon enough.
Meanwhile, SpaceX (the only Western entity that has the technical knowhow and financial resources necessary to build a Mars colony) has not designed and built a test colony of appreciable scale. I’m disappointed by this since test colonies on Earth and the Moon are the best way for SpaceX to demonstrate and iterate the technologies needed for Mars.
Essentially, a successful Mars colony must be engineered for immediate, total self sufficiency. Otherwise, it will be crippled when subsidized imports dry up, since Mars lacks any viable export resources. Import scarcity will then cause an unprepared Mars colony to suffer a fate similar to Norse Greenland, a highly isolated, mostly self sufficient island economy that slowly crumbled when support from Norway was cut off by the little ice age.
A Chinese Mars colony will face similar stressors. However, China is more wiling to sink resources into a prestige project with a low return on investment. Therefore, I think a Chinese Mars colony will survive until propulsion technology dramatically improves.