Why did you use Chinese models? You could use models with larger context windows and feed in more data about Borges, even use Claude Code itself (instructing it never to look at the original text of the story). While the result is unlikely to be verbatim, a close coincidence would mean that you have a good LLM model of Borges (a sideload).
If you fine-tuned an open LLM on the story, how do you prevent pure memorization, and how can you distinguish memorized output from original writing?
avturchin
My point was that prices are dictated by social structure, not by the potential abundance of goods. For example, software and movies can be copied without limit, so we have informational abundance, but we still have to pay for them, or become pirates and face potential legal risks.
Will Taliban members in Afghanistan enjoy post-scarcity abundance? No.
Will prison inmates get it? No.
Russians? Drug-addicts? Trump-supporters? People who said Y word 10 years ago?
You get the idea: many groups of people will not have legal access to the abundance. If we list all such groups, most people likely will not get it. And those who will get it are already rich.
Yes, P(doom) estimates are meaningless until we have some idea of how they can be changed. If P(doom) were an absolutely fixed probability, we could just ignore it.
If we have a timing estimate, small changes in it are meaningless, but order-of-magnitude changes have implications for how I spend my remaining life.
This can be useful if we are looking for genetic material of a deceased person whose mother or daughter is still alive.
One way to convert these probability estimates into something actionable is to convert them into time estimates: how much time we have to find a solution for AI safety. It depends on the shape of the probability curve and on our lowest acceptable risk estimate.
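A minimal sketch of that conversion, under my own simplifying assumption of a constant annual hazard rate (the comment's point stands that the real curve's shape matters; the example numbers are hypothetical):

```python
import math

def years_until_risk(annual_hazard: float, acceptable_risk: float) -> float:
    """Years until cumulative risk 1 - (1 - h)**t exceeds the acceptable level.

    Solves 1 - (1 - h)**t = r for t, i.e. the remaining 'time budget'
    before total accumulated risk passes our lowest acceptable threshold.
    """
    return math.log(1 - acceptable_risk) / math.log(1 - annual_hazard)

# E.g. a hypothetical 2%/year hazard and a 10% lowest acceptable total risk
# leave a budget of roughly five years.
budget = years_until_risk(annual_hazard=0.02, acceptable_risk=0.10)
```

A non-constant hazard curve (e.g. rising with capabilities) would shorten or lengthen this budget, which is exactly why the curve's shape matters.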
There is an observation that 10,000 rule violations result in roughly 100 near-miss accidents and 1 death (not exact numbers, just my approximate memory, and the ratio varies between situations). A person can count the number of near-misses he has survived and estimate whether he is affected by survivorship bias.
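A toy sketch of that back-of-envelope calculation, using the comment's rough ratio of 1 death per 100 near-misses (the real ratio varies by domain):

```python
def implied_death_probability(near_misses: int,
                              deaths_per_near_miss: float = 0.01) -> float:
    """Chance that surviving this many near-misses would have failed at least once.

    Each near-miss is treated as an independent trial with the given
    death probability; survivorship bias hides the branches where it didn't
    work out.
    """
    return 1 - (1 - deaths_per_near_miss) ** near_misses

# Someone who counts 50 near-misses had roughly a 40% chance of not
# surviving them all, so "it always worked out for me" is a biased sample.
p = implied_death_probability(50)
```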
The Moon’s polar craters are extremely cold and stable. We will be there soon.
I think you missed the most important candidate: Venus. Venus may have had Earth-like conditions until about a billion years ago, when its surface was completely replaced during some global magma eruption. Because of interplanetary panspermia, oceans on Venus would have exchanged biological material with Earth. Some life may still remain in its clouds. None of this is my own idea; I have read an article about it.
Venus had less water and thus larger dry surfaces, which could accelerate the biological evolution of animals (more ideas generated in any period of time, and stronger competition), so it could have gotten intelligent life long before Earth. Higher temperatures would have prevented slowing events like Snowball Earth.
Remnants of a Venusian civilization could still exist somewhere in the Solar System, like the remnants of landers on the Moon. Some self-replicating robots could persist for a long time and be observed as UFOs.
Everything said above is extremely speculative, but still more probable than aliens from Europa’s oceans.
I used Claude Code for my genome analysis and the results were great. It can also provide interesting answers even to silly questions like: what is my genetic ability to lucid dream? What is my IQ?
But if I do not survive until aligned AI and my cryopreservation fails, then digital immortality remains an option. Moreover, we can do some form of it even now, as sideloading. Sideloading doesn’t require AGI, as a lot of the work is done manually.
One of the most thoroughly documented people is Leo Tolstoy, and most LLMs are trained on all his writings. We can invoke him as a sideload using a relatively short prompt. https://chatgpt.com/g/g-678cee351af881918ef4f3219b4888b5-leo-tolstoy
Here is the paradox: people do not generalize the badness of death. Everyone agrees that “my death in the near term is bad,” but they do not generalize to “death is bad” in the sense that fighting death should be a goal of society.
I put my 23andMe raw data into Claude Code, together with recent test results, and got an amazing medical advisor.
For example, I ask it questions like:
- If I take this new drug, what kind of side effects should I expect given my genetics?
- In the morning I have a certain type of unpleasant sensation: what is it, and what should I do about it?
I suggest labeling April Fool’s pranks in the text itself, as an AI can use them as a training source and get the date wrong.
Something like Safe Superintelligence seems better. Or, better to say, a Safe Superintelligent Singleton, because if we get two competing Safe Superintelligences, it can still end in war. BTW, I don’t buy the advance claims that they will safely value-handshake; it is too unpredictable.
I think that the turn from “creating friendly superintelligence” to the term “alignment” was a mistake that opened a slippery slope toward small and local solutions. “Alignment” completely misses the need to create a global friendly Singleton. And now we need to write long texts explaining that when we say “alignment” we don’t mean “alignment of some AI to some human’s goals” but preventing the creation of a deadly superintelligence.
Yes, there are people with eidetic memory, so we can hope that there is a part of the brain that records everything constantly; maybe I just don’t have access to it.
Kim Peek memorized 12,000 books, but that is only about 12 GB of data: insane for a person but trivial for a computer. He also didn’t train himself; he was a savant.
The typical size of a human’s consciously accessible memory was estimated by Landauer at 1-2 GB.
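The arithmetic behind the 12 GB figure can be checked with a quick sketch (the per-book size of about 1 MB of plain text is my own assumption):

```python
# Rough check: ~12,000 memorized books at ~1 MB of plain text each.
# 1 MB/book assumes a few hundred pages of a few KB of text per page.
AVG_BOOK_MB = 1.0   # assumption, not a measured value
BOOKS = 12_000

total_gb = BOOKS * AVG_BOOK_MB / 1024
# total_gb comes out just under 12 GB: enormous for a human memory,
# trivial next to even a phone's storage.
```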
I would add that the brain’s memory capacity is overestimated, and my digital exoself already holds more data than I remember.
Yes, but it has the strongest predictive power.
I made similar experiments with Nabokov’s story “Spring in Fialta.” I put in the first sentence and asked the model to continue in Nabokov’s style.