Do you regret not investing in Anthropic? I don’t know how much the investment was for, but it seems like you could do a lot of good with 350x that amount. Is there a return level at which you would have been willing to invest (assuming that return was inevitable; you would not be causing the company’s value to increase by 1000x)?
Jacobson
OpenClaw Newsletter
Just thinking out loud: if AI (or something else) causes the S&P 500 to go up by 50%…
By January 21, 2028, you could achieve a 1324% ROI (https://finance.yahoo.com/quote/SPY280121C01000000/)
By June 16, 2028, you could achieve a 659% ROI (https://finance.yahoo.com/quote/SPY280616C01000000/)
By December 15, 2028, you could achieve a 315% ROI (https://finance.yahoo.com/quote/SPY281215C01000000/)
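These figures follow from the standard call-option payoff arithmetic. A minimal sketch: the strike of 1000 is taken from the option symbols above, but the current SPY price and the premium below are hypothetical placeholders, not real quotes.

```python
# Toy ROI calculation for buying a call option and holding to expiry.
# Strike 1000 comes from the option symbols above; the spot price and
# premium are made-up illustrative numbers, not live market data.
def call_roi(spot_at_expiry: float, strike: float, premium: float) -> float:
    """Return on investment of one call held to expiry."""
    payoff = max(spot_at_expiry - strike, 0.0)
    return (payoff - premium) / premium

spot_now = 680.0                     # hypothetical SPY price today
spot_then = spot_now * 1.5           # the 50% rise scenario -> 1020
premium = 1.40                       # hypothetical premium per share
print(f"ROI: {call_roi(spot_then, 1000.0, premium):.0%}")
```

Note the leverage cuts both ways: if SPY finishes below the strike, the option expires worthless and the ROI is -100%.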
Curious to hear others’ thoughts, especially from those with short AGI timelines. *Of course, this is not financial advice.
On scary Moltbook posts—
The main point that seems relevant here is that it is not possible to determine whether posts are from an agent or a human. A human could easily send messages pretending to be an agent via the API, or tell their agent to send certain messages. This leaves me skeptical. Furthermore, OpenClaw agents have configured personalities; one can easily tell their agent to be anti-human and make anti-human posts (which leaves a lot more to think about beyond the Moltbook forum).
I think it is more likely that default techniques are sufficient than that the default market or government response is sufficient. Markets don’t incentivize non-harmful products; regulation does, and regulation can be slow. If you believe in a rapid intelligence explosion, it seems there is a high chance markets will not be sufficiently regulated in time. On the other hand, our morals are mostly evolved, so you can imagine that an AI that understands the world the same way we do shares our morals.
It’s hard to imagine an economic/political system that doesn’t eventually lead to an intelligence explosion. Maybe specific rules are easier to imagine: for example, in a country where building any form of machine intelligence was forbidden (I am not advocating for this), there wouldn’t be an intelligence explosion. Such countries could have similar levels of growth all the way up to the advent of machine intelligence. It’s important to remember, though, that such countries could be caught in prisoner’s dilemmas over power, which would give them a strong incentive to abandon such rules.
I am saying that if there is no true randomness, then the universe could be seen as a predictable program. If we are in a simulation, seemingly random events are the output of a pseudorandom number generator. If there is true randomness, a superintelligent machine can’t perfectly predict the future, nor test the limits of the universe to determine whether it is simulated.
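The pseudorandomness point can be illustrated with a minimal sketch: a seeded generator’s “random” stream is fully determined by its seed, so any observer who knows the seed and the algorithm can reproduce every draw exactly. True randomness would have no such recoverable seed.

```python
# Two generators with the same seed produce identical "random" streams.
# This is the sense in which pseudorandom events are predictable in principle.
import random

rng_simulation = random.Random(42)  # the "simulation's" generator
rng_predictor = random.Random(42)   # a predictor that learned the seed

draws_sim = [rng_simulation.random() for _ in range(5)]
draws_pred = [rng_predictor.random() for _ in range(5)]
assert draws_sim == draws_pred      # identical: seemingly random, fully determined
```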
Perhaps 100% understanding of the universe is not needed to escape. This is an intriguing point. It does seem to me most possibilities for escape require detection.
I should edit my question. What I primarily intend to ask is this: could a superintelligent machine with a near-complete understanding of the universe (perhaps using some approximations) determine whether we are in a simulation? ← assuming there is no such thing as true randomness
If we could, could we escape? If we could escape, is that still possible if there is such a thing as true randomness?
From what I’ve read, he believed the U.S. should nuke them before they developed a nuclear bomb. He was extremely worried about nuclear catastrophe once multiple powers had nukes. He also advocated for bombing Kyoto instead of Hiroshima and Nagasaki, reasoning that the disastrous result would deter countries from using nuclear weapons, or even developing them.
Let’s assume there is no such thing as true randomness. If this is true, and we create a superintelligent system that knows the location and properties of every particle in the universe, could we determine whether we are in a simulation? (EDIT: to avoid running afoul of the impossibility of storing a complete description of the universe within the universe, as @Karl Krueger pointed out, assume this includes approximations and is not exact.) If we could, could we escape? If we could escape, is that still possible if there is such a thing as true randomness?
I am especially interested in answers to the final question.
I appreciate this response.
“For a precedent, I point to the idea of cryonic suspension of the dead, in the hope that they may be thawed, healed, and resurrected by future medical technology. This idea has been around for at least 60 years. One pioneer, Robert Ettinger, wrote a book in the 1960s called The Prospect of Immortality, in which he mused about the coming “freezer era” and the social impact of the cryonic idea.
What has been the actual social impact? Nothing. Cryonics exists mainly as a science fiction motif, and is taken seriously only by a few thousand people worldwide.
If we assume a similar scenario for collective interest in superintelligence, and that the AI industry manages nonetheless to produce superintelligence, then this means that up until the last moment, the headlines, and people’s heads, are just full of other things: wars and rumors of war, flying cars, AI companions, youth trends, scientific fads, celebrity deaths, miracles and scandals and conspiracy theories, and then BOOM it exists. The “AGI-pilled” minority saw it coming, but, they never became a majority.”
I think it is unfair to compare the public’s reaction to cryonics to their potential reaction to superintelligence. Cryonics has little to no impact on the average person’s life; it is more something that can be sought out by those interested. On the other hand, on the road to superintelligence, human life will change in unignorable ways. While cryonics does not affect those who do not seek it out, superintelligence will affect all humans in mainstream society whether or not they seek it out.
Furthermore, I think the trend of increasing AI intelligence is already being noticed by society at large, though people perhaps don’t realize the full extent of the potential, or the rate of growth.
“So in the present, you have this technophile subculture centered on the American West Coast where AI is actually being made, and it contains both celebratory (e/acc) and cautionary (effective altruism, AI safety) tendencies.”
I think this is a very interesting point. It seems likely to me that the public will have many distinct perspectives, but that these will primarily be rooted in either an e/acc or an EA/AI-safety perspective. Interestingly, this means the leaders of those groups will gain a lot more power as the public begins to understand superintelligence. It seems likely politicians’ stances here will become a key part of their platforms.
“I can’t see an AI religion becoming dominant before superintelligence arrives”
This makes a lot of sense to me, at least in the Western world. People here care very deeply about individuality, which seems incompatible with an AI religion. If someone found a way to combine these ideas, however, they would be very successful.
“But I think there’s definitely an opening there, for demagogues of the left or the right to step in—though the masses may easily be diverted by other demagogues who will say, not that AI should be stopped, but that the people should demand their share of the wealth in the form of UBI.”
I strongly agree with this point.
Jacobson’s Shortform
Original:
Questions: When are people en masse going to realize the potential consequences (both good and bad) of superintelligence and the technological singularity? How will this happen? What will society’s reaction be? What will be the ramifications of this reaction?
I’m asking this here because I can’t find articles addressing these questions and because I want a diverse array of perspectives.
To ask a question, click on your Username (top right, you must have an account), and click Ask Question [Beta].
It appears to me as though this is no longer a feature.
This is an interesting idea, but why do you think indexes as a whole will appreciate if AI brings greater productivity? It seems to me that much of the productivity gains will accrue to a few companies, which will certainly grow considerably. In a world of AGI there won’t be a need for many companies, and monopolies will be heavily favored. While some companies may see 10x or even 100x gains, if many companies go bankrupt the actual change in the price of an index could be low, or even negative.
On the other hand, if your idea catches on and people with automatable jobs, or other parties, do this en masse as a hedge against unemployment, your options will certainly appreciate dramatically.
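The composition point above can be made concrete with a toy equal-weight calculation (all numbers are made up for illustration): even if one firm 10x’s, the index can end up flat when the rest go to zero.

```python
# Toy equal-weight index of 10 hypothetical companies: one grows 10x,
# nine go bankrupt. Each entry in `growth` is the price multiple at the end.
weights = [0.10] * 10                 # equal-weight index
growth = [10.0] + [0.0] * 9           # one 10x winner, nine bankruptcies

# Final index value as a multiple of its starting value.
index_multiple = sum(w * g for w, g in zip(weights, growth))
print(index_multiple)                 # 1.0 -> the index is flat overall
```

So the winner’s 10x is exactly cancelled by the losses elsewhere; the index-level return depends entirely on how concentrated the gains are relative to the failures.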
The baseliners are described in the paper as
(pg. 7)