You might want to look into taking Rapamycin to improve your immune system.
Agree, this is plausible. But we could see businesses predicting the future and not hiring for new jobs, because it takes a while to train new workers and there are costs to firing them.
I agree, that makes a lot of sense.
Past Automation Replaced Jobs. AI Will Replace Workers.
I asked ChatGPT to tell me how I use it to help write papers, and this is what it produced: You use the system as a constrained collaborator embedded at specific points in the writing process, not as an end-to-end author.
Idea generation and expansion
You start with a core claim and use the system to enumerate adjacent arguments, examples, and counterarguments. The system functions as a breadth generator. You then select and refine only the strongest lines of argument.
Structural design
You use it to test different outlines, section orders, and narrative arcs. You ask for alternative organizations, then choose the one that increases clarity and momentum. The system helps surface structural options you might not have considered.
Local rewriting, not full drafting
You provide existing text and request targeted rewrites of specific sentences, paragraphs, or transitions. This keeps control over the argument while outsourcing micro-level phrasing improvements.
Transition construction
You use it to bridge sections cleanly. You ask for exact sentences that connect two ideas without repetition or padding, then select or refine the best candidate.
Example and analogy development
You use it to generate concrete illustrations that carry explanatory weight. You iterate until the example does real work in the argument rather than serving as decoration.
Adversarial critique
You use it to stress-test claims. You ask for objections, weak points, and alternative interpretations, then revise the text to survive those critiques.
Comparative evaluation
You present multiple versions of a section and ask for direct comparison. This forces the system to identify differences in clarity, force, and logic, helping you converge on a stronger version.
Constraint enforcement
You impose explicit rules on style and structure, then require the system to revise until those rules are satisfied. When it fails, you reject the output and restate constraints. Over time, this trains the system within the session.
Iterative narrowing
You move from broad prompts to increasingly specific ones. Early stages explore possibilities; later stages demand precision. Each round reduces variance and increases control.
Final polishing under strict standards
You use the system only after the argument is already sound. At this stage, it functions as a high-level editor, tightening language and removing residual weaknesses. If it cannot meet the required standard, you discard the output.
The control mechanism is simple: you never accept output that violates your constraints. Selection, rejection, and repeated tightening replace reliance on any single response.
Yes, I did that. Then I got some more help from ChatGPT, made the LLM block first, and put the text into it. I got a message saying I had already submitted (published?) the article. My plan is to change the title and try again tomorrow.
I tried the LLM block route and it didn’t work because (I think) the system thought I had already submitted the article. I will change the title and try again tomorrow. The LLM block route is not easy for someone who isn’t a programmer and doesn’t know what it means, although ChatGPT helped me figure it out. I think AI+humans outperform humans alone at metacognitive skills, certainly for humans who have some brain damage (as I do).
That doesn’t seem to work. I put it in an LLM block, but now it says I already published the article, although I can’t find it on the website, so unless there is a delay, something has gone wrong.
I received an automatic rejection for an article. I have been writing on LessWrong since the beginning. I did use AI to help write the article, which is why the article was rejected, but I also spent considerable time working on it. I also used AI to help write an article I published in January, which now has 156 karma.
https://www.lesswrong.com/posts/kLvhBSwjWD9wjejWn/precedents-for-the-unprecedented-historical-analogies-for-1
This new article is about why AI is going to destroy most jobs, and as I wrote in it, “This essay was written with help from AI. If I could not use AI productively to improve it, that would undermine either my argument or my claim to expertise.”
Finally, I had a stroke two years ago and have come to rely on AI when writing. Please allow me to publish this article.
James Miller
Professor of Economics, Smith College
Agreed, the article would have been stronger if it included successful defenses.
I used ChatGPT, where I have the $200-a-month subscription, and Gemini, where I have the (I think) $20-a-month one. The errors of the two models are surprisingly uncorrelated, so it’s beneficial to get advice from both.
Interesting. I’m a PhD economist and I have heard that example many, many times.
Thanks for the compliment. Sorry, but I’m horrible at computer formatting, don’t know how to do this, and it would probably take me 10 times longer to figure it out than it would take a typical person.
Precedents for the Unprecedented: Historical Analogies for Thirteen Artificial Superintelligence Risks
I teach a course at Smith College called the economics of future technology, in which I go over reasons to be pessimistic about AI. Students don’t ask me how I stay sane, but why I don’t devote myself to just having fun. My best response is that for a guy my age with my level of wealth, giving in to hedonism means going to Thailand for sex and drugs, an outcome my students (who are mostly women) find “icky”.
I agree that the probability that any given message is received at the right time by a civilization that can both decode it and benefit from it is extremely low, but the upside is enormous and the cost of broadcasting is tiny, so a simple expected value calculation may still favor sending many such messages. If this is a simulation, the relevant probabilities may shift because the designers may care about game balance rather than our naive astrophysical prior beliefs. The persistent strangeness of the Fermi paradox should also make us cautious about assigning extremely small probabilities to any particular resolution. Anthropic reasoning should push us toward thinking that the situation humanity is in is more common than we might otherwise expect. Finally, if we are going to send any deliberate interstellar signal at all, then there is a strong argument that it should be the kind of warning this post proposes.
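For concreteness, here is a minimal sketch of that expected value comparison. Every number in it is an assumption I made up for illustration, not an estimate:

```python
# Back-of-the-envelope expected value of broadcasting warning messages.
# Every number below is an illustrative assumption, not an estimate.

p_useful = 1e-12        # chance any single message is decoded in time and helps
benefit = 1e20          # payoff (arbitrary units) if a message does help
cost_per_message = 1e3  # cost of one broadcast, same units
n_messages = 1_000_000  # number of messages sent

expected_gain = n_messages * p_useful * benefit  # 1e14
total_cost = n_messages * cost_per_message       # 1e9

print(f"expected gain: {expected_gain:.3g}, total cost: {total_cost:.3g}")
# A vanishingly small per-message probability can still leave the expected
# gain far above the total cost when the upside is large enough.
```

The point is only structural: with a huge enough upside and a tiny broadcasting cost, the calculation can come out positive even under extremely pessimistic probabilities.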
The message we send travels at the speed of light. If the AI has to send ships to conquer, those ships probably have to travel slower than the speed of light.
It could be a lot of time. The Andromeda galaxy is 2.5 million light-years from Earth. Say an AI takes over next year and sends a virus to a civilization in that galaxy, one that would successfully take over if humans didn’t first issue a warning. Because of the warning, the Earth paperclip maximizer has to send a ship to the Andromeda civilization to take over, and say the ship travels at 90% of the speed of light. That gives the Andromeda civilization about 280,000 years between when they get humanity’s warning message and when the paperclip maximizer’s ship arrives. During that time the Andromeda civilization will hopefully upgrade its defenses to be strong enough to resist the ship, and then thank humanity by avenging us if the paperclip maximizer has exterminated humanity.
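To spell out the arithmetic behind that figure, here is a quick check using only the numbers already in the comment:

```python
# Check on the ~280,000-year figure: a light-speed warning vs. a ship
# traveling at 90% of c (the speed assumed above) over the Andromeda distance.

distance_ly = 2_500_000            # light-years to Andromeda
ship_speed = 0.9                   # ship speed as a fraction of c

message_arrival = distance_ly              # years; light covers 1 ly per year
ship_arrival = distance_ly / ship_speed    # years for the slower ship

head_start = ship_arrival - message_arrival
print(f"head start: {head_start:,.0f} years")  # ~277,778, i.e. roughly 280,000
```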
It’s very possible that this will remain true after we get much more powerful AI, but it’s also possible that such AI will come up with new, very-high-marginal-cost goods that the super rich end up spending a lot of money on.