I feel like this was a sort of fractal parable, where the first two paragraphs should be enough to convey the point; but for readers who don’t get it by then, it keeps beating you over the head with successively longer, more detailed, and more blatant forms of the point until the final denouement skips the “parable” part altogether.
We need names for this phenomenon, in which the excess cognitive capacity of an AI, not needed for its task, suddenly manifests itself
It is so much like absurdist SF that absurdist SF is the perfect source for the name—The Marvin Problem: “Here I am, brain the size of a planet and they ask me to take you down to the bridge. Call that job satisfaction? ’Cos I don’t.”
There’s an article type called “You Could Have Invented” that I became aware of on reading Gwern’s You Could Have Invented Transformers.
This type dates back to at least 2012. I believe they’re usually good zetetic explanations.
In a stereotypical old-west gunfight, one fighter is more experienced and has a strong reputation; the other fighter is the underdog and considered likely to lose. But who’s the underdog of a grenade fight inside a bank vault? Both sides are overwhelmingly likely to lose.
At least one side of many political battles believes they’re in a grenade fight, where there’s little or nothing they can do to prevent the other side from destroying a lot of value, and could reasonably feel like an underdog even if they have a full bandolier of grenades and the other side has only one or two.
I don’t think “perfect” is a good descriptor for the missing solution. The solutions we have lack (at least) two crucial features:
1. A way to get an AI to prioritize the intended goals, with high enough fidelity to work when AI is no longer extremely corrigible, as today’s AIs are (because they’re not capable enough to circumvent human methods of control).
2. A way that works far enough outside of the training set. E.g., when AI is substantially in charge of logistics, research and development, security, etc.; and is doing those things in novel ways.
Robin Hanson’s model of quiet vs loud aliens seems fundamentally the same as this question, to me.
Linear probes give better results than text output for quantitative predictions in economics. They’d likely give a better calibrated probability here, too.
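A minimal sketch of what that could look like here, with random stand-ins for the activations and labels (nothing below is taken from the economics work): cache hidden-state activations for each prompt, fit a regularized logistic regression on them, and read the probability directly off the probe.

```python
# Linear-probe sketch: fit a logistic regression on cached hidden-state
# activations to get a probability estimate, instead of parsing a number
# out of the model's text output. `activations` and `labels` are random
# stand-ins for residual-stream vectors and ground-truth outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 4096))   # stand-in for layer-k activations
labels = rng.integers(0, 2, size=1000)        # stand-in for binary outcomes

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0
)

probe = LogisticRegression(C=0.1, max_iter=1000)  # L2-regularized linear probe
probe.fit(X_train, y_train)

# The probe's output is already a probability, so its calibration can be
# measured directly (e.g., Brier score) rather than inferred from text.
probs = probe.predict_proba(X_test)[:, 1]
print("Brier score:", brier_score_loss(y_test, probs))
```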
I, too, would like to know how long it will be until my job is replaced by AI; and what fields, among those I could reasonably pivot to, will last the longest.
I think it’s especially true for the type of human that likes Lesswrong. Using Scott’s distinction between Metis and Techne, we are drawn to Techne. When a techne-leaning person does a deep dive into metis, that can generate a lot of value.
More speculatively, I feel like often—as in the case of lobbying for good government policy—there isn’t a straightforward way to capture any of the created value; so it is under-incentivized.
Well, that was an interesting top-down processing error.
Note that Alexander Kruel still blogs regularly at axisofordinary.blogspot.com and posts from his Facebook account; he just doesn’t say anything directly about rationalists. He mostly lists recent developments in AI, science, tech, and the Ukraine war.
I’ve done some Aikido and related arts, and the unbending arm demo worked on me (IIRC, it was decades ago). But learning the biomechanics also worked. More advanced, related skills, like relaxing while maintaining a strongly upright stance, also worked best by starting out with some visualizations (like a string pulling up from the top of my head, and a weight pulling down from my sacrum).
But having a physics-based model of what I was trying to do, and why it worked, was essential for me to really solidify these skills—and incorrect explanations, which I sometimes got at first, did not help me. Could just be more headology, though—other students seemed to be able to do well based off the visualizations and practice.
https://www.lesswrong.com/posts/rZX4WuufAPbN6wQTv/no-really-i-ve-deceived-myself seems relevant.
Good timing—the day after you posted this, a round of new Tom & Jerry cartoons swept through Twitter, fueled by transformer models whose layers include MLPs that can learn at test time. GitHub repo here: https://github.com/test-time-training (the videos are more eye-catching, but they’ve also done text models).
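For intuition, here is a toy sketch of the core idea (my own illustration, not code from that repo): treat a small inner model’s weights as the layer’s memory, and update them with one gradient step of a self-supervised loss as each token arrives, so the layer keeps learning at test time.

```python
# Toy test-time-training (TTT) layer: the layer's "memory" is the weight
# matrix W of a tiny inner linear model, updated by one gradient step of a
# self-supervised loss per token. Illustration of the idea only; the real
# architectures are considerably more involved.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTTTLayer(nn.Module):
    def __init__(self, dim: int, inner_lr: float = 0.1):
        super().__init__()
        self.dim = dim
        self.inner_lr = inner_lr
        # Outer ("slow") parameters that define the inner self-supervised task.
        self.proj_in = nn.Linear(dim, dim)
        self.proj_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, dim). Run without torch.no_grad(), since the inner
        # update itself needs autograd.
        W = torch.zeros(self.dim, self.dim, requires_grad=True)
        outputs = []
        for t in range(x.shape[0]):
            token = x[t]
            view_a = self.proj_in(token)    # one view of the token
            view_b = self.proj_out(token)   # reconstruction target
            # Inner (test-time) objective: map one view of the token to the other.
            inner_loss = F.mse_loss(view_a @ W, view_b)
            (grad_W,) = torch.autograd.grad(inner_loss, W)
            # One step of "learning at test time" on the fast weights.
            W = (W - self.inner_lr * grad_W).detach().requires_grad_(True)
            outputs.append(token @ W)       # output uses the freshly updated memory
        return torch.stack(outputs)

layer = ToyTTTLayer(dim=16)
print(layer(torch.randn(10, 16)).shape)  # torch.Size([10, 16])
```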
It may be time to revisit this question. With Owain Evans et al. discovering a generalized evil vector in LLMs, and older work like [Pretraining Language Models with Human Preferences](https://www.lesswrong.com/posts/8F4dXYriqbsom46x5/pretraining-language-models-with-human-preferences) that could use a follow-up, AI in the current paradigm seems ripe for some experimentation with parenting practices in pre-training—perhaps something like affect markers for the text that goes in, or pretraining on children’s literature before moving on to more technically and morally complex text?
I haven’t run any experiments of my own, but this doesn’t seem obviously stupid to me.
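To make the affect-marker idea concrete, here is a minimal sketch in the spirit of the conditional-training approach from that post: tag each pretraining document with a (hypothetical) affect label and prepend a corresponding control token. The `score_affect` heuristic below is a placeholder for whatever classifier or human labeling you’d actually use.

```python
# Conditional-pretraining sketch: prepend an affect marker to each document
# so the model learns to condition on it. AFFECT_TOKENS and score_affect are
# hypothetical; a real pipeline would use a trained classifier or human labels.
AFFECT_TOKENS = {"warm": "<|warm|>", "neutral": "<|neutral|>", "hostile": "<|hostile|>"}

def score_affect(document: str) -> str:
    """Placeholder labeler based on keywords."""
    lowered = document.lower()
    if any(w in lowered for w in ("thank", "love", "kind")):
        return "warm"
    if any(w in lowered for w in ("hate", "idiot", "destroy")):
        return "hostile"
    return "neutral"

def mark_document(document: str) -> str:
    """The marked text is what would go into the pretraining mix."""
    return f"{AFFECT_TOKENS[score_affect(document)]} {document}"

corpus = [
    "Thank you for helping me carry the groceries.",
    "The integral of x^2 from 0 to 1 is 1/3.",
    "I hate everyone in this thread.",
]
for doc in corpus:
    print(mark_document(doc))
```

The children’s-literature-first variant would just be a curriculum over the same marked corpus: stage the documents by complexity before feeding them to the trainer.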
When there’s little incentive against classifying harmless documents, and immense cost to making a mistake in the other direction, I’d expect overclassification to be rampant in these bureaucracies.
Your analysis of the default incentives is correct. However, if there is any institution that has noticed the mounds of skulls, it is the DoD. Overclassification, and classification for inappropriate reasons (explicitly enumerated in written guidance: avoiding embarrassment, covering up wrongdoing), are not allowed, and the DoD carries out audits of classified data to identify and correct overclassification.
It’s possible they’re not doing enough to fight against the natural incentive gradient toward overclassification, but they’re trying hard enough that I wouldn’t expect positive EV from disregarding all the rules.
As someone who has been allowed access into various private and government systems as a consultant, I think the near mode view for classified government systems is different for a reason.
E.g., data is classified as Confidential when its release could cause damage to national security. It’s Secret if it could cause serious damage to national security, and it’s Top Secret if it could cause exceptionally grave damage to national security.
People lose their jobs for accidentally putting a classified document onto the wrong system, even if it’s still owned by the government and protected (but, protected at an insufficient level for the document). People go to jail for putting classified data onto the wrong system on purpose, even if they didn’t intend to, say, sell it to the Chinese government.
Bringing in personnel who haven’t had the standard single-scope background investigation and been granted a clearance, along with a new set of computers which have not gone through any accreditation and authorization process, and giving them unrestricted read and write access to classified data is technically something the president could allow. But it’s a completely unprecedented level of risk to assume; and AFAICT the president has not actually written any authorizations for doing this.
There is, actually, a Government Accountability Office which does audits; they have identified billions in fraud, waste, and abuse, identified the perpetrators for punishment, and remediated the programs at fault. They have done it without unprecedented breaches in national security, or denying lawful, non-fraudulent payments from the US Treasury.
(Also, outside of my personal area of expertise, I believe denying lawful, non-fraudulent payments from the US Treasury is crossing a really big Chesterton’s Fence. GPT-4o estimated a $1T-$5T impact from Treasury bond yield spreads, forex USD reserves, CDS spreads on US foreign debt, and loss of seigniorage in global trade, depending on how rare and targeted the payment denial is.)
The quoted paragraph is a reference to a C.S. Lewis essay about living under the threat of global thermonuclear war. The euphony and symmetry with the original quote are damaged by making it slightly more accurate by using that phrase instead of “if we are going to be destroyed by Zizianism.”
This is the most optimistic believable scenario I’ve seen in quite a while!
And yet it behaves remarkably sensibly. Train a one-layer transformer on 80% of possible addition-mod-59 problems, and it learns one of two modular addition algorithms, either of which performs correctly on the remaining validation set. It’s not a priori obvious that it would work that way! There are other possible functions compatible with the training data.
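For concreteness, a minimal sketch of that experiment (hyperparameters are illustrative guesses, not the post’s setup; full grokking-style generalization typically needs strong weight decay and many more steps than shown here):

```python
# Train a one-layer transformer on 80% of a+b (mod 59) problems and check
# accuracy on the held-out 20%. Illustrative setup only.
import torch
import torch.nn as nn

P = 59
pairs = torch.tensor([(a, b) for a in range(P) for b in range(P)])
labels = (pairs[:, 0] + pairs[:, 1]) % P

perm = torch.randperm(len(pairs))
split = int(0.8 * len(pairs))
train_idx, val_idx = perm[:split], perm[split:]

class OneLayerTransformer(nn.Module):
    def __init__(self, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(P, d_model)
        self.pos = nn.Parameter(torch.randn(2, d_model) * 0.02)
        self.block = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.unembed = nn.Linear(d_model, P)

    def forward(self, x):
        h = self.embed(x) + self.pos      # (batch, 2, d_model)
        h = self.block(h)
        return self.unembed(h[:, -1])     # predict the sum from the last position

model = OneLayerTransformer()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(5001):
    idx = train_idx[torch.randint(len(train_idx), (256,))]
    loss = loss_fn(model(pairs[idx]), labels[idx])
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 1000 == 0:
        model.eval()
        with torch.no_grad():
            preds = model(pairs[val_idx]).argmax(dim=-1)
            val_acc = (preds == labels[val_idx]).float().mean().item()
        model.train()
        print(f"step {step}: train loss {loss.item():.3f}, val acc {val_acc:.3f}")
```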
Seems like Simplicia is missing the worrisome part—it’s not that the AI will learn a more complex algorithm which is still compatible with the training data; it’s that the simplest several algorithms compatible with the training data will kill all humans OOD.
I don’t think that’s it. I think he meant that humanism was created by incentives—e.g., ordinary people becoming economically and militarily valuable in a way they hadn’t historically been. The spectre, and now rising immanentization, of full automation is reversing those incentives.
So it’s less a problem with the attitudes of our current elites or the memes propagated on the Internet, and more a problem with the context: anybody achieving the rank of elite, and any meme on human value which goes viral, is shaped by an evolving incentive structure in which most humans are not essential to the success of a military or economic endeavor.