One measure of status is how far outside the field of accomplishment it extends. Using American public education as the standard, Leibniz is only known for calculus.
ryan_b
there is not any action that any living organism, much less humans, take without a specific goal
Ah, here is the crux for me. Consider these cases:
Compulsive behavior: it is relatively common for people to take actions without understanding why, and for people with OCD this even extends to actions that contradict their specific goals.
Rationalizing: virtually all people actively lie to themselves about what their goals are when they take an action, especially in response to prodding about the details of those goals after the fact.
Internal Family Systems and related therapies: the claim on which these treatments rest is that every person intrinsically has multiple conflicting goals of which they are generally unaware, and learning how to mediate them explicitly is supposed to help.
The hard problem of consciousness: similar to the above, one of the proposed explanations for consciousness is that it serves as a mechanism for mediating competing biological goals.
These are situations where either the goal is not known, or it is fictionalized, or it is contested (between goals that are also not known). Even in the case of everyday reactions, how would the specific goal be defined?
I can clearly see an argument along the lines of evolutionary forces providing us with an array of specific goals for almost every situation, even when we are not aware of them or they are hidden from us through things like self-deception. That may be true, but even given that it is true I come to the question of usefulness. Consider things like food:
I claim that most of the time, we eat because we eat. As a goal it is circular.
We might eat to relieve our stomach growling, or to be polite to our host, and these are specific goals, but these are the minority cases.
Or sex:
Also circular, the goal is usually sex qua sex.
Speaking for myself, even when I had a specific goal of having children (making explicit the evolutionary goal!), what was really happening under the hood is I was having sex qua sex and just very excited about the obvious consequences.
It doesn’t feel to me like thinking of these actions in terms of manipulation adds anything to them as a matter of description or analysis. Therefore when talking about social things I prefer to use the word manipulation for things that are strategic (by which I mean we have an explicit goal and we understand the relationship between our actions and that goal) and unaligned (which I mean in the same sense you described in your earlier comment, the other person or group would not have wanted the outcome).
Turning back to the post, I have a different lens for how to view How To Win Friends and Influence People. I suggest that these are habits of thought and action that work in favor of coordination with other people; I say it works the same way rationality works in favor of being persuaded by reality.
I should note that this is not true of material about persuasion/influence/etc. in general. A lot of material on the subject outright advocates manipulation even as I use the term. But I claim that Carnegie wrote a better sort of book, one that implies pursuing a kind of pro-sociality the same way we pursue rationality. I make an analogy: manipulators are to people who practice the skills in the book as Vulcan logicians are to us, here.
A sports analogy is Moneyball.
The counterfactual impact of a researcher is analogous to the insight that professional baseball players are largely interchangeable because they are all already selected from the extreme tail of baseball playing ability, which is to say the counterfactual impact of a given player added to the team is also low.
Of course in Moneyball they used this to get good-enough talent within budget, which is not the same as the researcher case. All of fantasy sports is exactly a giant counterfactual exercise; I wonder how far we could get with ‘fantasy labs’ or something.
I agree that processor clock speeds are not what we should measure when comparing the speed of human and AI thoughts. That being said, I have a proposal for the significance of the fact that the smallest operation for a CPU/GPU is much faster than the smallest operation for the brain.
The crux of my belief is that having faster fundamental operations means you can reach the same goal using a worse algorithm in the same amount of wall-clock time. That is to say, if the difference between the CPU and the neuron is ~10x, then the CPU can achieve human performance in the same wall-clock time using an algorithm with 10x as many steps as the one humans actually use.
If we view algorithms with more steps than the human ones as sub-human because they are less computationally efficient, and view a completion of the steps of an algorithm such that it generates an output as a thought, this implies that the AI can achieve superhuman performance using sub-human thoughts.
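The step-count arithmetic can be sketched directly. All the numbers here are assumptions for illustration (the ~1 ms neuron operation time, the 10x ratio, and the step counts are hypothetical, not measured values):

```python
# Toy sketch: a processor whose fundamental operation is 10x faster can run
# an algorithm with 10x as many serial steps in the same wall-clock time.
# Times are in integer nanoseconds to keep the comparison exact.

def wall_clock_ns(num_steps: int, op_time_ns: int) -> int:
    """Wall-clock time for an algorithm of num_steps serial steps."""
    return num_steps * op_time_ns

NEURON_OP_NS = 1_000_000   # assumed ~1 ms per fundamental neural operation
CPU_OP_NS = 100_000        # assumed 10x faster fundamental operation

human_steps = 1_000                 # hypothetical step count for the human algorithm
cpu_steps = 10 * human_steps        # a 10x less efficient ("sub-human") algorithm

t_human = wall_clock_ns(human_steps, NEURON_OP_NS)
t_cpu = wall_clock_ns(cpu_steps, CPU_OP_NS)

assert t_human == t_cpu  # same wall-clock time despite the worse algorithm
```

The point the sketch makes is only about serial step counts; it says nothing about whether such a 10x-longer algorithm exists for any given task.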
A mechanical analogy: instead of the steps in an algorithm consider the number of parts in a machine for travel. By this metric a bicycle is better than a motorcycle; yet I expect the motorcycle is going to be much faster even when it is built with really shitty parts. Alas, only the bicycle is human-powered.
It isn’t quoted in the above selection of text, but I think this quote from same chapter addresses your concern:
“I instantly saw something I admired no end. So while he was weighing my envelope, I remarked with enthusiasm: “I certainly wish I had your head of hair.” He looked up, half-startled, his face beaming with smiles. “Well, it isn’t as good as it used to be,” he said modestly. I assured him that although it might have lost some of its pristine glory, nevertheless it was still magnificent. He was immensely pleased. We carried on a pleasant little conversation and the last thing he said to me was: “Many people have admired my hair.” I’ll bet that person went out to lunch that day walking on air. I’ll bet he went home that night and told his wife about it. I’ll bet he looked in the mirror and said: “It is a beautiful head of hair.” I told this story once in public and a man asked me afterwards: “’What did you want to get out of him?” What was I trying to get out of him!!! What was I trying to get out of him!!! If we are so contemptibly selfish that we can’t radiate a little happiness and pass on a bit of honest appreciation without trying to get something out of the other person in return—if our souls are no bigger than sour crab apples, we shall meet with the failure we so richly deserve.”
Out of curiosity, what makes this chapter seem Dark-Artsy to you?
So the smarter one made rapid progress in novel (to them) environments, then revealed they were unaligned, and then the first round of well-established alignment strategies caused them to employ deceptive alignment strategies, you say.
Hmmmm.
I don’t see this distinction as mattering much: how many ASI paths are there which somehow never go through human-level AGI? On the flip side, every human-level AGI is an ASI risk.
I would perhaps urge Tyler Cowen to consider raising certain other theories of sudden leaps in status, then? To actually reason out what would be the consequences of such technological advancements, to ask what happens?
At a guess, people resist doing this because predictions about technology are already very difficult, and doing lots of them at once would be very very difficult.
But would it be possible to treat increasing AI capabilities as an increase in model or Knightian uncertainty? It feels like questions of the form “what happens to investment if all industries become uncertain at once? If uncertainty increases randomly across industries? If uncertainty increases according to some distribution across industries?” should be definitely answerable. My gut says the obvious answer is that investment shifts from the most uncertain industries into AI, but how much, how fast, and at what thresholds are all things we want to predict.
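A minimal toy sketch of that question, with entirely hypothetical numbers and an assumed uncertainty-penalized allocation rule (`expected_return / uncertainty`, normalized) standing in for whatever model a real analysis would use:

```python
# Toy model: investors allocate across industries in proportion to
# expected_return / uncertainty. If uncertainty rises everywhere except AI,
# the AI share of investment rises. All figures are hypothetical.

def allocate(industries: dict) -> dict:
    """industries: name -> (expected_return, uncertainty).
    Returns normalized allocation weights."""
    scores = {name: r / u for name, (r, u) in industries.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

baseline = {"ai": (0.10, 1.0), "manufacturing": (0.05, 1.0), "retail": (0.04, 1.0)}
# Uncertainty doubles across non-AI industries:
shocked = {"ai": (0.10, 1.0), "manufacturing": (0.05, 2.0), "retail": (0.04, 2.0)}

before = allocate(baseline)
after = allocate(shocked)

assert after["ai"] > before["ai"]  # investment shifts toward AI
```

The interesting outputs are exactly the ones named above: how much the AI share moves, and at what uncertainty thresholds the shift accelerates, which this sketch does not try to answer.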
I’m inclined to agree with your skepticism. Lately I attribute the low value of the information to the fact that the organization is the one that generates it in the first place. In practical terms, the performance of the project, campaign, etc. will still be driven by the internal incentives for doing the work, and it is entirely possible for bad incentives to go unchanged, leading to consistently failing projects that are correctly predicted to consistently fail. In process terms, it’s a bit like what happens to AI art models when they consume too much AI art in training.
The way info from the non-numerate gets incorporated into financial markets today is that more sophisticated people and firms scrape social media or look at statistics (like those generated by consumer activity). Markets do not need to be fully accessible in order to be accurate.
I agree with this in general, but it doesn’t seem true for the specific use-case motivating the post. The problem I am thinking about here is how to use a prediction market inside an organization. In this case we cannot rely on anyone who could get the information to put it into the market because the public does not participate—we either get the specific person who actually knows to participate, or the market lacks the information.
I expect this to run into all the usual problems of getting people at work to adopt a toolchain unrelated to their work. These projects normally fail; it looks like it needs to be basically zero effort to bet your information for it to work, which is heroically difficult.
I really want to read the takedown of Helion.
I like the reasoning on the front, but I disagree. The reason I don’t think it holds is because the Western Front as we understand it is what happened after the British Expeditionary Force managed to disrupt the German offensive into France, and the defenses that were deployed were based on the field conditions as they existed.
What I am proposing is that the initial invasion go directly into the teeth of the untested defenses, which were built for the imagined future war (imagined over a period of 40 years or so before the actual war broke out). I reason these defenses contained all of the mistaken assumptions which the field armies made and learned from in the opening months of the war in our history, but built in, with no time or flexibility to correct in the face of a general invasion. Even if Britain eventually enters the war, I strongly expect there would be no surprise attack by the expeditionary force during Germany’s initial invasion, and so predict the Germans take Paris.
That being said, my reasoning does work in reverse and so supports your proposed plan: if we are able to persuade Germany to adopt the historically proven defenses and update them about the true logistical burden, they absolutely could greet the French with Western Front-grade defenses on their side of the border. This provides more than enough time to subjugate Russia before mobilization, or perhaps drive them to surrender outright with confirmation that their chief ally is useless. The less aggressive option with France makes the British and US entries into the war even less likely, I’d wager.
Frankly, conquering France isn’t even a real win condition; it was just what I expected because that’s where the invasion went historically. This makes the whole affair look simpler: Germany and Austria-Hungary are able to prosecute a war on just the Russian and Balkan fronts, it stops being a world war and reduces to a large European war, and they get to exploit the territorial gains going forward.
My idea is a smaller intervention, but I think I like yours better!
Indeed you might—in fact I suggested attacking through the French border directly in the other question where we aid Germany/Austria rather than try to prevent the war.
The idea of defending against France is an interesting one—the invasion plans called for knocking out France first and Russia second based on the speed with which they expected each country to mobilize, and Russia is much slower to conquer just based on how far everyone has to walk. Do you estimate choosing to face an invasion from France would be worth whatever they gain from Russia, in the thinking of German command?
I genuinely don’t know anything about Germany’s plans for Russia post invasion in the WW1 case, so I cannot tell.
Under these conditions yes, through the mechanism of persuading German High Command to invade through the French border directly rather than going through Belgium. Without the Belgian invasion, Britain does not enter the war (or at least not so soon); without Britain in the war Germany likely does not choose unrestricted submarine warfare in the Atlantic; without unrestricted submarine warfare the US cannot be induced to enter the war on the side of the French.
As to why the direct invasion would work, we have the evidence from clashes in the field that the German armies were in general superior to the French ones, including those with defensive positions, and field experience also showed that the innovations which went into the new defenses (and the war generally) were poorly understood and inefficiently used (I have in mind here particularly the habit of radically overshooting targets and extreme underestimates of the supply requirements to sustain fire).
My extremely rough guess is that the fortifications along the border add a few days to a week of delay, with the rest of the German strategy and timetable going according to plan.
My best path for a yes is through the mechanism of Great Britain being very explicit with Germany about their intent to abide by the 1839 Treaty of London.
For context, this is the one where the signatories promise to declare war on whoever invades Belgium, and was Britain’s entry point into the war. There were at least some high ranking military officers who believed that had Britain said specifically that they would go to war if Belgium were invaded, Germany would have chosen not to invade.
Power seeking mostly succeeds by the other agents not realizing what is going on, so it either takes them by surprise or they don’t even notice it happened until the power is exerted.
Yet power seeking is a symmetric behavior, and power is scarce. The defense is to compete for power against the other agent, and try to eliminate them if possible.
I agree with this, and I am insatiably curious about what was behind their decisions about how to handle it.
But my initial reaction based on what we have seen is that it wouldn’t have worked, because Sam Altman comes to the meeting with a pre-rallied employee base and the backing of Microsoft. Since Ilya reversed on the employee revolt, I doubt he would have gone along with the plan when presented a split of OpenAI up front.
I agree in the main, and I think it is worth emphasizing that power-seeking is a skillset, which is orthogonal to values; we should put it in the Dark Arts pile, and anyone involved in running an org should learn it at least enough to defend against it.
A few years after the fact: I suggested Airborne Contagion and Air Hygiene for Stripe’s [reprint program](https://twitter.com/stripepress/status/1752364706436673620).