Can work well as long as the enemy doesn't have good intel regarding the actual number of trained dogs, even if they cannot identify the specific dogs. But I suspect there are probably ways to get a trained dog to reveal itself without actually giving up the bombs.
jmh
The other day I was wondering to myself what might be important to be thinking about. I ended up posing a related query to ChatGPT to see what popped out. Two of the questions it returned were most interesting to me, and they somewhat framed a new meta-question for me. Those two questions were:
If knowledge becomes cheap, what becomes scarce?
If AI can do most of the thinking, which complementary human skills become more valuable?
Both questions relate to the deployment and integration of AI and what that might mean for people. The meta-question I had was "What to do in an AI world?" Hardly a new question, but it got me thinking about how to frame more targeted questions that might yield some actual information or potential plans. I decided I could put the big question into 3 separate buckets, which I labeled based on some generally well-known fictions (maybe the first shouldn't be called fiction but....)
The “If we build it we all die” world
The Middle Earth world, and
The Giskard & Olivaw world
I found, at least for me, that 1 and 3 were actually quite boring and generally implied I have no need to make any real changes in how I approach living my life. The Middle Earth world, by contrast, seems to beg answers to the above two questions.
I jotted down some thoughts, hardly in an organized way, and then thought I might post to LW to get some reactions and additional insights/thoughts if anyone cared to voice them. Part of that interaction was ChatGPT offering to help refine and reorganize. With a small amount of back and forth I got to what is offered below. I chose to let the AI write it as I really do suck at writing. I don't think ChatGPT fully grasped some of what I was aiming at. I may continue asking myself, and trying to answer, more of these questions, dropping some of the suggested areas and maybe adding others.
In any case:
What to Do in an AI World?
Below is the LLM "slop", collapsed for those that hate seeing such things. While some of what was generated works well for me and does reflect what I was thinking about, some of it is a bit off the mark. In part I think that comes from the desire for some brevity and an attempt to elicit other thoughts rather than really push my own musings and views at the start.
However, starting with the two questions (when knowledge gets cheap, what gets scarce? and when AI can do a lot of the thinking, what becomes valuable?), I started from a largely economic production model. Things are scarce because few are produced relative to other things and to demand for the item. If some good is used as an input in producing other items, as many are, then as it becomes cheaper to produce, more of it gets produced, so one should be able to conclude that production of the items depending on that now-cheaper input will also increase. There is complementarity in production.
But “knowledge” (that’s a tricky term I think) is a bit different from something like steel or leather. What goods don’t have knowledge as a rather important input? So if knowledge is cheap one might say nothing is scarce. A bit simplistic, and incorrect, but also not completely wrong so the question itself needs a bit more examination.
I might put that in an inflation-framed case. If all prices are rising at the same rate, then inflation doesn't matter, or at least the claim is generally that it doesn't matter. In the real world inflation doesn't manifest itself that way, so inflationary periods always impact relative prices, disturb existing equilibria, and largely cause everyone, not just the lenders, headaches. But we also know that, for better or worse, people react to the nominal prices and seem to have some difficulty recognizing just what the relative price changes actually are. The numbers, not the ratios, have some effect. But that should not really matter for making good economic decisions.
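The uniform-inflation intuition can be checked with a toy calculation (the goods, prices, and inflation rate below are made-up illustrative numbers): if every price is scaled by the same factor, every relative price is unchanged, so in principle no purely real economic decision should change.

```python
# Toy illustration: uniform inflation leaves relative prices unchanged.
# All prices and the inflation factor are made-up illustrative numbers.
prices = {"bread": 2.0, "steel": 100.0, "labor_hour": 25.0}

inflation_factor = 1.10  # 10% inflation applied uniformly to every price
inflated = {good: p * inflation_factor for good, p in prices.items()}

# Express every good in units of bread, before and after inflation.
relative_before = {g: p / prices["bread"] for g, p in prices.items()}
relative_after = {g: p / inflated["bread"] for g, p in inflated.items()}

# The ratios match: only the nominal numbers moved, not the trade-offs.
assert all(abs(relative_before[g] - relative_after[g]) < 1e-9 for g in prices)
```

The point being that if people nonetheless react to the nominal numbers, as they empirically do, they are reacting to something that carries no decision-relevant information.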
So the cheap-knowledge world seems a bit like the uniform-inflation world*. If a lot of people are reacting to the nominal "price," that's probably not the right thing to do. In today's world we don't really face impacts and decisions that are largely nominal rather than relative in nature; is this Middle Earth AI world one more like that? If so, then the skills we don't have, or don't often use, that help us recognize nominal-type changes rather than relative-type changes may come to matter.
That's all rather abstract, as I don't think the nominal-relative economic metaphor does a great job illustrating the point, but it's the best I can come up with right now.
Going with the view that knowledge is an input to everything, so that the AI world is one of abundance, time probably becomes more scarce. Or perhaps more correctly, assessing what to do with our time becomes more problematic and expensive (more scarce?) because we find we now have many more, and perhaps new and unexperienced, options to spend our time on.
With that type of problem, understanding oneself seems to become more and more important. It's hard to make a good decision about what to do if you don't really understand deeply what it is that is important and of value to you. That may be counter-intuitive, or it may be incorrect. With fewer options we still benefit greatly from understanding what really drives us personally. But I might also find that if I don't have too many options, the ones that will be better for me may well be more obvious and not require much soul-searching, so I can shirk a bit on the effort to think about who and what I am. Does the knowledge we're talking about in an AI world reduce the costs there, or is that a bit outside what an AI can even offer? Perhaps the skills and effort of introspection become more valuable, and in a very real sense more scarce, in that middle AI world.
[* The intuition here is that in a monetary case, central banks are the source of the money that enables most inflation. Injection effects and transmission paths affect where the increasing prices are observed relative to one another, so the uniform-inflation world just cannot exist for us. But cheap knowledge from AI is different. The AI might be central (and perhaps even restricted, which breaks this thought) but access is open. Everyone has easy access to the knowledge now. This significantly changes the dynamics and, I think, flattens the case, so it becomes much more like the uniform-inflation case.]
Below is the LLM output.
ChatGPT edit and rewrite from the chat
When thinking about AI’s long-term effects, it helps to distinguish between a few different possible futures. For the purposes of this discussion, I find it useful to divide them into three broad categories:
1. The “If We Build It We All Die” world
2. Middle Earth
3. The Giskard & Olivaw world
The names come from literature, though I’m not claiming the futures would resemble those stories in detail.
Interestingly, the two extremes are the least interesting for practical purposes.
If the first scenario is correct—if sufficiently powerful AI leads to human extinction—then there is not much to say about life in an AI world. In that case there *is no* meaningful AI world for humans. The only world that matters is the pre-AI world we live in now. And realistically there is little for most individuals to do about it except continue living their lives. Preventing AI entirely seems implausible. Work on safety and alignment may improve outcomes, but ordinary people have limited direct influence over that trajectory.
The opposite extreme is what I'll call the Giskard & Olivaw world, after the robots in Isaac Asimov's novels. Those robots could read minds and predict the future with remarkable accuracy, and they used those abilities to subtly guide humanity away from civilization-ending disasters. Humans retained a large degree of freedom, but the worst outcomes were quietly prevented, and in general people lived materially rich and easier lives thanks to AI and robots. In the extreme one need not do anything to improve oneself and can still enjoy a very comfortable living. (Perhaps one needs some self-motivation and curiosity about things, but the cost of being lazy is very low, and perhaps many follow that bad incentive structure.)
In a world like that, AI would effectively manage humanity’s largest risks while enabling enormous abundance. For most people, life would largely continue as it does today—except materially better.
In both extreme scenarios, the actions of ordinary individuals matter relatively little. Either AI destroys us, or it solves our largest problems.
The interesting case lies between these extremes.
I’ll call this Middle Earth: a world where AI becomes extremely powerful but does not completely determine humanity’s fate. Humans still make choices, compete for status, build institutions, and shape their societies—but they do so within an environment profoundly altered by AI.
Thinking about life in such a world raises many questions. Two in particular seem especially interesting:
1. If knowledge becomes cheap, what becomes scarce?
2. If AI can do most of the thinking, which complementary human skills become more valuable?
These questions may help illuminate what life in a “Middle Earth” AI world might actually look like.
If Knowledge Becomes Cheap, What Becomes Scarce?
One natural starting point is economic intuition. If a production process becomes dramatically cheaper, then the goods produced by that process generally become cheaper as well.
If AI drastically reduces the cost of producing knowledge—analysis, explanations, designs, plans, code—then many things that depend on knowledge production should also become cheaper.
At first glance this suggests a world of broad abundance. But that observation alone isn’t very informative. A more useful question may be:
Which things remain scarce even when knowledge production becomes cheap?
Several possibilities come to mind.
Time
No matter how much knowledge AI can produce, humans still face the same constraint: 24 hours in a day.
In fact, AI may increase the scarcity of time. As more activities become accessible—because AI can help us perform them—the number of possible ways to spend time expands. That increases opportunity cost. Choosing how to spend time may become more difficult precisely because more options become feasible.
This suggests that understanding our own preferences—what we actually want to do—may become more valuable rather than less.
Attention
Attention may become scarce for related reasons.
If AI dramatically expands the set of things we *could* pay attention to—ideas, projects, entertainment, opportunities—then managing attention becomes harder.
There is already a hint of this dynamic in modern politics. Participating meaningfully as an informed citizen has become more difficult as the complexity of society increases. People often respond by delegating attention—to institutions, media, or trusted intermediaries.
An AI world might amplify this pattern. One possibility is that people increasingly defer judgment to AI systems. Another possibility is that managing one’s attention becomes an increasingly important personal skill.
Trust and Verification
If AI systems can produce convincing text, images, and analysis at very low cost, then verification may become more valuable.
Knowing whether something is authentic—whether a claim, piece of information, or analysis is reliable—could become an important scarce resource.
Reputation systems, trusted institutions, and mechanisms for verification might become more important rather than less.
Taste and Curation
When knowledge production is expensive, expertise often lies in creating answers.
When knowledge production becomes cheap, expertise may shift toward deciding which answers matter. [Note: I think this is a weak framing by the AI; deciding which answers one seeks in a high-cost knowledge world is certainly a very important problem that must be solved, and it is only a little easier if you have fewer alternatives to consider.]
In other words, taste and curation.
Choosing what is worth doing, what is worth building, or what problems are actually important may become more valuable than producing solutions to well-defined problems.
This may be related to what some people have called *context engineering*: framing problems in ways that allow powerful AI systems to produce useful outputs.
Status
Another possibility is status.
Historically, knowledge has often been a path to status. Experts, scholars, advisors, and professionals derive part of their status from possessing knowledge that others lack.
If AI systems give everyone easy access to sophisticated analysis and explanations, that particular path to status may weaken.
But humans rarely stop caring about status simply because one route to it disappears. If knowledge ceases to differentiate people, status may shift toward other dimensions.
Possibilities might include:
* judgment (knowing what questions to ask)
* taste (choosing worthwhile goals)
* character and reliability
* leadership and coordination
* physical or embodied achievements
If this is correct, then the social role of expertise may change significantly even if knowledge itself becomes abundant.
Which Human Skills Become More Valuable?
The second question follows naturally from the first.
If AI systems can perform much of the cognitive work currently done by humans—analysis, planning, explanation—then what human abilities become more valuable in that environment?
One candidate is problem framing.
If AI systems are powerful tools for solving problems, then much of the value may lie in identifying which problems to solve and how to formulate them. The ability to translate messy real-world situations into well-posed problems might become increasingly important.
Another candidate is judgment under uncertainty.
Even if AI systems can generate sophisticated analyses, humans may still need to decide which analyses to trust, which goals to pursue, and which tradeoffs are acceptable. These decisions often involve values rather than purely technical reasoning.
Coordination and leadership may also become more valuable. Many important problems require groups of people to align around shared goals. AI may assist with analysis and planning, but motivating and organizing humans may remain a distinctly human domain.
Finally, there may be increasing value in understanding humans themselves—their motivations, values, and preferences. If AI systems can handle many technical tasks, the limiting factor in many projects may become human agreement about what should be done. [Note: here again I think the AI missed some of what I was thinking. To me what might be at least as important, and certainly so in terms of our individual lives, is understanding ourselves. Having seen the post this morning about noticing your own state of feeling, that seems at least close to some of what I was actually thinking. But still, the social interactions will matter too, so you end up with things like specializing in context engineering or the like.]
Countries which sell some territory to their neighbors undermine their ability to credibly commit to defending the rest of their territory.
I'm trying to come up with examples of that, but the only ones I can think of both involve the USA as the buyer, with other countries selling. France sold some of its territory to the USA (the Louisiana Purchase) and Russia sold Alaska to the USA as well (not sure if there is a name for that transaction). But I don't think either was signaling that it would not defend the remaining territory of its country.
Are these exceptions, given the geographic distance from the political center, or are they problems for the underlying claim?
[Note: this comment is specific to the quoted statement, not really addressing the overall post. But if that bullet point has some value to the general argument in the post and the conclusions depend on it, I think I need to wonder just how strong a case has been made.]
I agree that in the modern age there has been something to that effect. Historically too, I think, just going off fiction (which is often rather well researched). But there are also a lot of counter-examples there. Perhaps the question isn't whether there is/has been some standard of "don't go after the leaders of a country" but rather when and why one might.
Your point about having someone that can actually agree to the end of a conflict and tell their side to stop fighting could be valuable. At the same time the idea of disrupting all the command and control paths is highly valuable during a conflict—and if that is best accomplished by removing the leadership it might outweigh the value of having someone that can authoritatively issue “stop” orders.
I don’t know if this administration, or the Israelis, thought taking out Khamenei and his leadership council would accomplish that or not. It does seem that Iran has a reasonably robust political and military line of succession.
The other view that comes to mind is just the problem all governments have to deal with: factions. Whatever form of government is in place has to manage all the factions—no autocrat has ever been purely independent and able to do whatever that person wanted. They all manage the competing interests of others to remain in power. Perhaps taking out the leadership of a country shifts the factional equilibrium toward a more favorable position (which could be a difficult calculation if domestic knowledge about factional relationships is weak).
Kind of an old idea. Arthur C. Clarke's old sci-fi book "Childhood's End" has that as a premise (though in his case the aliens were our visual model for devils: wings, tails, horns, reddish complexion...)
Does this model match with some long-standing political models that look through the lens of concentrated and diversified interests? Voters are a very diverse bunch, but donors, individually and within some fairly well-defined groups, tend to have some very well-defined interests driving their political involvement.
The fundamental purpose of democratic government is to handle the implementation of the policies that the population wants.
That’s probably a fairly common view, and I don’t quite disagree, but it seems somewhat naive or in the whole “governments and democratic institutions act in the public interest” camp of poli-sci.
While perhaps a bit difficult to fully separate from the suggested purpose, I do see democracy very much as a way of managing factional conflict via some more peaceful social mechanism than just brute force, where winners get their way. Pure democracy might be similar, but I think constitutional-type democracies do try to provide some base protection for the "losing" side(s) while still promoting discussion and compromise over simple force.
Not sure how much that might shift the views and interpretations in this discussion, but it seems that if we start from a potentially partial or incorrect premise we'll not find the conclusions all that fruitful or insightful.
While more work than I would be interested in doing, I would think that with the existing online presence of newspapers, as well as national papers' local coverage sections and online local news, one could directly verify the claim of reduced coverage.
I do agree that over-influence by national party-line positions will push toward more polarization, and I would suggest poorer outcomes and policies locally. I've wondered why states don't view out-of-state campaign funding the same way the USA considers out-of-country (foreign) contributions, given the diversity of subcultures and economies within the 50 states. And while I suspect a lot of funding probably comes from the parties, I would suspect a large amount also just comes from outside interests that operate more at the national level than locally and will likely support candidates on either side who will support those outside interests regardless of overall party position.
It does seem that we're standing the old claim that all politics is local on its head in the 21st century, with the dominance of the national party and its ability to control the local agenda and candidates. (This is more hypothesis than something I've established for myself, but it certainly seems to fit the narrative about the current Republican party and some older grumbling from both parties in the past.)
I'm not surprised by the findings. From a quick search:
The Pentagon features over 30 distinct food service locations, including more than 20-24 various restaurants, fast-food chains, and cafes catering to its 23,000+ employees. The facility includes three main food courts—most notably the 875-seat Concourse Food Court—along with numerous individual kiosks, branded vendors (e.g., Starbucks, Subway, Taco Bell, Popeyes), and the Center Court Café.
When asking specifically about delivery:
Food orders can be delivered to the Pentagon, but not directly to offices; they must be screened at a remote facility or picked up by personnel at designated secure areas like the Pentagon Metro. Perishable items are generally prohibited from being delivered directly to the building, but staff frequently order from local spots during long shifts.
Key details regarding food deliveries to the Pentagon:
Delivery Procedures: All items must go through the Pentagon Remote Delivery Facility, where they are screened and inspected.
Pickup Location: Employees often meet delivery drivers at secure, accessible points outside the main building, such as the Metro.
Internal Options: The Pentagon contains its own food court with options like McDonald’s, Five Guys, Starbucks, and Subway, which are accessible to employees.
“Pentagon Pizza Theory”: Sudden surges in local, late-night pizza orders to the Pentagon have historically been noted as a potential, unofficial indicator of increased, high-stakes military activity.
But you do have the mention of the theory you’re debunking.
Seems like outside delivery is really complicated and time-consuming (though I don't know if the internal food halls can deliver to the office door, it seems like that might still be much quicker than leaving the building to meet the delivery person). Plus, a lot more than pizza can be delivered these days (though perhaps the "Pizza" in the name should not be taken literally).
As a side note, years back when I was working in the Intelligence field, when I first started reading some of the CIA's classified documents and studies, I was surprised by just how much of the source information was from general, publicly available, unclassified information.
I heard a story similar to the Alchian story many years back. A Ph.D. candidate's dissertation was about the risks to the power grid (I forget if it was just electric or if other distribution networks were considered), and it pointed out a number of ways an adversary could disrupt and disable the grid. It got published, as is usual, and then got noticed, classified, and perhaps is no longer even searchable in the dissertation archives (if it is, it's highly redacted, I suspect).
I would lump such cases into the bucket of info hazards.
Source: I made it up!
LOL—HSI hallucinations?
No worries, we all make some mistakes with our assumptions at times and forget to double-check every fact. I think it was a minor, and largely trivial, error relative to the larger point. I just wasn't sure and did a quick Google check (so had Gemini answering, but I've seen it hallucinate enough not to take it as certain), though that can easily miss some finer points.
I think it is very easy to read into a post like this and essentially fall into the very behavior you’re ascribing to the author. Regardless of the OP’s view, the post is not naming names but is very topical. It’s worth considering.
But I do agree that whoever is getting told their actions are unconstitutional will typically see that as an attack if they truly believe they are acting within their powers. But I also suspect any who refuse to accept a Supreme Court ruling never cared about the Constitution or the checks and balances implemented in it. It's simply a case of someone refusing to accept that they are not a good judge of their own case, which is pretty much at the heart of any rule-of-law society.
Generally I do agree, but given the current Secretary and some of the appointees I would question how strong that "magic" might be. Do you think some generals or armies/divisions would rise up to oppose core units aligned with such a President/Administration? At what point might they do that: the first case of the S.C. ruling something unconstitutional while the President continues? Or is it more likely they stay in their place and we just keep sliding down the slippery slope?
Could you point to your source for the claim about the Marshals Service falling under the Judicial Branch of the government? My understanding is that it belongs to the DoJ and so would fall under the Executive Branch.
Separately, if we're speculating about cases that might be labeled in the gray area of the incomplete contract (the Constitution), I wonder what might happen if the states claim their right to call out their National Guard and perhaps even the more general militia (interesting whether that could be a state draft or purely voluntary, i.e., giving military arms to able-bodied men), the President calls out the military, and then Congress tells all the military their pay is frozen, meaning not only DoD and its branches but the servicemen and any contractors. What might happen then?
If Treasury just says "go ef' yourself, Congress" and cuts the checks, there's not much hope. But what if the banking system refused to honor them given the S.C.'s and Congress's rulings?
Seems like at this point we're talking about some serious brinkmanship, and to be honest I would really prefer not to live in such times (not that many of us actually get a choice here) given the potential for escalation to all-out civil war. But I do wonder if the bigger checks here might not be the informal checks and balances. It seems that in the scenario envisioned (as I understand it, a serious breakdown in government processes and in the checks and balances among the branches) even applying any presumably defined law or division of power is very problematic, which is a bit different from saying the other branches should not try.
But I would also think (as seems true today) you simply don't get to the situation suggested without the government processes and functions related to checks and balances having already deteriorated to the point of dysfunction, which I would suggest is the case and has been developing for many years (50? 100?). We've seen a lot of political-structure innovation that is not quite consistent with the Constitution (congressional delegation of powers, partnership among the branches for efficiency reasons, party domination that serves to eliminate the assumed checks and balances...).
I don't quite like the framing "Don't Exist". I suspect a lot will depend on the specific context in which you need to use the term and the point one wants to make. Should I make a blanket statement that "Murder Does Not Exist" simply because across cultures and national laws there is not 100% agreement on what defines a killing as murder or not? What about the many technical standards that end up producing incompatible implementations that break interoperability? Are there really no standards?
I am probably more sympathetic to the last claim than to the ones about murder or treaties.
I'm not sure there is the trap you claim. I do agree that enforcement, which does mean applying some level of force or power in some cases, is needed. Property rights, or any rights generally, don't just get respect and adherence from all. It's complicated, but I do think one might suggest property rights emerged as preferable to just relying on brute force and power as the determinant. Both Demsetz and Olson have some good work suggesting that property rights, and respect for property rights, arise as much (more?) from the desire and incentives to escape from conflict and the application of force/projection of power.
How well human history and social/cultural evolution might apply to any ASI futures is a big unknown, but for that very reason I tend to think projecting ASI behavior from human history and experience might itself be a bit problematic.
I had a bit of the same reaction (the logic being: many lose their jobs, income craters, goods demand craters, corporate earnings crater... stock prices higher????). But I kind of see it from a "I don't have a good handle on AI and equity market levels in the future, so maybe stick to a strategy that historically makes sense" perspective.
I would only add some slight shifts to the suggestion. While it is also an open question for the average investor whether buy-and-hold or timing the market works well, I do think most here can think it through well enough to consider timing for the option allocation. Simple mean-reversion-type entry points might increase the odds, tempered by where one thinks the overall market is in the cycle.
I wonder if situations like the Cuban missile crisis are good examples for your position. But then I also wonder if that (people apparently worried but calm about the world ending in a nuclear conflict) isn't contrasted by the claims of mass hysteria after the radio broadcast of Wells's War of the Worlds.
You really get asked that? Wow.
I also have always found the "the world might end tonight/tomorrow/next week" stories, with people running around madly doing all the things they never would have otherwise, a bit stretched. But then mob mentalities are not rational, so I don't really try to make too much sense of them.
I suppose that would be my first approach to coping with the world ending: just keep my eyes open for external madness and perhaps put some space between me and large populations or something.
Since I generally don't believe anyone has ever promised me tomorrow, the end-of-the-world case does seem to fit into the "what has that got to do with me" view. I'd much rather live my life on my own terms than concede that I have been living according to other people's terms for some reason and feel the end of the world somehow frees me from some constraints or something.
I would suggest considering both their naval buildup and their position on the South China Sea, along with their historical use of trade/economic and political pressure, backed by military might as an enforcement mechanism, to advance their influence and borders. "By all accounts" is a bit far.