The other day I was wondering to myself what might be important to be thinking about. I ended up posing a query to ChatGPT related to that to see what popped out. Two of the questions it raised were most interesting to me, and somewhat framed a new meta-question for me. Those two questions were:
If knowledge becomes cheap, what becomes scarce?
If AI can do most of the thinking, which complementary human skills become more valuable?
Both questions relate to the deployment and integration of AI and what that might mean for people. The meta-question I had was “What to do in an AI world?” Hardly a new question, but it got me thinking about how to try framing more targeted questions that might yield some actual information or potential plans. I decided I could put the big question into 3 separate buckets, which I labelled based on some generally well-known fictions (maybe the first shouldn’t be called fiction but....)
The “If we build it we all die” world
The Middle Earth world, and
The Giskard & Olivaw world
I found, at least for me, that 1 and 3 were actually quite boring and generally implied I would make no real changes, and have no need to make any, in how I approach living my life. The Middle Earth world, by contrast, seems to beg answers to the above two questions.
I jotted down some thoughts, hardly in an organized way, and then thought I might post to LW to get some reactions and additional insights/thoughts if anyone cared to voice them. Part of that interaction was ChatGPT offering to help refine and reorganize. With a small amount of back and forth I got to what is offered below. I chose to let the AI write it as I really do suck at writing. I don’t think ChatGPT fully grasped some of what I was aiming at. I may continue asking myself, and trying to answer, more on these questions, dropping some of the suggested areas and maybe adding others.
In any case:
What to Do in an AI World?
Below is the LLM “slop”, collapsed for those that hate seeing such things. While some of what was generated works well for me and does reflect what I was thinking about, some is a bit off the mark. In part I think that comes from the desire for some brevity and the attempt to elicit other thoughts rather than really push my own musings at the start.
However, starting with the two questions (when knowledge gets cheap, what gets scarce; and when AI can do a lot of the thinking, what becomes valuable) I started from a largely economic production model. Things are scarce because few are produced relative to other things and to demand for the item. If a good is used as an input in producing other items, as many are, then as it becomes cheaper more of it gets produced, and one should be able to conclude that production of the items depending on that now-cheaper input will also increase. There is complementarity in production. (A toy sketch of this is below.)
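As a minimal illustration of that complementarity, here is a toy sketch in Python (all the numbers and the linear demand curve are invented purely for illustration): if the price of a downstream good is just its input cost plus a markup, then a cheaper shared input lowers every downstream price and raises every downstream quantity.

```python
# Toy linear model: demand for a downstream good falls with its price,
# and its price is simply input cost plus a fixed markup.
# All parameter values are made up for illustration.

def downstream_quantity(input_cost, markup=5.0, demand_intercept=100.0, slope=2.0):
    """Quantity sold of a good whose price is input_cost + markup."""
    price = input_cost + markup
    return max(0.0, demand_intercept - slope * price)

# As the shared input (think: knowledge) gets cheaper, every good that
# uses it gets cheaper too, and more of each gets produced.
for cost in (20.0, 10.0, 1.0):
    print(cost, downstream_quantity(cost))
# prints: 20.0 -> 50.0, 10.0 -> 70.0, 1.0 -> 88.0
```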
But “knowledge” (that’s a tricky term, I think) is a bit different from something like steel or leather. What goods don’t have knowledge as a rather important input? So if knowledge is cheap, one might say nothing is scarce. A bit simplistic, and incorrect, but also not completely wrong, so the question itself needs a bit more examination.
I might put that in an inflation-framed case. If all prices are rising at the same rate, then inflation doesn’t matter—or at least the claims are generally that it doesn’t matter. In the real world inflation doesn’t manifest itself that way, so inflationary periods always impact relative prices, disturb existing equilibria, and largely cause everyone, not just the lenders, headaches. But we also know that, for better or worse, people react to nominal prices and seem to have some difficulty in recognizing just what the relative price changes actually are. The numbers, not the ratios, have some effect. But that should not really matter for making good economic decisions.
So the cheap knowledge world seems a bit like the uniform inflation world*. If a lot of people are reacting to the nominal “price,” that’s probably not the right thing to do. In today’s world we don’t often face decisions where the nominal-versus-relative distinction matters much; is this Middle Earth AI world one more like that? If so, then the skills we don’t have, or don’t often use, that help us recognize cases of nominal-type changes rather than relative-type changes may matter.
That’s all rather abstract, and I don’t think the nominal-relative economic metaphor does a great job illustrating the point, but it’s the best I can come up with right now.
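To make the metaphor a bit more concrete, here is a minimal sketch in Python (the goods, prices, and inflation rates are all made up for illustration) of the point that uniform inflation leaves relative prices, and hence correct economic decisions, unchanged, while uneven inflation shifts them:

```python
import math

# Hypothetical goods and prices, invented purely for illustration.
goods = {"bread": 2.00, "steel": 700.00, "advice": 150.00}

def inflate(prices, rates):
    """Return new nominal prices after applying per-good inflation rates."""
    return {g: p * (1 + rates[g]) for g, p in prices.items()}

def relative_prices(prices, numeraire="bread"):
    """Express every price as a ratio to the numeraire good."""
    return {g: p / prices[numeraire] for g, p in prices.items()}

# Uniform inflation: every nominal price rises 10%.
uniform = inflate(goods, {g: 0.10 for g in goods})

# The ratios are unchanged, so no decision that depends on
# relative prices should change, even though every number did.
assert all(
    math.isclose(relative_prices(uniform)[g], relative_prices(goods)[g])
    for g in goods
)

# Real-world inflation: rates differ across goods, so relative prices
# shift and existing equilibria are disturbed.
uneven = inflate(goods, {"bread": 0.15, "steel": 0.05, "advice": 0.10})
print(relative_prices(goods))   # original ratios
print(relative_prices(uneven))  # different ratios: real decisions change
```

The analogy, as I read my own footnote below, would be that if cheap AI knowledge lowers every “price” by roughly the same factor, the ratios that should guide decisions don’t move even though all the nominal numbers do.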
Going with the idea that knowledge is an input to everything, so that the AI world is one of abundance, time probably becomes more scarce. Or, perhaps more correctly, assessing what to do with our time becomes more problematic and expensive (more scarce?) because we find we now have many more, and perhaps new and unexperienced, options to spend our time on.
With that type of problem, understanding oneself seems to become more and more important. It’s hard to make a good decision about what to do if you don’t really understand deeply what it is that is important and of value to you. That may be counter-intuitive, or it may be incorrect. With fewer options we still benefit greatly from understanding what really drives us personally. But I might also find that if I don’t have too many options, the ones that will be better for me may well be more obvious and not require much soul-searching, so I can shirk a bit regarding the effort to think about who and what I am. Does the knowledge we’re talking about in an AI world reduce the costs there, or is that a bit outside what an AI can even offer? Perhaps the skills and effort towards introspection become more valuable, and in a very real sense more scarce, in that middle AI world.
[* The intuition here is that in the monetary case, central banks are the source of the money that enables most inflation. Injection effects and transmission paths affect how people observe the rising prices relative to one another, so that uniform inflation world just cannot exist for us. But cheap knowledge from AI is different. The AI might be central (and perhaps even restricted, which breaks this thought) but access is open. Everyone has easy access to the knowledge now. This significantly changes the dynamics and I think flattens the case—so it becomes much more like the uniform inflation case.]
Below is the LLM output.
ChatGPT edit and rewrite from the chat
When thinking about AI’s long-term effects, it helps to distinguish between a few different possible futures. For the purposes of this discussion, I find it useful to divide them into three broad categories:
1. The “If We Build It We All Die” world
2. Middle Earth
3. The Giskard & Olivaw world
The names come from literature, though I’m not claiming the futures would resemble those stories in detail.
Interestingly, the two extremes are the least interesting for practical purposes.
If the first scenario is correct—if sufficiently powerful AI leads to human extinction—then there is not much to say about life in an AI world. In that case there *is no* meaningful AI world for humans. The only world that matters is the pre-AI world we live in now. And realistically there is little for most individuals to do about it except continue living their lives. Preventing AI entirely seems implausible. Work on safety and alignment may improve outcomes, but ordinary people have limited direct influence over that trajectory.
The opposite extreme is what I’ll call the Giskard & Olivaw world, after the robots in Isaac Asimov’s novels. Those robots could read minds and predict the future with remarkable accuracy, and they used those abilities to subtly guide humanity away from civilization-ending disasters. Humans retained a large degree of freedom, but the worst outcomes were quietly prevented and in general people lived very materially rich and easier lives thanks to AI and robots. In the extreme one need not do anything to improve oneself and still enjoy a very comfortable living. (Perhaps one needs to have some self-motivation and curiosity about things, but the cost of being lazy is very low and perhaps many follow that bad incentive structure.)
In a world like that, AI would effectively manage humanity’s largest risks while enabling enormous abundance. For most people, life would largely continue as it does today—except materially better.
In both extreme scenarios, the actions of ordinary individuals matter relatively little. Either AI destroys us, or it solves our largest problems.
The interesting case lies between these extremes.
I’ll call this Middle Earth: a world where AI becomes extremely powerful but does not completely determine humanity’s fate. Humans still make choices, compete for status, build institutions, and shape their societies—but they do so within an environment profoundly altered by AI.
Thinking about life in such a world raises many questions. Two in particular seem especially interesting:
1. If knowledge becomes cheap, what becomes scarce?
2. If AI can do most of the thinking, which complementary human skills become more valuable?
These questions may help illuminate what life in a “Middle Earth” AI world might actually look like.
If Knowledge Becomes Cheap, What Becomes Scarce?
One natural starting point is economic intuition. If a production process becomes dramatically cheaper, then the goods produced by that process generally become cheaper as well.
If AI drastically reduces the cost of producing knowledge—analysis, explanations, designs, plans, code—then many things that depend on knowledge production should also become cheaper.
At first glance this suggests a world of broad abundance. But that observation alone isn’t very informative. A more useful question may be:
Which things remain scarce even when knowledge production becomes cheap?
Several possibilities come to mind.
Time
No matter how much knowledge AI can produce, humans still face the same constraint: 24 hours in a day.
In fact, AI may increase the scarcity of time. As more activities become accessible—because AI can help us perform them—the number of possible ways to spend time expands. That increases opportunity cost. Choosing how to spend time may become more difficult precisely because more options become feasible.
This suggests that understanding our own preferences—what we actually want to do—may become more valuable rather than less.
Attention
Attention may become scarce for related reasons.
If AI dramatically expands the set of things we *could* pay attention to—ideas, projects, entertainment, opportunities—then managing attention becomes harder.
There is already a hint of this dynamic in modern politics. Participating meaningfully as an informed citizen has become more difficult as the complexity of society increases. People often respond by delegating attention—to institutions, media, or trusted intermediaries.
An AI world might amplify this pattern. One possibility is that people increasingly defer judgment to AI systems. Another possibility is that managing one’s attention becomes an increasingly important personal skill.
Trust and Verification
If AI systems can produce convincing text, images, and analysis at very low cost, then verification may become more valuable.
Knowing whether something is authentic—whether a claim, piece of information, or analysis is reliable—could become an important scarce resource.
Reputation systems, trusted institutions, and mechanisms for verification might become more important rather than less.
Taste and Curation
When knowledge production is expensive, expertise often lies in creating answers.
When knowledge production becomes cheap, expertise may shift toward deciding which answers matter. [Note: I think this is a weak framing by the AI; which answers one seeks in a high-cost knowledge world certainly is a very important problem that must be solved, and it is only a little easier if you have fewer alternatives to consider.]
In other words, taste and curation.
Choosing what is worth doing, what is worth building, or what problems are actually important may become more valuable than producing solutions to well-defined problems.
This may be related to what some people have called *context engineering*: framing problems in ways that allow powerful AI systems to produce useful outputs.
Status
Another possibility is status.
Historically, knowledge has often been a path to status. Experts, scholars, advisors, and professionals derive part of their status from possessing knowledge that others lack.
If AI systems give everyone easy access to sophisticated analysis and explanations, that particular path to status may weaken.
But humans rarely stop caring about status simply because one route to it disappears. If knowledge ceases to differentiate people, status may shift toward other dimensions.
Possibilities might include:
* judgment (knowing what questions to ask)
* taste (choosing worthwhile goals)
* character and reliability
* leadership and coordination
* physical or embodied achievements
If this is correct, then the social role of expertise may change significantly even if knowledge itself becomes abundant.
Which Human Skills Become More Valuable?
The second question follows naturally from the first.
If AI systems can perform much of the cognitive work currently done by humans—analysis, planning, explanation—then what human abilities become more valuable in that environment?
One candidate is problem framing.
If AI systems are powerful tools for solving problems, then much of the value may lie in identifying which problems to solve and how to formulate them. The ability to translate messy real-world situations into well-posed problems might become increasingly important.
Another candidate is judgment under uncertainty.
Even if AI systems can generate sophisticated analyses, humans may still need to decide which analyses to trust, which goals to pursue, and which tradeoffs are acceptable. These decisions often involve values rather than purely technical reasoning.
Coordination and leadership may also become more valuable. Many important problems require groups of people to align around shared goals. AI may assist with analysis and planning, but motivating and organizing humans may remain a distinctly human domain.
Finally, there may be increasing value in understanding humans themselves—their motivations, values, and preferences. If AI systems can handle many technical tasks, the limiting factor in many projects may become human agreement about what should be done. [Note: here again I think the AI missed some of what I was thinking. To me what might be at least as important, and certainly so in terms of our individual lives, is understanding ourselves. Having seen the post this morning about noticing your own state of feeling, that seems at least close to some of what I was actually thinking. But still the social interactions will matter too—so you end up with things like specializing in context engineering or the like.]
The other day I was wondering to myself what might be important to be thinking about. I ended up posing a query to ChatGPT related to that to see what popped out.
(not only) in Soviet Russia, the LLM prompts you to do deep research
alternatively: if you prompt long into an LLM, the LLM also prompts into you
I’d have to imagine that, so long as humans remain mostly-human, the universal status multipliers like willingness to struggle, physical beauty/aptitude, and social grace will determine where people stand among others. Assortative mating will likely continue, too, though the recent trend of wealthy individuals buying eggs (and, less frequently, sperm) in lieu of courtship might affect evolutionary dynamics, there, especially as biotech removes the bottleneck on egg donation such that even one willing donor can clone her eggs as many times as needed. As always, ‘who mates with who’ seems likely to be something people still care about, and that’s likely to be determined by status.
I can imagine a world where the top echelon of humanity essentially live like the ancient Greeks in their most idealized form, the bottom echelon completely throw themselves into escapism, and the middle pick a zygote from willing members of the top echelon and become single parents.
One caveat is that I have no idea how groups that want lots of children will be handled. It takes fewer generations than humanity has already had for a relatively reasonable 5.0 birthrate to explode into more people than the universe has atoms. Modern socialization decreases birthrate on average, but some portion of the drive to reproduce is due to genetic inclinations, and a world in which everyone is rich enough to have as many kids as they want doesn’t seem sustainable on those grounds. Maybe everyone gets a transferable budget of one child.
@jmh please avail yourself of the new LLM content block (or a collapsible section) for the LLM output.
I suspect some have seen this argument already, but I thought it was an interesting point of view on an aspect of AI that was new to me.
The Platonic Case Against AI Slop