The question is why should we care about slithy toves? How high is the utility of protecting them?
You need to answer those questions to get me to care about slithy toves.
And besides, people won’t stand for it – slithy toves with unwhiffled tulgey wood are a part of our way of life.
In general, culture changes when there is a need for the change. You don’t have to worry too much about technology being “socially acceptable” to present-day values.
Prof. Bandersnatch: Of course! Prof. Jabberwock assures me the singularity will be here around tea-time on Tuesday. That is, if we roll up our sleeves and don’t waste time with trivialities like your tove issue.
I don’t think that’s the case. As far as I remember from the last survey, the average LW participant predicts the singularity after 2100. I see very few people arguing that we shouldn’t fight aging because we will have the singularity before it matters.
But my natural inclination is to take the Jubjub view. I think the chances of a basically business-as-usual future for the next 200 or 300 years are not epsilon.
It depends on what you mean by business-as-usual. In general, history shows us that a lot of changes do happen and things don’t stay constant.
The question is why should we care about slithy toves? How high is the utility of protecting them? You need to answer those questions to get me to care about slithy toves.
In my parable, the two scientists agree that slithiness is important. If I were to convince you of it we would of course have to exit the parable and discuss some particular real world problem on the merits.
It depends on what you mean by business-as-usual.
Which in turn depends on the particular Jubjub problem we are discussing. If it’s global warming, for example, then developments in energy technology will be important.
By business-as-usual, do you mean that we should plan on the cost of solar energy continuing to halve every 7 years?
I don’t have the expertise to predict anything of interest about future developments in solar technology. My general inclination is simply that we should have plans that do not lead to disaster if hoped-for technological advances fail to materialize. If we could make our civilization robust enough that it could continue to function for an indefinite time without any significant technological advances, that would be awesome.
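For concreteness, the “halves every 7 years” claim is just an exponential-decay extrapolation. A toy sketch of what planning on that assumption implies (the $100 starting cost and the horizons are invented for illustration):

```python
# Toy projection of the "solar cost halves every 7 years" assumption.
# Starting cost and horizons are made up for illustration only.

def projected_cost(initial_cost, years, halving_period=7.0):
    """Cost after `years` if it halves every `halving_period` years."""
    return initial_cost * 0.5 ** (years / halving_period)

# A module costing $100 today would, under this assumption, cost:
for years in (7, 14, 28):
    print(years, round(projected_cost(100.0, years), 2))
# 7 50.0
# 14 25.0
# 28 6.25
```

The point of the reply above is precisely that a plan should not collapse if this curve flattens out.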
A robust thing doesn’t change when you exert pressure, until you exert enough pressure to break it. Resilient systems do change when you apply pressure, but they don’t break.
Robust things have a habit of breaking in awful ways. Resilience is a better basis for designing systems that you want to survive.
I don’t think the concern of making society work in a scenario without significant technological advances is pressing. We had a lot of significant technological advances in the last 100 years, and even if Peter Thiel is right and we aren’t doing much innovation at the moment, we still do change things.
It makes much more sense to focus on surviving scenarios with significant technological advances.
It makes sense to avoid betting society on a single technological change, but doing future planning that expects no technological change is not very helpful.
The distinction you are making between robustness and resilience was not previously familiar to me but seems useful. Thank you.
Obviously, “no significant technological advances” is a basically impossible scenario. I just mean it as a baseline. If you’re able to handle techno-stagnation in all domains you’re able to handle any permutation of stagnating domains.
I think the distinction is quite important. People frequently centralize systems to make them more robust. Too-big-to-fail banks are more robust than smaller banks.
On the other hand, they don’t provide resilience. If one breaks down, you’re screwed.
Italy’s political system isn’t as robust as Saudi Arabia’s, but it is probably more resilient.
There are often cases where systems get more robust if you reduce diversity, but that also reduces resilience.
If you’re able to handle techno-stagnation in all domains you’re able to handle any permutation of stagnating domains.
You don’t. If technology A poses risk X and you need technology B to prevent risk X, you are screwed in a world with A and not B, but okay in a world with neither A nor B.
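The counterexample above can be made explicit by enumerating the four possible worlds (the outcome rule is the hypothetical one stated in the comment, not a real risk model):

```python
# Toy enumeration of the point above: technology A creates risk X,
# and only technology B mitigates it. "Handle full stagnation" does
# not cover the world where A arrives but B doesn't.
from itertools import product

def outcome(has_a, has_b):
    """Hypothetical rule: risk X materializes iff A exists without B."""
    return "screwed" if (has_a and not has_b) else "okay"

for has_a, has_b in product([False, True], repeat=2):
    print(f"A={has_a!s:5} B={has_b!s:5} -> {outcome(has_a, has_b)}")
```

Only one of the four worlds is bad, and it is not the full-stagnation one.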
When doing future planning, it’s better to take a bunch of different scenarios of how the future could look and see what your proposals do in each of those than to take the status quo as a scenario.
They don’t provide resilience. If one breaks down, you’re screwed.
Everything can be broken. It’s a misleading approach to think of robust systems as breakable and resilient systems as not breakable.
Both kinds of systems will break with sufficient damage. Ceteris paribus you can’t even say which one will break first. The difference is basically in how they deal with incoming force: the robust systems will ignore it and resilient systems will attempt to adjust to it. But without looking at specific circumstances you can’t tell beforehand which kind will be able to survive longer or under more severe stress.
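The model described above can be sketched as a toy simulation (thresholds, the absorption factor, and the recovery rule are invented for illustration): the robust system ignores force below a hard threshold and then fails outright, while the resilient one deforms under force and partially recovers.

```python
# Toy sketch of the robust-vs-resilient distinction (parameters invented).

class Robust:
    def __init__(self, threshold):
        self.threshold = threshold
        self.broken = False

    def apply_stress(self, force):
        if force > self.threshold:
            self.broken = True  # fails all at once past the threshold

class Resilient:
    def __init__(self, capacity):
        self.capacity = capacity
        self.deformation = 0.0
        self.broken = False

    def apply_stress(self, force):
        self.deformation += force * 0.5  # absorbs part of the force
        if self.deformation > self.capacity:
            self.broken = True
        else:
            self.deformation *= 0.5  # partially recovers afterwards

robust, resilient = Robust(threshold=10), Resilient(capacity=10)
robust.apply_stress(8)
resilient.apply_stress(8)
print(robust.broken, resilient.deformation)  # False 2.0
```

Which system survives longer depends entirely on the numbers chosen, which matches the point that you can’t tell beforehand without looking at specific circumstances.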
There is also the related concept of graceful degradation, by the way.
In general, culture changes when there is a need for the change.
That’s not at all clear, i.e., there isn’t a general optimization process that optimizes culture for what’s needed. There’s memetic evolution, but that has the usual problems. In particular states and even entire civilizations have collapsed in the past.
In a case like using genetic engineering to produce superior humans, there are pressures. If a few people do it and they get benefits, there is cultural pressure for other people to also want the benefits.
Everything can be broken. It’s a misleading approach to think of robust systems as breakable and resilient systems as not breakable.
Both kinds of systems will break with sufficient damage. Ceteris paribus you can’t even say which one will break first. The difference is basically in how they deal with incoming force: the robust systems will ignore it and resilient systems will attempt to adjust to it. But without looking at specific circumstances you can’t tell beforehand which kind will be able to survive longer or under more severe stress.
I think that model works quite well for a lot of practical interventions where people do things to increase robustness at the cost of resilience.
But you are right that not every robust system will break earlier than every resilient one.
That’s not at all clear, i.e., there isn’t a general optimization process that optimizes culture for what’s needed. There’s memetic evolution, but that has the usual problems. In particular states and even entire civilizations have collapsed in the past.
That depends on the particular cultural change.