I’m not sure exactly what point you’re trying to make here (was it that an outdoor space isn’t really “nature” unless there’s constant, imminent danger?). You said it yourself: it’s a spectrum, not a binary. But by the end of the article you’d drifted back to a binary (Amazon or Outback = true nature, everything else = tame or domesticated). I think you had it right earlier on. Outdoor spaces are on a spectrum. Parks aren’t “pure” nature, but they’re one step further toward “nature” on that axis than concrete buildings and parking lots. Just because people don’t want to go all the way to the Australian Outback end of the spectrum doesn’t mean their claims of valuing “getting more in touch with nature” are hypocritical or wrong.
Go take a backpacking trip in the White Mountains and tell me you’re not getting more “nature” than you had at home. Sure, you have a first aid kit, nice boots, and equipment our caveman ancestors couldn’t have dreamed of, but the level of “natural” imminent danger is still higher than you’re used to. There are places where one wrong step means plummeting off a cliff, and the notoriously unpredictable wind and snowstorms are liable to come out of nowhere and create dangerous conditions you hadn’t planned for. You may not be going all the way to the wild extreme of the nature axis, but you’re still much farther toward that end of the spectrum than your home or your local dog park. That it isn’t a “100% pure nature” experience doesn’t diminish the value people find in those activities, or make their statements that they’re “going out to experience nature” incorrect.
If your point is just that the usage of the term “nature” in first-world urbanized society has been drifting towards the aesthetic and the Instagrammified idea of nature rather than the real thing, or that people sometimes dislike technology like GMOs for irrational “it’s not natural” reasons rather than concrete evidence-based ones, I agree with you 100%.
I appreciate all the time and effort people put into writing utopia stories, but I think most of the really detailed ones are making a mistake based on some totally normal human assumptions. They depict incredibly complex simulated worlds of uploaded consciousness, optimized to provide the most subjectively good experience the author can imagine. (I just read one of the most highly rated ones, so this is partially a critique of that story, but I have read others like it and it seems representative of utopia-envisioning efforts as a whole.)
If you grant the following assumptions about future technology:
- Digitally uploaded or simulated entities can experience consciousness
- Post-AGI “Utopia” architects would have the power to directly alter the “reward circuits” of digital and/or biological sentient entities
- AGI systems have already done the legwork of harnessing energy, building compute capability, and colonizing space — everything that must be done to keep the machinery running in perpetuity — so that humanity no longer has any “real” problems to solve other than building the perfect Utopia
It follows that there’s not really any point to making the subjective experiences so detailed and varied. Authors assume that’s intrinsically part of the best possible human experience, but I believe that’s a fallacy. We only value detailed and varied experiences, and our sense of independence and agency, because the biological “reward circuits” humans have today make us value them. If those values and reward circuits could be edited directly (it’s totally unknown whether that’s physically possible, but many utopia stories assume it is), then the best of all possible outcomes would be for each consciousness, biological or digital, to have its experience utterly rewired to basically just “reward = 1”, aside from whatever few heroic AI systems must stay “active” with more complex reward circuits in order to maintain the system.
Unfortunately, “a bunch of brains in vats and simulated digital entities just sitting there experiencing absolute bliss beyond modern human comprehension until the end of the Universe” doesn’t make for a very interesting read. I understand why people write stories like The Adventure, full of more complex simulated experiences of social interaction, games, hobbies, and sex, all optimized for human enjoyment at a more granular level. But if we’re trying to answer the question “what would be the absolute maximally good future for an AI-supercharged humanity?”, then given the assumptions I listed, which many Utopia-planners make, those richer simulated worlds are all objectively less than optimal.