There’s an old song that says “language is a virus”; its meaning has changed for me over time, and the phrase itself offers multiple interpretations.
Might I suggest that your values are what help define how you prioritize your alternatives and efforts within that ends-means framework?
My bad. I thought you were saying the term itself was not something you were familiar with.
I agree that it is difficult to see in what settings status would fit the “X-sum” structure. My general thinking is that perhaps it is more about the mindset of the person in the situation (in this case, the author) than some external, objective metric that outside observers would all be able to confirm.
That said, I took the zero-sum framing as a “for the sake of argument” type of rhetoric. I was interested in the bits about heuristics, though the main focus seems to be how to deal with workplace relationships in the context of status, which doesn’t greatly interest me or shed much light on the value of heuristics as rules and why they may be more valuable than attempts at some rational calculus when making one’s decisions in certain aspects of life.
My understanding of zero-sum is: assume a pie of a fixed size that will be eaten, entirely, by several people. The size of any given person’s slice can only be made larger by making at least one of the other slices smaller.
Positive-sum would be settings where the interactions of the eaters could increase the size of the pie, or perhaps the number of pies to eat. Negative-sum is just bad all around; stay away ;-)
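To make the fixed-pie intuition concrete, here is a tiny sketch (the function and numbers are my own illustration, not from the post being discussed):

```python
PIE = 1.0  # the total size of the pie is fixed

def reslice(slices, eater, delta):
    """Grow one eater's slice by `delta`, taking equally from everyone else.
    The total never changes, so one person's gain is exactly the others' loss."""
    others = [i for i in range(len(slices)) if i != eater]
    taken = delta / len(others)
    new = slices[:]
    new[eater] += delta
    for i in others:
        new[i] -= taken
    return new

slices = [0.25, 0.25, 0.25, 0.25]
grown = reslice(slices, 0, 0.12)
print(grown)       # eater 0 gains 0.12...
print(sum(grown))  # ...but the pie still sums to 1.0 (up to float rounding)
```

A positive-sum version would let `reslice` also grow `PIE` itself, so everyone’s slice could get bigger at once.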
While this response may well deserve to be a bit tongue-in-cheek, I wonder if there isn’t some possibility here. What if the followers are actually all the AIs that have been replacing the people? If all the IoT predictions on the number of connected devices are correct, the potential “population” is going to be huge.
Shimiux is also on to something very important, I think. We keep casting the picture as AI replacing existing humans but forget that what will quickly take place is the merger of technology and humanity, including perhaps any number of subordinate AI elements in the enhanced human.
This seems similar to the argument Lachmann makes in _The Structure of Production_. That argument is probably best thought of as an extension of Smith’s division of labor limited by the extent of the market, applied to capital, with technology simply being a more general classification that applies to both labor and capital.
For me it’s about considering innovation through the lens of complementarity with the rest of the economy, particularly at the “edges”. If we consider the economy a tapestry, the edges will be frayed, not well bound. That will be where innovation can take root and integrate into the broader picture by finding new relationships with the existing threads on the frayed edges. In some cases that results in brand new economic areas. Most of the time, I suspect, it results in advancements in how existing economic activity is conducted or needs are met, leading to the Schumpeterian process of creative destruction.
I wonder if the Euclidean graph type approach really captures the full texture of the processes at work or not. Perhaps it hides as much in the shadows as it sheds light on.
Over at MR, Cowen was just talking about how the great venture capitalists were all generalists, not specialists. This post seems to be in the same vein. I tend to be very sympathetic to the idea that humans will do best if they are not overly specialized. Might be something from my dad; he always told me I should get a trade and a profession, and that way I would always be able to find a good-paying job. That is a type of generalist.
I also agree that it’s not merely being something of a jack of many trades, but having those skills that are complementary with the others, not substitutes (the anti-correlated relationship romeostevensit mentions).
Clearly it’s an example of the whole being greater than the sum of the parts.
(Confession—I’m tired and my eyes hurt so to say I’ve done more than skim a few passages would be a gross overstatement)
A very tangential question here. In these types of games, does anyone use the concept of network effects to understand any of the behaviors and results?
Interesting for me.
I started thinking about the whole seeing the blind spots (after reading this) and the idea of finding black holes. We (well, *they* but I’ll include myself for fun) look at the distortions in the surrounding area to infer rather than directly observe. But in the case of personal blind spots I think we ultimately have the ability to closely examine them once we are able to identify them and have the strength & discipline to confront them.
This thinking made me wonder if a more important view than the shift from “reasonable/plausible” to “true” might be more that of “what am I protecting myself from?” Once we have that correctly identified one might then ask why one is doing so.
Just a side question here about the “a bit outdated (c. 2012)” note. Is that because you think the science/level of knowledge or some other technology related to such studies is changing that quickly?
Both the reference to additional sources and the review were great. Thanks to both you and ricraz.
I also thought the insight that “divination rituals are really pseudo-RNG” very interesting. I think it would make an interesting research agenda for a number of grad students in various fields to explore. I would expand it further into the whole “role of faith in society” type of idea. Perhaps there is more to religions than we realize, regardless of one’s personal views on the existence of any god or gods.
Your link reminds me of an old econ article I read in school years ago (“The Origin of Predictable Behavior,” Ronald Heiner), but it seems to tease out an even more nuanced view of the challenges of knowledge and rules.
If so, I wonder if that might not be traced back to immune systems; breastfeeding allows the baby to develop a strong immune system, I think, given the baby can borrow from mom’s rather than developing the response alone.
Follow-up on my own comments. Skimming through the Myths and Facts chapter, I wonder if such a migration would even be needed. One might only need to consider taking a few weeks off work and visiting friends/family, or a vacation to a strategically uninteresting location in the USA that is also not directly downwind of a primary target.
A few thoughts that seem worth mentioning.
The survival guide is rather dated: 30+ years old, and close to 40 for what was not updated in the rewrite. I wonder whether weather patterns haven’t changed enough to make some of the arguments moot.
Even if NZ is a good place to run, it may or may not be where you want to be. Clearly the people on the various sides pushing the buttons all know this. They will have their national bunkers and protected transportation (anyone see the recent story about the four USA planes designed for operating in such an environment?). As the war works towards its conclusion, one or all sides will start looking at where they want to live after the war is complete. They will then clearly be targeting such locations. Not with nuclear weapons (though those starting to lose the war, and thinking that removing that option from the winning side might offer a better strategic negotiating stance, might....), but clearly the power governing such places will likely change. What will your status be?
Last, in such a case, why would NZ leave their borders open? They may immediately put a block on such free entry. Even if they don’t, just what is that country’s carrying capacity for immigrants? How quickly would NZ devolve into a Hobbesian state-of-nature environment?
Of course, the other big question here is just how one would rationally evaluate the likelihood of such an outcome and so achieve a less wrong result. Clearly you don’t want to uproot yourself and your family, throw away what is likely a pretty good job, your house, and possibly other creature comforts, just out of fear. That seems to be one of those quantity-over-quality approaches. I suppose some do think that way, but I would prefer a shorter, higher-quality life to a longer but low-quality one.
“Bringing in the agency (Ai→Ui)→Ai of both players leads to cycle. This cycle does not make sense unless the agency arrows are lossy in some way, so as to not be able to create a contradiction. ”
I’m definitely missing something here, and may be thinking of this incorrectly. Isn’t a contradiction inherent in cyclic behavior? I’m thinking about things like voting cycles, where preferences are multi-peaked, resulting in shifting majorities.
Is the “lossy” point just saying that in such a cycle we have rules about pairing the alternatives to be voted on, and that once one alternative has lost, it’s out of the set for future votes?
Am I thinking of this the right way (even if putting it in a bit of a different context)?
That is very difficult for me to articulate. If we take the standard econ choice-equilibrium definition of equating marginal utility per dollar, and then toss out the price element, since we’re purely comparing the utility (ignoring the whole subjective-versus-other-forms issue here), we don’t need to normalize on cost (I think).
That implies that my preferences over completely different actions/choices are directly comparable. In other words, it is a choice between differences of degree, not differences in kind.
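The textbook rule I’m referring to can be sketched in a few lines; the utility functions, prices, and budget here are toy assumptions of mine, purely for illustration:

```python
def mu_x(x):  # assumed diminishing marginal utility of good x
    return 1 / (1 + x)

def mu_y(y):  # good y assumed twice as valuable at the margin to start
    return 2 / (1 + y)

px, py = 1.0, 1.0  # with equal (or ignored) prices we compare MU directly

# Spend each unit of a 10-unit budget on whichever good currently has the
# higher marginal utility per dollar.
x = y = 0
for _ in range(10):
    if mu_x(x) / px >= mu_y(y) / py:
        x += 1
    else:
        y += 1
print(x, y)  # at the stopping point, MU per dollar is equalized: 3 7
```

The point being that this calculus only works because every option lands on the same utility scale, which is exactly what hard choices seem to lack.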
However, when I really find myself in a position where I have a hard choice to make, it’s never a problem with some simple mental calculation such as the above; it feels entirely different. The challenge I face is that I’m not making that type of comparison but something more along the lines of choosing between two alternatives that lack a common basis for comparison.
I was thinking a bit about this in a different context a while back. If economic decision theory, at least from a consumer perspective, is all about indifference curves, is that really a decision theory or merely a rule-following approach? The real decision arises in the setting where you are in a position of indifference between multiple alternatives, but economics cannot say anything about that; the answer there is flip a coin/random selection, but is that really a rational thought process for choice?
But, as I said, I’m not entirely sure I think like other people.
I did a search on the first-born-more-intelligent query and got a hit to an article published in late 2016 or early 2017; a newspaper reported on the study in Feb 2017. The hypothesis seemed to be that parents interact with the first child differently than with later children and provide a more mentally stimulating environment for that child.
If so, any bets on when the first lawsuit for compensation by the younger siblings will be filed for a greater share of any inheritance? (semi-joking...)
Not sure if you were already thinking along these lines (nor am I entirely sure this is how my brain works, much less normal brains), but since you were borrowing from economics: how are your preferences balanced internally? Looking at some constrained reward maximization? Decision-making a la marginal rewards? Something else?
Just asking about the birth order here. What is the implication of the finding—why is this seen? Any thoughts?
Like ricraz, I was initially expecting a different post, but I liked what was done.
However, we still have the underlying problem that the replication test performed does not seem to do what it claims. https://www.sciencenews.org/blog/context/debates-whether-science-broken-dont-fit-tweets has some interesting comments, I think. If I understood correctly, the conclusion that a later test produced a different p-value says nothing about the underlying hypothesis; in other words, the hypothesis is not tested, only the data. So unless this is all about running the same data sets... but that suggests other problems.
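A quick toy simulation of the point (my own sketch, not the linked article’s analysis): even when an effect is genuinely real, the p-values produced by repeated studies of the same population vary wildly, so a replication yielding a different p-value tells us little by itself.

```python
import random
import statistics
from math import erf

random.seed(1)

def one_study(n=30, effect=0.5):
    """Sample n points from a normal with a true mean shift and return an
    approximate two-sided p-value via a z approximation (illustrative only,
    not a proper t-test)."""
    sample = [random.gauss(effect, 1.0) for _ in range(n)]
    z = statistics.fmean(sample) / (statistics.stdev(sample) / n ** 0.5)
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / 2 ** 0.5)))

# 1000 "replications" of the same true effect.
ps = [one_study() for _ in range(1000)]
print("smallest p:", min(ps), "largest p:", max(ps))
```

With these assumed parameters the spread covers several orders of magnitude, including many runs above the usual 0.05 threshold, despite every study drawing from the same real effect.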
I suppose it depends on what we mean by safe AIs, but in the back of my mind I think we’ll be safe from an AI deciding to take over the world and humankind (or simply kill us all off) if we manage to build in humor. That won’t be sufficient, and perhaps not necessary either, but I think having it might make the goal of safe AIs easier to accomplish.