Just a passing thought here. Is probability really the correct term? I wonder if what we do in these types of cases is more an assessment of our confidence in our ability to extrapolate from past experience into new, and often completely different, situations.
If so, that is really not a probability about the event we’re thinking about, though perhaps it could be seen as one about our ability to make “wild” guesses (and yes, that is hyperbole) about stuff we don’t really know anything about. Even there I’m not sure probability is the correct term.
With regard to supernatural things, that tends to be something of a hot button for a lot of people, I think. Perhaps a better casting would be things we have some faith in, which tend to be things we must infer rather than things for which we have real evidence providing proof. I think these change over time; we’ve had faith in a number of theories that were later proven, electrons for example, or other subatomic particles.
But then what about dark matter and energy? The models seem to say we need them, but as yet we cannot find them. So we have faith in the model and look to prove that faith was justified by finding the dark stuff. But one might ask why we have that faith rather than being skeptical of the model, even while acknowledging it has proven of value and helped expand knowledge. I think we can have a better discussion about faith in this context (perhaps) than if we get into religion and supernatural subjects (though arguably, in my view, we should treat them the same as the faith we have in other models).
First, to be clear, I have not closely read the whole series or even this post completely (just feeling sick today, so not focused). However, I did have a thought I wanted to get out. It may have been well addressed already.
It seems that we are perhaps missing an element here. Is it possible that, even if one is working within a moral maze at the level of the entire corporate structure, the various levels within it don’t really impose the same problems? I am thinking of this as a setting where we see the whole as one large pond. But what if, rather than one large pond, what we have is actually a collection of connected smaller ponds, and the maze really only applies in some of them, and at the collection-of-ponds level?
Is there a potential fallacy-of-composition error here? The whole is a moral maze, but many of the ponds it comprises lack that character?
If so then it may well be possible to escape the maze without having to quit the job.
“states pursuing power in zero-sum power races ultimately created positive sum economic spillovers from peace and innovation.”
Which seems a lot like how one might characterize basic research in many ways: it seems a bit wasteful and initially doesn’t accomplish much that is directly useful to anyone on a practical level. Ultimately, however, it tends to plant a lot of seeds, or open a lot of new development branches that do.
So are arms races, particularly those that don’t end in an armed conflict, something we can view as just another form of basic research? Or is the arms-race side just one of the branches that stemmed from the basic research, in which case maybe we shouldn’t give the arms race, or the zero-sum power game, much credit for the spin-offs?
I’m also not quite sure what to make of “if the ‘good guys’ are going to win (and remain good guys).” That seems to be too subjective to be much help to me.
I wonder if some form of garnishment agreement would be better. If you still owe on your student loan, then as a job applicant you would be required to notify the employer of the outstanding debt, and the employer would agree to make the garnishment withholding.
We might want to set some moratorium on interest: put it on hold at some point (without any hidden accrual) and then, once the person is employed, give them a year or two before the interest kicks in again.
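As a rough sketch of what I have in mind (all the numbers here, the rates, salary, and grace period, are illustrative assumptions, not part of the proposal itself):

```python
# Minimal sketch of the garnishment-plus-moratorium idea. All figures
# are illustrative assumptions, not part of the original proposal.

def simulate_repayment(balance, annual_rate, monthly_salary,
                       garnish_rate, grace_months):
    """Garnish a fixed share of salary each month; interest is paused
    (no hidden accrual) until grace_months after employment starts.
    Assumes the garnishment exceeds the monthly interest accrued."""
    month = 0
    while balance > 0:
        month += 1
        if month > grace_months:                  # interest kicks back in
            balance += balance * annual_rate / 12
        payment = min(balance, monthly_salary * garnish_rate)
        balance -= payment
    return month

months = simulate_repayment(balance=30_000, annual_rate=0.05,
                            monthly_salary=4_000, garnish_rate=0.10,
                            grace_months=24)
print(f"Loan cleared in {months} months")
```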
Thanks. Interesting, though not too surprising in some ways.
This reflects both a couple of comments I’ve made regarding rules versus analyzing/optimizing, as well as a very unclear thought that has been bouncing around in my head for a little while now. The thought is about the tendency of discussion here to be very formal and model-oriented, as if we really can optimize in our daily lives the way we can in the theoretical world of our minds, equations, and probabilities. I’m not saying that approach is something to avoid, only that it does tend to set the focus on a level of precision that probably does not obtain for most people in a majority of situations. (The recent post about acting on intuitions rather than the calculations, then tracking those over a year or two to see what you learn, fits here too.)
This rule approach clearly takes that case-by-case decision analysis away, if one buys into following the rule. However, we all know that in some cases you can get away with violating the rule (and perhaps even should violate it, or revise it, as has been suggested in other threads/comments as I recall). At the same time it can be difficult, as you mention, to know just which cases are good candidates for violating the general rule. I would add that it might not be enough to keep track of when we violate the rule and what the results were; the problem hangs on just how representative the individual cases are, and how well we can tell the differences in any truly complex case.
This seems all very paradoxical to me, and generally I can deal with paradox and a contradictory world (or perhaps just with my own behavior when I try viewing it from an outside perspective). Still, I find myself wanting to wrap my own thinking around the situation a bit better as I read various approaches or thoughts by people here.
In this analysis, is there any assumption about information states? Is the idea that the forecasts are all based on public information everyone has available? Or could that explain part of the difference in performance, in which case we would need to look at a subset with perhaps better access to the information and see how they perform against one another, or look at various types of informational asymmetries or institutional factors related to the information.
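To make concrete the kind of stratified comparison I mean (a hypothetical sketch; the groups, data, and field names are all invented for illustration):

```python
# Score forecasters separately by information access and compare.
# Everything here is invented to illustrate the stratification idea.
from collections import defaultdict

forecasts = [
    # (information group, predicted probability, outcome 0/1)
    ("public_info_only", 0.7, 1),
    ("public_info_only", 0.4, 0),
    ("better_access",    0.9, 1),
    ("better_access",    0.2, 0),
]

brier = defaultdict(lambda: [0.0, 0])   # group -> [sum of squared error, n]
for group, p, outcome in forecasts:
    brier[group][0] += (p - outcome) ** 2
    brier[group][1] += 1

for group, (total, n) in brier.items():
    print(f"{group}: mean Brier score = {total / n:.3f}")  # lower is better
```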
You don’t seem to recognize an attempt at humor. I take it you never read The Hitchhiker’s Guide to the Galaxy.
Frailty seems a questionable cause in this context. Am I interpreting it incorrectly, perhaps?
I would think frailty while young might be a symptom of something that leads to death, but how do we go from “sturdy,” and so healthy and living well (in a functional sense), to old and frail and more likely to die?
The other two seem more like lottery-type cases: yes, we all have some probability of contracting an infection or virus that our immune systems just cannot deal with, so we die. We have some probability of cancer destroying critical systems. But that doesn’t quite explain the whole aging story to me. Why the slow path to what we see physically rather than a sudden break? Or is this inference about how we should observe things missing something you have perhaps bundled into the three causes you mention above, which I would understand if I were more knowledgeable in this area?
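To try to make my own question concrete: a pure lottery cause would look like a roughly constant hazard rate, while aging looks more like a hazard that grows with age (the Gompertz pattern). A quick sketch, with made-up parameters:

```python
import math

# Constant hazard ("lottery" causes) vs. age-increasing hazard
# (Gompertz-style aging). Parameters are made up for illustration.

def survival_constant(t, hazard=0.01):
    """P(still alive at age t) with a flat yearly hazard."""
    return math.exp(-hazard * t)

def survival_gompertz(t, a=0.0002, b=0.085):
    """P(still alive at age t) when the hazard is a * exp(b * t)."""
    return math.exp(-(a / b) * (math.exp(b * t) - 1))

for age in (20, 40, 60, 80):
    print(f"age {age}: lottery {survival_constant(age):.2f}, "
          f"aging {survival_gompertz(age):.2f}")
```

The lottery hazard thins the population steadily from the start, while the aging-style hazard barely matters when young and then dominates late, which is the slow-path-then-decline shape I was asking about.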
Here is one link: https://www.sciencedaily.com/releases/2019/10/191031174650.htm . I was not able to find the one I was actually reading earlier (and apparently my poor sleep last night was not sufficient and I cannot remember how I found it....) but the link here seems to be referencing the study I was reading about.
BTW, when I mentioned “external” I was not thinking external to the organism (e.g., me) but rather external to the cells (or at least many of them) but within the confines of our body or organ.
[Just found it with a different search. https://www.sciencenews.org/article/sleep-may-trigger-rhythmic-power-washing-brain ]
I wonder if perhaps something more environmental might also be playing a part. The protein toxins associated with Alzheimer’s seem to build up over time, and the effectiveness of some of the processes that work to clean them up may be negatively impacted by their presence. It seems the sleep cycle (non-REM) results in a reverse-flow type flushing of the brain that helps clear this out. But the buildup itself seems related to not getting that needed sleep.
So what about internal and external to the cells themselves? Could some element or combination of things build up that we’re just not looking at yet, something we have not seen as connected to any of the processes?
“This might be tenable if the foundations of physics (general relativity and quantum theory) were plausibly true. But general relativity and quantum theory contradict each other. They cannot both be correct. Therefore at least half of physics is wrong.”
I also don’t think the logic of the argument quite holds.
My take is we can interpret the situation better via a clock metaphor, specifically the old saying about running and stopped watches. The running one that keeps bad time is almost always wrong, while the stopped (broken/not running) one is correct twice a day (assuming a 12-hour clock).
I don’t disagree with the general sentiment here, but I think a better way to approach it might be to recognize that essentially all of physics is incomplete. One of the ways this manifests in the science is the disagreement between the quantum and macro-scale theories.
Not really a game-theoretic concept for me. The thought largely stems from an article I read over 30 years ago, “The Origins of Predictable Behavior.” As I recall, it makes a largely Bayesian argument for how humans evolved to follow rules. One aspect (assuming I am recalling correctly) was that the rule takes the entire individual calculation out of the picture: we just follow the rule and don’t make any effort to optimize under certain conditions.
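A toy illustration of that point as I understand it (my own construction, not from the article): when case-by-case judgment is noisy enough, a fixed rule can outperform attempted optimization.

```python
import random

# Toy model (my construction, not from the article): each case has a true
# payoff for acting vs. not acting. The "optimizer" estimates the payoff
# with noise and acts when the estimate looks positive; the "rule-follower"
# always declines. With enough estimation noise, the simple rule wins.
random.seed(0)

def average_payoffs(noise_sd, trials=100_000):
    rule_total = optimizer_total = 0.0
    for _ in range(trials):
        true_payoff = random.gauss(-1.0, 1.0)    # acting is usually bad
        estimate = true_payoff + random.gauss(0.0, noise_sd)
        if estimate > 0:                         # optimizer acts on estimate
            optimizer_total += true_payoff
        # the rule-follower never acts, so earns 0 in every case
    return rule_total / trials, optimizer_total / trials

for sd in (0.5, 2.0, 5.0):
    rule, opt = average_payoffs(sd)
    print(f"noise sd={sd}: rule={rule:.3f}, optimizer={opt:.3f}")
```

With small noise the optimizer picks off the genuinely good cases; as the noise grows, its “optimization” becomes worse than simply following the rule.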
I don’t think this is really an alternative approach—perhaps a complementary aspect or partial element in the bigger picture.
I’ve tried to add a bit more but have deleted and retried now about 10 times so I think I will stop here and just think, and reread, a good bit more.
I just wonder if it might be worth distinguishing between personal and social modes of this behavior. Not sure here, though. Initially my thought on your first two examples was that they are not really normalized deviations but simply poor discipline, and I somewhat still view them as that. The point about allowing some slack, however, is important to keep in mind here too. (Plus there are other aspects here, like whether the habit to be formed is really something one wants, or something one just thinks one should want because it’s some general consensus or it works for other people.)
Much of your view here does seem to apply to the environment in which I work. I always find myself oscillating between thinking I need to help enforce the stated rule/goal/behavior and realizing it is not to be taken at face value and should be interpreted in a slightly different way (which would be a deviation from the ostensible policy). I find it very difficult, though; it creates a lot of frustration and a sense of cognitive dissonance for me.
Thanks. Lots to think about there; it has been helpful, and I think as I digest and nibble more it will provide greater understanding.
“For example, I think I’d personally find it quite plausible to take the results of (possibly normalised) MEC/MEC-E quite seriously, but to still think there’s a substantial chance of unknown unknowns, making me want to combine those results with something like a ‘Try to avoid extremely counter-common-sense actions or extremely irreversible consequences’ model.”
That resonates well for me, particularly the element of rule-based versus maximizing/optimizing approaches. I’m not quite sure how or where that fits; I suspect getting close to fully working it through is a lifetime effort, and that opportunities to effectively subdivide areas (which then require all those “how do we classify...” questions) might be good.
With regard to rules, I think there is also something of an uncertainty-reducing role, in that rules increase the predictability of external actions. This is not a well-thought-out idea for me, but it seems correct, at least in a number of possible situations.
Look forward to reading more.
For my own benefit could you clarify your definition of uncertainty here?
I’ve always used a distinction between risk and uncertainty. Risk is a setting where one reasonably can assign/know the probabilities of the possible outcomes; uncertainty is a setting where one cannot. (A fair die roll is risk; whether a wholly new technology succeeds is closer to uncertainty.)
“As of 1/1/30, customers will not make purchases by giving each merchant full access to a non-transaction-specific numeric string (i.e. credit cards as they are today): 70%”
That certainly seems a very reasonable prediction, and perhaps too conservative. In many ways one might say that current chip-based card transactions (which would also include all the mobile payments like Apple/Samsung/Google Pay) have already departed from that non-transaction-specific model. Similarly, online purchases that use token technologies are often linked to the specific merchant.
However, there might be two ways to interpret that prediction: 1) the payment mechanisms used for non-cash transactions will move toward transaction-specific identifiers, and cash will not be used, or will be used significantly less than today; or 2) we might see some form of transaction-specific “money” (blockchain currencies seem to fit, though I don’t think they are the future) and more transactions conducted as “cash” rather than through these payment-card mechanisms.
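On the first interpretation, the general shape of per-transaction tokenization is simple enough to sketch (a simplified illustration of the idea only, not the actual EMV or network tokenization protocol):

```python
import hmac, hashlib, secrets

# Simplified illustration of per-transaction tokenization (not the real
# EMV/network tokenization protocol): the merchant receives a one-time
# token instead of the reusable card number.

ISSUER_KEY = secrets.token_bytes(32)   # held by the issuer/network, never the merchant

def one_time_token(card_number: str) -> tuple[str, str]:
    """Return (nonce, token); the token is useless for any other purchase."""
    nonce = secrets.token_hex(8)       # unique per transaction
    digest = hmac.new(ISSUER_KEY, f"{card_number}:{nonce}".encode(),
                      hashlib.sha256).hexdigest()
    return nonce, digest[:16]

# The merchant stores only (nonce, token); replaying the token against a
# different nonce fails verification on the issuer side.
nonce, token = one_time_token("4111111111111111")
print(nonce, token)
```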
15? More humility, mostly, but I should probably have limited that to certain fields, such as cosmology, rather than painting with a really large brush.
As for the assessed probabilities, I can only hope you are correct.
As for the burdensome details, I’m not sure that applies (but thanks for the link; I will read it more fully and reconsider). I have reformatted the item, whether or not that changes its being a burdensome-details error....
Are these buckets based on the incentivizing of humans by either punishment or reward?
Perhaps, though not intentionally.
“Then figure out how to build that. Build an AI that just intrinsically wants to help humanity, not one constantly trying and failing to escape your chains or grasp your prizes.”
I would put “intrinsically wants to help” in bucket 2 (or perhaps say that is bucket 2), while “chains” would be bucket 1. But those are very general concepts and will rely on various mechanisms or implementations.
Your comment seems to suggest that bucket 1 is useless and full of holes and no one is pursuing that path. Is that the case?
Another possible implication might be incentives toward defining the organizational mission in ways that effectively make the problematic truths “out of mission,” and so purely private views. Then the truth-sayer will only be speaking as an individual, which could perhaps get them moved out of membership if their actions were to disrupt the organizational mission, or simply remove them from any protections that group membership might otherwise have provided.