I don’t know if these comments will be helpful or even pertinent to the underlying effort of posing and answering these types of problems. I do have a “why care” reaction to both the standard Newcomb’s Paradox/Problem and the above formulation. I think that is because I fail to see how either really relates to anything I have to deal with in my life, so both seem to be “solutions in search of a problem”. That could just be me though....
I do notice, for me at least, a subtle difference in the two settings. Newcomb seems to formulate a problem that is morally neutral. The psychologist seems to be setting up the incentives along the lines of: can I lie well enough to get the $200 and my 10 minutes? Once you take the test, the envelope’s content is set, and waiting or not has no force—and apparently no impact on the experiment’s results from the psychologist’s perspective either.
Is the behavior one adopts as their solution to the problem more about personal ethics and honesty than mere payoffs?
I wonder if the idea of unit testing might fit with your thinking, and perhaps have some useful approaches as well as caveats.
Perhaps also the idea of factions or special interests in political/social choice theories—but here I fear those might be too broad a “unit”.
APOD has a comment on the daily image section and (IIRC) a number of forums for posing questions. Why not ask there?
That reminds me of a story I read about a lambic brewer in Belgium who needed to replace an aging brewery roof. They ended up building a roof over the existing, but dilapidated, one because of the wild yeast that had grown in it over the decades. His concern (legitimate) was that removing the old roof and replacing it would cause the flavor of the brew to change.
One can easily see how such an event 2 or 3 hundred years back might have produced a more mystical explanation, though one based on a very empirical observation.
I think one should ask if life extension has any public good characteristics, such as is argued about things like general education, and if so, to what extent that public good characteristic stands in relation to the private good characteristic. (Could one create a ratio metric for that?)
This will all be interesting to see play out and I do agree that we can improve the election process.
I do wonder just how well it will actually improve the end outcome: better government and better representatives. From my perspective one of the main problems is that representatives are simply not representative of any real majority of the population, nor exposed to any real incentives to pursue what might be called the common/general good over special interests and partisan policies.
Just adding a view. Seems that one might connect the desire to eliminate the doubt and the problem of confirmation bias. I think it highly rational to accept that we do have limited knowledge and so all conclusions, outside some (narrow?) contexts, must be suspect at all times.
Pick any “fact” you claim to know—for instance, that you know how to drive a car—and then start digging into just what you need to really know to make that claim 100% true. Do you actually know all that information, or do you just get by and avoid causing accidents?
So little in our world is independent from everything else so when we start pulling one thread....
A long (long, long) time ago a friend of mine said: Being smart is not about how much information you know but knowing where to get the information you need.
I tend to agree with that general statement.
I also agree that we will remember the things we actually understand much better than those cases where we just memorized a set of rules or other “facts” that have little meaning to us—except when they become those meaningless things we have to use regularly ;-)
I think this has some important aspects related to how we think about our personal optimization or efficiencies regarding knowledge and information management—what we “know” (stored in our head) and what we have ready access to and can retrieve without much search effort that is in that “off-line” memory (books, notes, computers, more generalized things like operational procedures...)
I do understand this changes the focus of the OP and I am not rejecting that view—we do remember the things that we really understand, and those things just seem “easy” and tend to “just make sense” without the need to (consciously) rederive the rule(s).
But I do wonder if it is really inefficient to study something only until you have a beginning understanding, even if you know you don’t have an interest or need to fully understand it, as long as you learned enough to know where it applies and created a good “index” for quickly locating that information should you need to actually use that “knowledge” in the future.
Regarding economic progress:
Solving the coordination problem at scale seems related to my musing (though not new, as there is a large literature) about firms, and particularly large corporations. Many big corporations seem more suitable to modeling as markets themselves rather than as market participants. That seems like it would have significant implications for both standard economic modeling and policy analysis. It kind of goes back to Coase’s old article, The Nature of the Firm.
Given the availability of technology, and how that technology should (and has) reduced costs, why are more developing countries still “developing”? How much of that might be driven more by culture than by cost, access to trade partners, investments, financing or a number of other standard economic explanations?
What we don’t understand looks like random noise. Perfect encryption should also look exactly like random noise. Is that perhaps why the universe seems so empty of other intelligent life? Clearly there are other explanations for why we might not be able to identify such signals (e.g., syntax, grammar and encoding so alien we are unable to see a pattern, or perhaps signal pollution and interference from all the electromagnetic sources in the universe), but how could we differentiate?
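A minimal sketch of the point that ciphertext and noise are statistically alike (Python; the `byte_entropy` helper is my own hypothetical name, not from any library): structured data sits well below the 8-bits-per-byte ceiling, while uniformly random bytes, a stand-in for good ciphertext, sit essentially at it, so an entropy test alone cannot tell them apart.

```python
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Empirical Shannon entropy of a byte string, in bits per byte (max 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# English-like text is highly structured, so its entropy is far below 8.
structured = b"the quick brown fox jumps over the lazy dog " * 200
# Uniform random bytes (a stand-in for well-encrypted data) are near 8.
random_like = os.urandom(len(structured))

print(f"text:   {byte_entropy(structured):.2f} bits/byte")
print(f"random: {byte_entropy(random_like):.2f} bits/byte")
```

The asymmetry is the interesting part: this test can prove a signal is *not* noise (low entropy implies structure), but a near-8.0 reading is consistent with both noise and alien ciphertext.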
Can you give other conceptions of “impact” that people have proposed, and compare/contrast them with “How does this change my ability to get what I want?”
The next post will cover this.
(no way to double quote it seems...maybe nested BBCode?)
Anyhow, looking forward to that, as I was struggling a bit with how the claim that something cannot be a big deal if it doesn’t impact my getting what I want avoids being tautological.
A slightly different thought that might be easier to coordinate. Have the button hide all the comments of a specific user on LW—this adds the variance that the thread is not merely bilateral. We could also add something that might obscure the actor, though not entirely hide their action.
Additionally, we could have the button delete a selected subset of comments/posts, allowing a scenario where one needs to decide if an all-out attack was launched or something else is going on. That seems to be what Petrov faced. I would also add something that produced an almost identical signal even if no one pushed their button.
Though, now it’s becoming more like a war game on LW than simply noting a (at least I think) positive event in history. Still, we might make it a good experiment and see what can be learned.
Maybe I’m in a dark mindset here....
Seems like today, even with (due to?) the advances in weapons and other technology, that MAD assumption may no longer be believed. I recall Putin claiming Russia would in fact survive an all-out war with the USA. I wonder how much that view might change the way the game plays out.
On a tangent here, part of the concern is the proliferation of the technology. What would a Guarantee Assured Destruction (GAD) policy be for any country/group seeking such technology? Is that a better world than what we have now?
I’m not entirely sure we can ever have a correct choice in foresight.
With regard to Petrov, he did seem to make a good and reasoned call: the US launching a first strike with only 5 missiles just does not make much sense without some very serious assumptions that don’t seem merited.
I do like the observation that Petrov was being just as unilateralist as what is feared in this thread.
Do we want to lionize such behavior? Perhaps. Your argument seems to lend itself to the lens of an AI problem—and Petrov’s behavior is then a control on that AI.
Certainly good to hear. I almost accidentally pressed it earlier! No codes so good fail-safe for me.
I was not aware of this story and am happy to hear it. While I think the day of celebration and remembrance is worth having, I wonder about the exercise with the button.
First, just not pushing the button and bringing the page down for a day seems not to fit the problem. The button should be shutting down someone else’s site, with the realization that they will have some knowledge of that coming and have a button that shuts your page down. Perhaps next year the game could include other sites, and particularly sites whose members do not really see eye-to-eye on things.
Second, it doesn’t really tell others much about avoiding such situations. Reading Eliezer’s post, the critical insight for me seems to be that of remaining calm and taking the time available to think a bit rather than merely reacting and following the instructions of a mindless process. That Petrov realized launching only 5 missiles made no sense, and so concluded there was a system error/false positive, is the critical point here.
I wonder if this is merely putting pressure on yourself to reach some goal rather than just an interest in learning on your own.
I might suggest reflecting on why you are interested in, or perhaps what you are interested in, learning outside your coursework.
As something of a side note, it is probably safe to say you are already independently learning outside your coursework just by living your daily life, but perhaps you’re not consciously aware of it.
With regard to distortions, one needs to look at supply (and possibly demand) elasticities. It is possible that a small tax could produce a larger welfare loss than that of a large tax.
It might also be good to look at where the tax initially falls—I have not thought this out yet, but is there a multiplier effect potential here?
Another view might also be that of costs—why is the cost of governance any more distortionary than the presence of costs anywhere else in the input markets? Maybe the approach here should be to look at potential real economic profits in the input cost prices (which would include cost of government) and make those incremental costs the distortionary element.
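The elasticity point above can be made concrete with a small sketch (Python; linear demand and supply are my simplifying assumption, and the `deadweight_loss` helper is a hypothetical name). The standard Harberger-triangle result is that welfare loss grows with the square of the tax but also with how price-responsive both sides of the market are, so a small tax in a very elastic market can indeed waste more surplus than a large tax in a very inelastic one.

```python
def deadweight_loss(tax: float, demand_slope: float, supply_slope: float) -> float:
    """Harberger-triangle DWL for linear demand Q = a - b*P and supply Q = c + d*P.

    A per-unit tax tau shrinks the traded quantity by tau * b*d / (b + d),
    so DWL = 1/2 * tau^2 * b*d / (b + d): quadratic in the tax rate, and
    increasing in the quantity-responsiveness (slopes) of both curves.
    """
    b, d = demand_slope, supply_slope
    return 0.5 * tax**2 * (b * d) / (b + d)

# Small tax in a very elastic market vs. large tax in a very inelastic one.
small_tax_elastic = deadweight_loss(tax=1.0, demand_slope=10.0, supply_slope=10.0)
large_tax_inelastic = deadweight_loss(tax=5.0, demand_slope=0.25, supply_slope=0.25)

print(small_tax_elastic, large_tax_inelastic)
```

With these illustrative numbers the 1-unit tax destroys 2.5 units of surplus while the 5-unit tax destroys only about 1.56, which is the sense in which "small tax, bigger distortion" can happen.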
This post is about seeing constraints in planning/agents/environments and how to wield those constraints effectively to achieve your goals.
There is a huge literature on this type of agenda setting. I think for the most part one person achieving their goals will depend a lot on how well others, with competing and possibly incompatible goals, recognize the situation and formulate their own strategies.
Agree with most of what is said. I would also point to educational alternatives like the Khan Academy.
Regarding “If the systems are as corrupt as you think they are, they should destroy themselves on their own in any case”: I am wondering if that is saying we will not see stable systems that are inherently corrupt (no stable equilibrium with corruption), or that “that level” is not stable—but I didn’t see anything that suggested some excessively large level of corruption.
I think I would be more concerned about corrupt practices driving out possible innovations and perhaps limiting growth (though here I am not sure, as I see China’s economy and polity as largely corrupt, but they seem to be growing fine and are, I would suggest, as stable as the USA or EU).
Interesting article on Quanta. https://www.quantamagazine.org/new-hybrid-species-remix-old-genes-creatively-20190910/