The text seems pretty clear on both these questions.
But the real problem with the “aid effectiveness” craze is that it narrows our focus down to micro-interventions at a local level that yield results that can be observed in the short term. At first glance this approach might seem reasonable and even beguiling. But it tends to ignore the broader macroeconomic, political and institutional drivers of impoverishment and underdevelopment. Aid projects might yield satisfying micro-results, but they generally do little to change the systems that produce the problems in the first place. What we need instead is to tackle the real root causes of poverty, inequality and climate change.
...
In all these areas, there is still an enormous amount to be done. If we are concerned about effectiveness, then instead of assessing the short-term impacts of micro-projects, we should evaluate whole public policies. In this respect, there is a wealth of underused data provided by decades of household surveys by national statistical offices. Combined with satellite data, recently made public, they can now be used for detailed analysis, capable of providing clear information on the public policies that have been most successful. In the face of the sheer scale of the overlapping crises we face, we need systems-level thinking.
The problems with choosing interventions based only on how well they are measured to perform are similar to the problems faced by model-free reinforcement learning algorithms: the need to collect lots of high-quality data, the costs of exploration, local maxima that a better model could avoid, Goodhart’s law, lack of human understanding of the underlying phenomena, difficulty learning long-term dependencies, and the use of CDT or EDT as a decision theory. This is no coincidence: the process of choosing interventions based only on their measured results literally is a model-free reinforcement learning algorithm.
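To make the analogy concrete, here is a minimal sketch of that selection process as an epsilon-greedy bandit. Everything in it is hypothetical (the intervention effects, the `measured` proxy, the bias term): the point is only that a learner which ranks options by their measured averages, with no causal model, can lock onto a locally-best arm when the measurement is Goodharted.

```python
import random

def epsilon_greedy_bandit(n_arms, measured, steps=10_000, epsilon=0.1, seed=0):
    """Model-free selection: favor the arm (intervention) whose *measured*
    running-average reward is highest, exploring at random with prob. epsilon.
    No model of why rewards arise is ever built or consulted."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for _ in range(steps):
        if rng.random() < epsilon:
            i = rng.randrange(n_arms)                    # explore (costly)
        else:
            i = max(range(n_arms), key=lambda j: means[j])  # exploit measurements
        r = measured(i, rng)
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]           # incremental average
    return counts, means

# Hypothetical setup: intervention 2 has the largest true effect (3.0), but its
# proxy metric systematically under-reports it (Goodhart-style measurement gap),
# so the model-free learner settles on intervention 0 instead.
true_effects = [1.0, 0.8, 3.0]

def measured(i, rng):
    bias = -2.5 if i == 2 else 0.0   # the proxy misses most of arm 2's effect
    return true_effects[i] + bias + rng.gauss(0, 0.5)

counts, means = epsilon_greedy_bandit(3, measured)
best_by_measurement = max(range(3), key=lambda j: means[j])
```

Under these assumed numbers the learner concentrates its choices on intervention 0, even though intervention 2 is best in reality; a model of *why* the proxy diverges from the true effect would avoid that trap, which is exactly the "local maxima that a better model could avoid" failure mode.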
One thing the article unfortunately fails to acknowledge is that observational data is often insufficient to infer causality, and RCTs can help here.