I’ve been thinking about this a lot lately, so it was interesting to stumble across this.
Science seems to have built itself a huge reductionism bias, which honestly has served it very well at solving a great many problems; it's a fantastic and useful tool.
The problem is that (1) this has left huge gaps, (2) the sciences in general are blind to their own reductionist biases, and (3) for many people reductionist science is seen as the only viable tool.
Combined, these compounding effects leave a huge opportunity for independent researchers like us to address the gaping holes that inevitably get left all over the sciences.
Yes, writing should be optimal, but also the algorithm is just not very good (mostly).
Result: the internet has utterly failed at educating us (if you rely on its algos).
Which is sad; it's such an easily solvable problem. If people with the right skillsets were building the algos, with the right motivations, things could be so much better. Just another type of enshittification, I guess, except this time the ultimate driver seems to be that organisations struggle with complex, soft problems that have medium-term horizons and difficult measurement, even when the impact is potentially business-destroying …