It is true that most pharma companies concentrate on indications that supply returns large enough to offset the cost of development. The FDA does have a mechanism for Orphan Drug approval for rare diseases, where the registration requirements are significantly lowered. According to this site, 41 orphan drug approvals were made in 2023. Whether this mechanism is good enough to allow the promotion of rare-disease work within the larger pharmaceutical industry is a good question. I wonder how many of these drugs, or their precursors, originated in academic labs and were then spun out to a start-up or sold on?
Two things happen in the pharmaceutical industry today despite the FDA.
Many drug candidates (compounds with IND status sanctioned by the FDA) are pushed into clinical investigation prematurely by venture-capital-funded biotechs; these are candidates that more established and careful pharma companies would stay away from, and they have a high rate of failure in the clinic. This is not fraud, by the way; it is usually a combination of hubris, inexperience, and a response to the necessity of rapid returns.
Marketing wins over clinical efficacy, unless the difference is large. Tagamet was the first drug for stomach ulcers, released in the late ’70s. It was rapidly overtaken by Zantac in the ’80s through aggressive marketing, despite minimal additional clinical benefit. Today there is a large industry of medical writers, sponsored by the pharmaceutical industry, whose job it is to present and summarise the clinical findings on a new drug in the most favourable way possible without straying into actual falsehood.
The scientists working at the sharp end of drug discovery, who fervently believe that what they do benefits mankind (this is, I believe, a gratifyingly large proportion of them), generally respect the job the FDA do, despite the hoops they force us to go through. Without the FDA keeping us honest, the medicines market would be swimming with heavily marketed but inadequately tested products of dubious medicinal value. Investors would be less choosy about following respected, well-thought-out science when placing their money. True innovation would actually be stifled, because true innovation in drug discovery only shows its value once you’ve done the hard (and expensive) yards to prove medical benefit over existing treatments. Honest and well-enforced regulation forces us to do the hard yards and take no shortcuts.
In 2023, 55 new drugs were approved by the FDA, hardly a sign that innovation is slacking. Without regulation the figure might be ten times higher, but clinicians would be left swimming in a morass of claims and counter-claims, without good guidance (currently provided, in general, by the FDA) on which treatments should be applied in which situation.
Poorly regulated health-orientated companies selling products that have little or no value? Seems unlikely… Oh wait, what about Theranos?
A thought-provoking post. Regarding peer-reviewed science, I can offer the perspective that anonymous peer review is quite often not nice at all. But, having said that, unless a paper is extremely poor, adversarial reviews are rarely needed. A good, critical, constructive review can point out severe problems without raising the hackles of the author(s) unnecessarily, and is more likely to get them dealt with properly than an overly adversarial review. This works so long as the process is private, the reviewer is truly anonymous, and the reviewer has the power to prevent bad work being published, even if it comes from a respected figure in the field. Of these three criteria it is the last that I’d have most doubts about, even in well-edited journals.
I’m not claiming this view to be particularly well informed, but it seems a reasonable hypothesis that the industrial revolution required the development, dispersal and application of new methods of applied mathematics. For this to happen there needed to be an easy-to-use number system with a zero and a decimal point. Use of calculus would seem to be an almost essential mathematical aid as well. Last but not least, there needed to be a sizeable collaborative, communicative and practically minded scientific community who could discuss, criticise and disseminate applied mathematical ideas and apply them in physical experiments. All three of these were extant in Britain in the late 17th century, the last being exemplified by the Royal Society. These, combined with the geologically bestowed gifts of coal and iron ore, set Britain up to be in the best position to initiate the Industrial Revolution.
Now, can a proper historian of science critique this and show how this view is incorrect?
Anecdotal, but in the UK, in 1986, as a newly graduated PhD I bought a three-bedroom house for less than four times my salary. At present a similar house in a similar location will cost roughly ten times a starting PhD salary. House ownership for most young people in the UK is becoming a distant and ever-delayed dream.
“Design is much more powerful than evolution since individually useless parts can be developed to create a much more effective whole. Evolution can’t flip the retina or reroute the recurrent laryngeal nerve even though those would be easy changes a human engineer could make.”
But directed evolution of a polymeric macromolecule (e.g. repurposing an existing enzyme to process a new substrate) is, practically speaking, so much easier than designing and making a bespoke macromolecule to do the same job. Synthesis and testing of many evolutionary candidates is quick and easy, so many design/make/test cycles can be run quickly. This is what is happening at the forefront of the artificial enzyme field.
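To make the design/make/test loop concrete, here is a purely illustrative Python sketch of the diversify/screen/select cycle that directed evolution runs. It is a toy, not a real protocol: the string-matching score function stands in for an experimental assay, the mutate step stands in for error-prone PCR, and all names and numbers are hypothetical.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def score(sequence, target):
    """Toy fitness: fraction of positions matching a hypothetical 'ideal' sequence.
    In a real campaign this would be an experimental assay against the new substrate."""
    return sum(a == b for a, b in zip(sequence, target)) / len(target)

def mutate(sequence, n_mutations=1):
    """Introduce random point mutations; a stand-in for error-prone PCR."""
    seq = list(sequence)
    for _ in range(n_mutations):
        pos = random.randrange(len(seq))
        seq[pos] = random.choice(AMINO_ACIDS)
    return "".join(seq)

def directed_evolution(parent, target, rounds=20, library_size=100, keep=5):
    """Repeat the design/make/test cycle: diversify, screen, select the best."""
    pool = [parent]
    for _ in range(rounds):
        library = [mutate(p, 2) for p in pool for _ in range(library_size // len(pool))]
        library.sort(key=lambda s: score(s, target), reverse=True)
        pool = library[:keep]
    return pool[0], score(pool[0], target)

random.seed(0)
target = "".join(random.choice(AMINO_ACIDS) for _ in range(30))  # hypothetical 'ideal' enzyme
parent = mutate(target, 15)                                      # existing enzyme, a poor fit for the new job
best, fitness = directed_evolution(parent, target)
print(f"fitness after evolution: {fitness:.2f}")
```

The point the sketch tries to capture is that each cycle only needs a way to generate variants and a way to rank them; no structural design insight is required, which is exactly why the approach is so practical.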
So my personal viewpoint (and I could be proved wrong) is that Bing hasn’t the capability to suffer in any meaningful way, but is capable (though not necessarily sentiently capable) of manipulating us into thinking it is suffering.
Whilst it may be that Bing cannot suffer in the human sense, it doesn’t seem obvious to me that more advanced AIs, which are still no more than neural nets, cannot suffer in a way analogous to humans. No matter what the physiological cause of human suffering, it surely has to translate into a pattern of nerve impulses around an architecture of neurons that has most likely been purposed to give rise to the unpleasant sensation of suffering. That architecture of neurons presumably arose for good evolutionary reasons. The point is that there is no reason an analogous architecture could not be created within an AI, and could then cause suffering similar to human suffering when presented with an appropriate stimulus. The open question is whether such an architecture could arise incidentally, or whether it has to be hardwired in by design. We don’t know enough to answer that, but my money is on the latter.
I think this is a very good point. Evolution has given humans the brain plasticity to create the brain connectivity by which a predisposition for morality can be turned into a fully fledged sense of morality. There is, for sure, likely some basic structure in the brain that predisposes us to develop morality, but I’d be of the view that the crucial basic genes controlling this structure are, firstly, present in primates and at least some other mammals, and, secondly, that the mutations in these genes required to generate the morally inclined human brain are far fewer than need be represented by 7.5 MB of information.
One thing both the genome and evolution have taught us is that huge complexity of function and purpose can be generated by a relatively small amount of seed information.
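As a loose analogy only (a toy illustration in Python, nothing biological), consider an elementary cellular automaton whose entire “seed information” is an eight-bit rule table, yet which, started from a single live cell, generates an intricate, ever-growing pattern.

```python
RULE = 110   # the complete "seed": eight bits specifying the update rule

def step(cells):
    """New state of each cell is looked up from its three-cell neighbourhood."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 64
cells[32] = 1                     # a single live cell as the starting state
for _ in range(32):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Run it and the printed rows quickly become far richer than anything you would guess from eight bits of rule; the genome-to-brain case is vastly more elaborate, but the principle of complexity from a small seed is the same.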
A personal anecdote. Many, many moons ago I started my research career at a large multinational organisation in a profitable, steady business. I enjoyed the job, the perks were nice, I did the work and did well in the system. Some years later my group were asked to take a training course run by an external organisation. We were set a scenario: “Imagine your company only has enough money for six months. What are you going to do about it?” We, cosseted in our big-company mindset, thought the question hilarious and ludicrous.
Fast forward a number of years: the company closed our site down and I went off and joined a start-up. Very soon we all found ourselves in exactly the scenario depicted in the training exercise. We managed to survive. I’ve worked in small or smallish organisations ever since. There have been ups and downs, but on the whole I wouldn’t have changed anything.
This is perhaps slightly tangential, though likely consequential to the Middle Manager Hell the OP describes. The big-company environment made it easy for us to be complacent and comfortable, and hard for us to follow up the high-risk, high-profit ideas that might have made a big difference to the bottom line.
This was a long while ago and since then at least some big companies have tried various initiatives to change this kind of mindset. So perhaps things have changed in some large multinationals. Can anyone else comment?
I’m afraid you’ll have to do more to convince me of the argument that Lavoisierian theory held up the development of chemistry for decades by denying the role of energy. Can you provide some evidence? Until the discovery of the atomic model, chemistry by necessity had to be an empirical science, where practitioners discovered phenomena, linked them together, drew parallels, and progressed in that manner. Great progress was made without a deep underlying theory of how chemistry worked. It was well known that some reactions gave out heat and some required heat to proceed, and not much more was needed as regards the role of “energy”. Alloys and dyes and such were all first discovered without much deep understanding of chemical reaction theory.
Once quantum theory came along we understood how chemistry works, and a lot of observations and linkages made sense. But for a long time quantum theory didn’t help as much as you might expect in pushing chemistry in new directions, because the equations were too hard to get any real numbers out of. So much of chemical research carried on quite happily following well tried and tested paths of empirical research (and still does, to quite a large extent). It was only really with the advent of computers that we started to make heavy use of calculation to help drive research.
You make the very good point that the Phlogistonists didn’t deserve to be pilloried, because they had a theory that was self-consistent enough to model the real world as we know it now. But until electrons were actually discovered, it is hard to see how any Phlogistonist could seriously compete with the Lavoisierian point of view. It could scarcely be otherwise.
Interesting example. I think the movie theatre in practice always has value and counts towards wealth, because even if you don’t have the time or inclination to use it, you could in principle sell the house to an appropriate movie buff for more than you could if you didn’t have the theatre, and use the extra money to do more of what you want to do. So the “potential” argument still works. This argument could also be applied to a heck of a lot of other things we might own but have little use for. On that basis, eBay is a great wealth generator!
I see “wealth” not as a collection of desirable things but as a potential or a power. An individual who has some wealth has the potential or power to undertake certain things they would like to do, over and above basic survival. An individual with greater wealth has a greater choice of the things they can choose to do. Such things might include eating Michelin three-star food, or driving a Ferrari along the coast. They also might include a simple afternoon walk in the woods. In the latter case the “wealth” required to undertake this activity comprises having the leisure time available for the activity, the personal good health that allows for enjoyable walking, clothing of suitable quality for the activity to be pleasurable, and a means of fairly effortlessly getting to the woods in the first place.
It follows that, whilst “wealth” might have a roughly linear relationship to “money”, the amount of surplus money one needs to attain a certain “wealth” will be different for everybody, principally because we all have different ideas of how we might use our wealth, some of which will cost more than others. Additionally, some wealth doesn’t necessarily cost any money to create or to acquire. Consider a coder who makes a compelling game and puts it out as open source. The coder has created “wealth” because they have created the potential for others to undertake something they would like to do, namely, play the game. The coder has used their own time and little else. If the creation of the game was an enjoyable activity for the coder, then the wealth has been created at zero cost.
Yes, the lab protocol it actually suggests would likely lead to an explosion and injury to the operator. Mixing sodium metal with a reagent and adding heat does not usually end well, even when done under an inert atmosphere (nitrogen or argon). Also there is no mention of a “work-up” step, which here would usually involve careful quenching with ethanol to remove residual reactive sodium, followed by shaking with an aqueous base.
It is rarely wrong to follow what you are passionate about. Go for it. But do think hard before discarding your placement in industry. Obtaining a diverse set of career-relevant experiences early on is valuable. Industrial placements look good on a résumé as well.
I did wonder whether one reason it might be hard to commercialise orexins was that, being peptides, delivery would be difficult.
But apparently not; a nasal spray works just fine …
So the domain I’m most familiar with is early-stage drug discovery in industry. This requires multidisciplinary teams of chemists, computational chemists, biochemists, biologists, crystallographers, etc. Chemists tend to be associated with one project at a time, and I don’t perceive part-time working to be beneficial there. However, the other disciplines are often associated with multiple projects, so there’s a natural way to halve (say) the workload without reducing efficiency. The part-time scientist should be highly experienced, committed to what they are doing, and have few management responsibilities. If that holds, then my experience is that they are at least as productive as a full-time worker, hour for hour.
Very interesting points. But some of them are surely specific to the size, workforce make-up and activities of your organisation. I’d like to put an alternative view on point 14, at least as it applies to an organisation with longer timelines and a more autonomous working regime (so less opportunity for blocking). My experience is that part-time workers can be more productive hour for hour than full-time workers, in the right work domain. A fully committed part-time worker has a ready-made excuse to avoid those meetings that don’t make them productive. They will use their slack time to think about their work, coming up with ideas at leisure and creating an effective plan for their next work period. They can be flexible in their work hours so as to attend the important meetings and one-to-ones and to avoid blocking anyone (especially if they also WFH some of the time, so they can dip into work for an hour on a day they don’t normally work). They can use (e.g. computational) resources more effectively, so that they are rarely waiting for lengthy production runs (or calculations, say) to finish. Lastly, they are often less stressed through not being overworked (and hence more effective).
Clearly this will not be true for all work domains. Nevertheless, it has recently been reported in the UK press that an international experiment to test a four-day (32-hour) work week at 100% salary has resulted in no loss of productivity for many of the companies involved, and many of them are continuing with the scheme.
My understanding is that off-label often means that the potential patient is not within the bounds of the cohort of patients included in the approved clinical trials. We don’t usually perform clinical trials on children or pregnant women, for instance. Alternatively, strong scientific evidence is found that a drug works on a disease related to the actual target. It may well make sense to use drugs off-label where the clinician can be comfortable that the benefits outweigh the possible harms. In other cases, of course, it would be extremely poor medicine. In any case, having statistically significant and validated evidence that a drug actually does something useful is non-negotiable IMO.