It’s still early to tell, as the specific characteristics of a photonic or optoelectronic neural network are still taking shape in the literature.
For example, in my favorite work of the year so far, the researchers found they could use sound waves to reconfigure an optical neural network, with the sound waves effectively preserving a memory of previous photon states as they propagated: https://www.nature.com/articles/s41467-024-47053-6
In particular, this approach is a big step forward for bidirectional ONNs, which address what I think is the biggest current flaw in modern transformers: their unidirectionality. I discussed this more in a collection of thoughts on the impact of directionality on data here: https://www.lesswrong.com/posts/bmsmiYhTm7QJHa2oF/looking-beyond-everett-in-multiversal-views-of-llms
If you have bidirectionality where previously you didn’t, it’s not a reach to expect that the way data encodes in the network, as well as how the vector space is represented, may differ. And thus, that mechanistic interpretability gains may get a bit of a reset.
And this is just one of many ways the picture may change by the time the tech matures. The field of photonics, particularly for neural networks, is coming along nicely. There may yet be further advances (I think this is very likely given the pace to date), and advantages the medium offers that electronics doesn’t.
It’s hard to predict exactly what’s going to happen when two fields which have each had unexpected and significant gains over the past five years collide, but it’s generally safe to say that it will at the very least result in other unexpected things too.
The meta-analysis probably has Simpson’s paradox in play, at the very least for the pain category, especially given the noted variability.
Some of the more recent research into the placebo effect (Harvard has a very cool group studying it) has focused on the importance of ritual versus simple deception. In their work, even when participants knew they were taking a placebo, there was still an effect as long as it was delivered in a ritualized way.
So when someone takes a collection of hundreds of studies whose specific conditions vary, and simply adds them all together looking for an effect, even while noting a broad spectrum of efficacy across the studies, that might not be the best basis to extrapolate from.
For example, given the following protocols, do you think they might have different efficacy for pain reduction, or that the results should be the same?
- Send patients home with sugar pills to take as needed for pain management
- Have a nurse come into the room with the pills in a little cup to be taken
- Have a nurse give an injection
Which of these protocols would be easier and more cost effective to include as the ‘placebo’?
If we grouped studies of placebo for pain by the intensiveness of the ritualized component vs if we grouped them all together into one aggregate and looked at the averages, might we see different results?
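To make the grouping question concrete, here is a toy Simpson’s paradox sketch. All of the numbers are invented for illustration (they are not from the meta-analysis or any real study): within each hypothetical "ritual intensity" tier the placebo arm beats the no-treatment control, yet because the arms are unevenly distributed across tiers, the pooled comparison points the other way.

```python
# Made-up (responders, arm size) counts for two hypothetical study tiers.
# High-ritual studies here happen to enroll mostly control patients,
# and low-ritual studies mostly placebo patients.
data = {
    "low_ritual":  {"placebo": (20, 100), "control": (2, 20)},
    "high_ritual": {"placebo": (14, 20),  "control": (60, 100)},
}

def rate(responders, total):
    """Fraction of patients in an arm reporting pain reduction."""
    return responders / total

# Within each tier, placebo outperforms control by 10 points.
for tier, arms in data.items():
    p = rate(*arms["placebo"])
    c = rate(*arms["control"])
    print(f"{tier}: placebo {p:.2f} vs control {c:.2f}")

# Pooling across tiers reverses the comparison.
pooled = {}
for arm in ("placebo", "control"):
    responders = sum(data[tier][arm][0] for tier in data)
    total = sum(data[tier][arm][1] for tier in data)
    pooled[arm] = responders / total
print(f"pooled: placebo {pooled['placebo']:.2f} vs control {pooled['control']:.2f}")
```

Here placebo wins 0.20 vs 0.10 in the low-ritual tier and 0.70 vs 0.60 in the high-ritual tier, but pooled it loses 0.28 to 0.52. The paradox doesn’t have to fully reverse the sign to matter; it’s enough for aggregation over heterogeneous protocols to dilute or distort the subgroup effects.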
I’d be wary of reading too deeply into the meta-analysis you point to, and would recommend looking into the open-label placebo research from PiPS (Harvard’s Program in Placebo Studies), all of which IIRC postdates the meta-analysis.
Especially for pain, where we even know that giving someone an opiate blocker prevents the placebo effect on pain reduction (Levine et al., 1978), the idea that “it doesn’t exist” on the strength of a single very broad analysis seems potentially gravely mistaken.