Based on the journalistic style alone? The important indicators point to it being very significant: PNAS published, Blue Brain Project (which is a serious effort I’ve semi-followed for a long time), the direct quote from the PI (“This is a major breakthrough …”)
That is the place we’d expect such breakthroughs to originate from, and it’s already been peer-reviewed; why so skeptical?
Personally, I would assume that it would be quite difficult for a random distribution of neurons to form the exact same network of synapses 85% of the time. Two random graphs with the same number of vertices have a very low probability of being isomorphic, but the spanning trees of those random graphs would have a trivial isomorphism, and I assume neural networks formed by synapses randomly bumping into each other are more like the union of a few random spanning trees (lots of local connections, fewer long ones) than fully random graphs. Still, it’s probably not 85% or 95% likely for there to be an isomorphism.

However, if the shape of the neurons is what determines the properties of the synapses, then it may be that how the synapses grow isn’t as important as how the types of neurons are initially distributed. I haven’t read the paper either, but if the summary is correct they distributed the different types of neurons “randomly”, which of course doesn’t say whether there was a complex probability density over the 3D space or, e.g., a uniform probability for each neuron shape.
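As a rough sanity check of the isomorphism point, here’s a minimal Monte Carlo sketch (not anything from the paper; the vertex count, edge probability, and trial count are arbitrary choices small enough for brute-force checking). It draws pairs of independent Erdős–Rényi graphs and counts how often they happen to be isomorphic:

```python
import itertools
import random

def random_graph(n, p, rng):
    # Erdős–Rényi G(n, p): include each possible undirected edge with probability p
    return frozenset(
        (i, j)
        for i in range(n) for j in range(i + 1, n)
        if rng.random() < p
    )

def isomorphic(edges_a, edges_b, n):
    # Brute-force isomorphism check: try every vertex relabelling.
    # Only feasible for small n (n! permutations).
    if len(edges_a) != len(edges_b):
        return False
    for perm in itertools.permutations(range(n)):
        mapped = frozenset(
            tuple(sorted((perm[i], perm[j]))) for i, j in edges_a
        )
        if mapped == edges_b:
            return True
    return False

rng = random.Random(0)
n, p, trials = 6, 0.5, 2000
hits = sum(
    isomorphic(random_graph(n, p, rng), random_graph(n, p, rng), n)
    for _ in range(trials)
)
print(f"empirical isomorphism rate for two G({n}, {p}) draws: {hits / trials:.3f}")
```

Even at six vertices the empirical rate comes out at only a few percent, nowhere near 85%, and it drops fast as n grows, which is the intuition behind being surprised by the claimed reproducibility.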