How Beliefs Change What We See in Starlight

I was advised that the reason the articles I've posted here were not getting a good reception was that they were too long and they discussed epistemological concepts in physics in ways that seemed unfamiliar to this audience.

To back up and try to catch the people for whom this was the case, I've put together this brief introduction.


When you believe in a theory that predicts the existence of gigantic gravitational black holes that eat light, will you see evidence of these black holes all over the night sky? Will you construct pretty, artistic representations of the things in which you believe?

When you believe in a theory that predicts black hole collisions that release waves which will be detected on Earth as tiny changes in very sensitive measurement devices, will you see evidence of these collisions everywhere you look? Will you construct simulations of what you think these collisions would look like?

When the data from a single measurement device is so noisy that it shows you nothing, is it possible to combine data from multiple measurement devices to see something? Or is there the risk that, by picking and choosing which measurements to combine and look at, you might see something that isn't really there? For example, suppose you took photographs every midnight of a dark, creepy hallway and none of them showed anything unusual, but when you superimposed all of the images, the accumulated dust particles formed the shape of a ghost. Can you believe that measurement?
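The hallway-ghost worry can be made concrete with a small numerical sketch (plain NumPy; the frame count, image size, pixel location, and threshold are all illustrative assumptions, not anything from the text). Averaging *all* frames of pure noise honestly washes the noise out, but keeping only the frames that already look like the thing you expect manufactures a "signal" that was never there:

```python
import numpy as np

rng = np.random.default_rng(0)
# 1000 frames of pure zero-mean noise: there is nothing to see.
frames = rng.normal(0.0, 1.0, size=(1000, 32, 32))

# Honest stack: average every frame. Noise shrinks toward zero.
honest = frames.mean(axis=0)

# Cherry-picked stack: keep only frames whose centre pixel is bright.
picked = frames[frames[:, 16, 16] > 1.0]
biased = picked.mean(axis=0)

print(honest[16, 16])  # close to zero: no ghost
print(biased[16, 16])  # well above 1: a "ghost" created by the selection
```

The biased average exceeds the threshold by construction: every retained frame was kept *because* that pixel was bright, so stacking cannot undo the choice. This is a selection-bias caricature, not a claim about any particular astronomical pipeline.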

There is always the risk that you might accidentally measure something other than what you intended. For example, instead of measuring a black hole's shadow in the starry night sky, you might end up measuring light leaking around your earthly measurement device.

The purpose of the scientific method is to avoid fooling ourselves about what causes what we see. How can we avoid fooling ourselves about what we see in the stars? There are, after all, many ideas that are consistent with our theories of nature. Dragons and unicorns are perfectly consistent with the theory of evolution, yet they do not exist.

In a laboratory experiment, you have a closed system in which changing one variable should produce a predictable effect in another, but in astronomy we can't do these sorts of controlled experiments. At best, we can describe what we see. A cosmologist might see the results of a big bang explosion, a crystallographer might see a diffraction pattern, and a materials scientist might see a pattern of localized light in an inhomogeneous medium. When it comes to astronomy, there can never be one definitively true description, because we can never create a controlled experiment with the stars.

Some people build experiments to detect small changes in gravity, and they attribute these changes to invisible things that are happening in the stars, but they can't be sure that the stars are truly the cause of the changes. The changes could be caused by something much closer to Earth. They also can't be sure that they haven't made a mistake in how they have interpreted and filtered their data.

To avoid accidentally seeing something in noisy data that isn't there, scientists will take multiple, independent measurements of something happening. If ten people all independently observe a dragon fighting a unicorn, or a gravitational wave from colliding black holes in a distant galaxy, they were probably not all hallucinating. However, if ten people are all looking at a blurry, filtered, and enhanced image and they all agree that it is probably a picture of a dragon fighting a unicorn, they might be fooling themselves and looking at something much more mundane.
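A back-of-the-envelope calculation shows why independence is the crux here. The false-positive rate below is an assumed, illustrative number, not a measured one:

```python
# Assume each observer, working alone, mistakes noise for a
# dragon/wave 5% of the time (illustrative figure).
p = 0.05
n = 10

# Ten TRULY independent sightings agreeing by accident:
# the flukes must all happen at once, so probabilities multiply.
independent_fluke = p ** n
print(independent_fluke)  # roughly 1e-13: very hard to dismiss

# Ten people inspecting the SAME filtered image share one roll of
# the dice; the chance the noise fooled "everyone" stays near p.
shared_image_fluke = p
print(shared_image_fluke)
```

The multiplication step is the whole argument: agreement is only strong evidence to the extent that the observations could have disagreed.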

In the case of gravitational waves, back in the 1970s, hundreds of independent research groups constructed simple devices to measure them, and they all compared their results. Each research group thought that it had measured gravitational waves, but when the results were combined, they all had to conclude that no one had been measuring gravitational waves. They had all been measuring different sources of noise.

Today, we have a new sort of gravitational wave detector that is very expensive, and there are only three of them in existence. The teams operating them all believe that they are measuring gravitational waves, but it is possible that they are all measuring different sources of noise, because it is difficult to get enough results to compare with only three independent measurement devices.

When evaluating a scientific result, it is important to remember that raw measurements should always be believed, but the interpretation we give to those measurements should always be treated skeptically, because you might be measuring something you hadn't intended to measure. It is also important to be wary of those who construct a result by combining biased, filtered, or calibrated measurements.

In conclusion, it is a good idea to be wary of scientists' conclusions but to trust in the raw data. The conclusions might be biased by beliefs, even if the result was 'peer-reviewed'. Groups of scientists can be just as unaware of their blind spots and biases as individual scientists.

The scientific method itself is the source of the power of science. This power is not contained within the peer-review system or within the community, especially when that community is motivated to see things that are not there.