A brief history of ethically concerned scientists

For the first time in history, it has become possible for a limited group of a few thousand people to threaten the absolute destruction of millions.

-- Norbert Wiener (1956), Moral Reflections of a Mathematician.

Today, the general attitude towards scientific discovery is that scientists are not themselves responsible for how their work is used. For someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to focus only on one’s work, and leave it up to others to decide what to do with it.

But this is not necessarily the attitude that we should encourage. As technology becomes more powerful, it also becomes more dangerous. Throughout history, many scientists and inventors have recognized this, and taken different kinds of action to help ensure that their work will have beneficial consequences. Here are some of them.

This post is not arguing that any specific approach for taking responsibility for one’s actions is the correct one. Some researchers hid their work, others refocused on other fields, still others began active campaigns to change the way their work was being used. It is up to the reader to decide which of these approaches were successful and worth emulating, and which ones were not.

Pre-industrial inventors

… I do not publish nor divulge [methods of building submarines] by reason of the evil nature of men who would use them as means of destruction at the bottom of the sea, by sending ships to the bottom, and sinking them together with the men in them.

-- Leonardo da Vinci

People did not always think that the benefits of freely disseminating knowledge outweighed the harms. O.T. Benfey, writing in a 1956 issue of the Bulletin of the Atomic Scientists, cites F.S. Taylor’s book on early alchemists:

Alchemy was certainly intended to be useful …. But [the alchemist] never proposes the public use of such things, the disclosing of his knowledge for the benefit of man. …. Any disclosure of the alchemical secret was felt to be profoundly wrong, and likely to bring immediate punishment from on high. The reason generally given for such secrecy was the probable abuse by wicked men of the power that the alchemical [art] would give …. The alchemists, indeed, felt a strong moral responsibility that is not always acknowledged by the scientists of today.

With the Renaissance, science began to be viewed as public property, but many scientists remained cautious about the way in which their work might be used. Although he held the office of military engineer, Leonardo da Vinci (1452-1519) drew a distinction between offensive and defensive warfare, and emphasized the role of good defenses in protecting people’s liberty from tyrants. He described war as ‘bestialissima pazzia’ (most bestial madness), and wrote that ‘it is an infinitely atrocious thing to take away the life of a man’. One of the clearest examples of his reluctance to unleash dangerous inventions was his refusal to publish the details of his plans for submarines.

Later Renaissance thinkers continued to be concerned with the potential uses of their discoveries. John Napier (1550-1617), the inventor of logarithms, also experimented with a new form of artillery. Upon seeing its destructive power, he decided to keep its details a secret, and even spoke from his deathbed against the creation of new kinds of weapons.

But concealing a single discovery pales in comparison to the example of Robert Boyle (1627-1691). A pioneer of physics and chemistry, perhaps most famous for formulating and publishing Boyle’s law, he sought to make humanity better off, taking an interest in matters such as improved agricultural methods and better medicine. In his studies, he also discovered knowledge and made inventions related to a variety of potentially harmful subjects, including poisons, invisible ink, counterfeit money, explosives, and kinetic weaponry. These, he wrote, ‘my love of Mankind has oblig’d me to conceal, even from my nearest Friends’.

Chemical warfare

By the early twentieth century, people had begun looking at science in an increasingly optimistic light: it was believed that science would not only continue to improve everyone’s prosperity, but also make war outright impossible. Yet as science became more sophisticated, it also became possible to cause ever more harm with ever smaller resources. One of the early indications of science’s ability to do harm came from advances in chemical warfare, and World War I saw the deployment of chlorine, phosgene, and mustard gas as weapons. It should not be surprising, then, that some scientists in related fields began growing concerned. But unlike earlier inventors, at least three of them did far more than just refuse to publish their work.

Clara Immerwahr (1870-1915) was a German chemist and the first woman to obtain a PhD from the University of Breslau. She was strongly opposed to the use of chemical weapons. Married to Fritz Haber, ‘the father of chemical warfare’, she made many unsuccessful attempts to convince her husband to abandon his work. Immerwahr was also deeply depressed by the fact that society considered a married woman’s place to be at home, denying her the opportunity to do science. In the end, after her efforts to dissuade her husband from working on chemical warfare had failed and Fritz had personally overseen the first major use of chlorine, she committed suicide by shooting herself in the heart.

Poison gas also concerned scientists in other disciplines. Lewis Fry Richardson (1881-1953) was a mathematician and meteorologist. During World War II, the military became interested in his work on turbulence and gas mixing, and attempted to recruit him to help model the most effective ways of using poison gas. Realizing what his work would be used for, Richardson abandoned meteorology entirely and destroyed his unpublished research. Instead, he turned to investigating the causes of war, attempting to find ways to reduce the risk of armed conflict. He devoted the rest of his life to this topic, and is today considered one of the founders of the scientific analysis of conflict.

Arthur Galston (1920-2008), a botanist, was also concerned with the military use of his discoveries. Building upon his work, the US military developed Agent Orange, a herbicide deployed in the Vietnam War. Upon discovering what his work had been used for, he began to campaign against its use, and together with a number of others finally convinced President Nixon to order an end to its spraying in 1970. Reflecting upon the matter, Galston wrote:

I used to think that one could avoid involvement in the antisocial consequences of science simply by not working on any project that might be turned to evil or destructive ends. I have learned that things are not all that simple, and that almost any scientific finding can be perverted or twisted under appropriate societal pressures. In my view, the only recourse for a scientist concerned about the social consequences of his work is to remain involved with it to the end. His responsibility to society does not cease with publication of a definitive scientific paper. Rather, if his discovery is translated into some impact on the world outside the laboratory, he will, in most instances, want to follow through to see that it is used for constructive rather than anti-human purposes.

After retiring in 1990, he founded the Interdisciplinary Center for Bioethics at Yale, where he also taught bioethics to undergraduates.

Nuclear weapons

While chemical weapons are capable of inflicting serious injuries as well as birth defects on large numbers of people, they have never been viewed as being as dangerous as nuclear weapons. As physicists became capable of creating weapons of unparalleled destructive power, they too grew ever more concerned about the consequences of their work.

Leó Szilárd (1898-1964) was one of the first people to envision nuclear weapons, and was granted a patent on the nuclear chain reaction in 1934. Two years later, worried that Nazi scientists would find his patents and use them to create weapons, he asked the British Patent Office to withdraw them and secretly reassign them to the Royal Navy. His fear of Nazi Germany developing nuclear weapons also made him instrumental in getting the USA to initiate the Manhattan Project: he and two other scientists wrote the Einstein-Szilárd letter that advised President Roosevelt of the need to develop the same technology. But in 1945, he learned that the atomic bomb was about to be used on Japan, despite it being certain that neither Germany nor Japan had one. He did his best to prevent the bombs from being used, starting a petition against them, with little success.

After the war, he no longer wanted to contribute to the creation of weapons, and changed fields to molecular biology. In 1962, he founded the Council for a Livable World, which aimed to warn people about the dangers of nuclear war and to promote a policy of arms control. The Council continues its work even today.

Another physicist who worked on the atomic bomb out of fear of it being developed by Nazi Germany was Joseph Rotblat (1908-2005), who felt that the Allies also having an atomic bomb would deter the Axis from using one. But he gradually began to realize that Nazi Germany would likely never develop the atomic bomb, undermining his initial reason for working on it. He also came to believe that the bomb continued to be under active development for reasons that he felt were unethical: in conversation, General Leslie Groves mentioned that the real purpose of the bomb was to subdue the USSR. Rotblat was shocked to hear this, especially given that the Soviet Union was at the time an ally in the war effort. In 1944, when it became apparent that Germany would not develop the atomic bomb, Rotblat asked for permission to leave the project, and was granted it.

Afterwards, Rotblat regretted his role in developing nuclear weapons. He believed that the logic of nuclear deterrence was flawed: if Hitler had possessed an atomic bomb, he thought, Hitler’s last order would have been to use it against London regardless of the consequences. Rotblat decided to do whatever he could to prevent the future use and deployment of nuclear weapons, and proposed a worldwide moratorium on such research until humanity was wise enough to handle it without risk. He decided to repurpose his career into something more useful for humanity, and began studying and teaching the application of nuclear physics to medicine, becoming a professor at the Medical College of St Bartholomew’s Hospital in London.

Rotblat worked together with Bertrand Russell to limit the spread of nuclear weapons, and the two collaborated with a number of other scientists to issue the Russell-Einstein Manifesto in 1955, calling on the governments of the world to take action to prevent nuclear weapons from doing more damage. The manifesto led to the establishment of the Pugwash Conferences, in which nuclear scientists from both the West and the East met. By facilitating dialogue between the two sides of the Cold War, these conferences helped lead to several arms control agreements, such as the Partial Test Ban Treaty of 1963 and the Non-Proliferation Treaty of 1968. In 1995, Rotblat and the Pugwash Conferences were awarded the Nobel Peace Prize “for their efforts to diminish the part played by nuclear arms in international politics and, in the longer run, to eliminate such arms”.

The development of nuclear weapons also affected Norbert Wiener (1894-1964), professor of mathematics at the Massachusetts Institute of Technology and the originator of the field of cybernetics. After the Hiroshima bombing, a researcher working for a major aircraft corporation requested a copy of an earlier paper of Wiener’s. Wiener refused to provide it, and sent the Atlantic Monthly a copy of his response to the researcher, in which he declared his refusal to share his research with anyone who would use it for military purposes:

In the past, the community of scholars has made it a custom to furnish scientific information to any person seriously seeking it. However, we must face these facts: The policy of the government itself during and after the war, say in the bombing of Hiroshima and Nagasaki, has made it clear that to provide scientific information is not a necessarily innocent act, and may entail the gravest consequences. One therefore cannot escape reconsidering the established custom of the scientist to give information to every person who may inquire of him. The interchange of ideas, one of the great traditions of science, must of course receive certain limitations when the scientist becomes an arbiter of life and death. [...]
The experience of the scientists who have worked on the atomic bomb has indicated that in any investigation of this kind the scientist ends by putting unlimited powers in the hands of the people whom he is least inclined to trust with their use. It is perfectly clear also that to disseminate information about a weapon in the present state of our civilization is to make it practically certain that that weapon will be used. [...]
If therefore I do not desire to participate in the bombing or poisoning of defenseless peoples - and I most certainly do not - I must take a serious responsibility as to those to whom I disclose my scientific ideas. Since it is obvious that with sufficient effort you can obtain my material, even though it is out of print, I can only protest pro forma in refusing to give you any information concerning my past work. However, I rejoice at the fact that my material is not readily available, inasmuch as it gives me the opportunity to raise this serious moral issue. I do not expect to publish any future work of mine which may do damage in the hands of irresponsible militarists.
I am taking the liberty of calling this letter to the attention of other people in scientific work. I believe it is only proper that they should know of it in order to make their own independent decisions, if similar situations should confront them.

Recombinant DNA

For a large part of history, scientists’ greatest ethical concerns came from direct military applications of their inventions. While any invention could lead to unintended societal or environmental consequences, for the most part researchers who worked on peaceful technologies didn’t need to be too concerned about their work being dangerous by itself. But as biological and medical research gained the capability to modify genes and bacteria, it opened up the possibility of unintentionally creating dangerous infectious diseases. In theory, these could be even more dangerous than nuclear weapons: an atomic bomb dropped on a city might destroy most of that city, but a single bacterium could give rise to an epidemic infecting people all around the world.

Recombinant DNA techniques involve taking DNA from one source and introducing it into another kind of organism, causing the new genes to express themselves in the target organism. One of the pioneers of this technique was Paul Berg (1926-), who in 1972 had already carried out the preparations for creating a strain of E. coli containing the genome of SV40, a virus capable of infecting humans and tentatively linked to cancer. Robert Pollack (1940-) heard news of this experiment and helped convince Berg to halt it: both were concerned about the danger that the new strain would spread to humans in the lab and become a pathogen. Berg then became a major voice calling for more attention to the risks of such research, as well as for a temporary moratorium. This eventually led to two conferences at Asilomar; 140 experts participated in the latter, held in 1975, to decide upon guidelines for recombinant DNA research.

Berg and Pollack were far from the only scientists to call attention to the safety concerns of recombinant DNA. Several others contributed as well, asking for more safety precautions and voicing concern about a technology that could cause harm if misused.

Among them, the molecular biologist Maxine Singer (1931-) chaired the 1973 Gordon Conference on Nucleic Acids, at which some of the dangers of the technique were discussed. After the conference, she and several other similarly concerned scientists authored a letter to the President of the National Academy of Sciences and the head of the National Institutes of Health. The letter suggested that a committee be established to study the risks of the new recombinant DNA technology, and to propose specific actions or guidelines if necessary. She also helped organize the Asilomar Conference in 1975.


Information technology and artificial intelligence

But if we are downloaded into our technology, what are the chances that we will thereafter be ourselves or even human? It seems to me far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost.

-- Bill Joy, Why the Future Doesn’t Need Us.

Finally, we come to the topic of information technology and artificial intelligence. As AI systems grow increasingly autonomous, they might become the ultimate example of a technology that seems initially innocuous but ends up capable of doing great damage. Especially if they were to become capable of rapid self-improvement, they could lead to humanity going extinct.

In addition to refusing to help military research, Norbert Wiener was also concerned about the effects of automation. In 1949, General Electric wanted him to advise its managers on automation matters and to teach automation methods to its engineers. Wiener refused these requests, believing that they would further a development in which human workers would be replaced by machines and become unemployed. He thus expanded his boycott of the military into a boycott of corporations that he thought acted unethically.

Wiener was also concerned about the risks of autonomous AI. In 1960, Science published his paper “Some Moral and Technical Consequences of Automation”, in which he spoke at length about the dangers of machine intelligence. He warned that machines might act far too fast for humans to correct their mistakes, and that like the genies of stories, they could fulfill the letter of our requests without caring about their spirit. He also discussed such worries elsewhere:

If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.

Such worries would continue to occupy other computer scientists as well, many decades after Wiener’s death. Bill Joy (1954-) is known for having played a major role in the development of BSD Unix, for having authored the vi text editor, and for co-founding Sun Microsystems. He became concerned about the effects of AI in 1998, when he met Ray Kurzweil at a conference where they were both speakers. Kurzweil gave Joy a preprint of his then-upcoming book, The Age of Spiritual Machines, and Joy found himself concerned by its discussion of the risks of AI. Reading Hans Moravec’s book Robot: Mere Machine to Transcendent Mind exacerbated Joy’s worries, as did several other books he found around the same time. He began to wonder whether all of his work in information technology and computing had been preparing the way for a world where machines would replace humans.

In 2000, Joy wrote a widely read article for Wired titled Why the Future Doesn’t Need Us, discussing the dangers of AI as well as genetic engineering and nanotechnology. In the article, he called for limiting the development of technologies that he felt were too dangerous. Since then, he has continued to be active in promoting responsible technology research. In 2005, an op-ed co-authored by Joy and Ray Kurzweil was published in the New York Times, arguing that the decision to publish the genome of the 1918 influenza virus on the Internet had been a mistake.

Joy also attempted to write a book on the topic, but then became convinced that he could achieve more by working on science and technology investment. In 2005, he joined the venture capital firm Kleiner Perkins Caufield & Byers as a partner, where he has focused on investments in green technology.


Technology’s potential for destruction will only continue to grow, but many of the social norms of science were established under the assumption that scientists don’t need to worry much about how the results of their work are used. Hopefully, the examples provided in this post can encourage more researchers to consider the broader consequences of their work.

Sources used

This article was written based on research done by Vincent Fagot.
