[Question] Is there a reasonable reading according to which Baric, Shi et al 2015 isn’t gain-of-function research?

From the paper “A SARS-like cluster of circulating bat coronaviruses shows potential for human emergence” by Baric, Shi et al:

Wild-type SARS-CoV (Urbani), mouse-adapted SARS-CoV (MA15) and chimeric SARS-like CoVs were cultured on Vero E6 cells (obtained from United States Army Medical Research Institute of Infectious Diseases), grown in Dulbecco’s modified Eagle’s medium (DMEM) (Gibco, CA) and 5% fetal clone serum (FCS) (Hyclone, South Logan, UT) along with antibiotic/antimycotic (Gibco, Carlsbad, CA). DBT cells (Baric laboratory, source unknown) expressing ACE2 orthologs have been previously described for both human and civet; bat Ace2 sequence was based on that from Rhinolophus leschenaulti, and DBT cells expressing bat Ace2 were established as described previously [8]. Pseudotyping experiments were similar to those using an HIV-based pseudovirus, prepared as previously described [10], and examined on HeLa cells (Wuhan Institute of Virology) that expressed ACE2 orthologs.

To me, building chimeric viruses and then infecting human cells with them (HeLa cells are human cells) looks like dangerous gain-of-function research. Fauci seems to argue that somehow the NIH is able to define this work as not being gain-of-function research. To me this redefinition looks like the bureaucratic way they circumvented the gain-of-function moratorium. Before the moratorium was imposed, Fauci argued against it, and the moratorium was imposed not by anyone at the NIH or HHS but by the Office of Science and Technology Policy. That looks to me like the NIH evading safety regulation by redefining terms because it didn’t like the moratorium.

This question is about more than just assigning guilt for things that happened in 2015. If we want to prevent further risk, getting the NIH to accept that growing chimeric viruses that infect human cells is exactly what gain-of-function regulation is supposed to prevent seems very important to me.

It’s likely also a good case study in evading safety regulation, and we should think about it in other contexts as well. If we end up with AI safety regulation, how do we prevent the people causing problems from simply redefining the terms so that the regulation doesn’t apply to them?

If anyone has a defense of not classifying this work as gain-of-function research, I’m also happy to hear that.