American Journal of Law & Medicine

Equal treatment for regulatory science: extending the controls governing the quality of public research to private research.


The imperative that agencies use "sound science" in developing their regulations has become a major preoccupation of the political branches. In only a few years, Congress passed two appropriations riders that provide extensive new mechanisms for the public to critique the science used by agencies. (1) The executive branch quickly followed suit, promulgating regulations to implement these two laws, as well as proceeding on its own "sound science" missions. In the space of less than one year, the Office of Management and Budget ("OMB") circulated for public comment draft peer review requirements for the scientific review of agency science, (2) and the Environmental Protection Agency ("EPA") launched a full-scale program to improve the quality of the models it uses in regulation, (3) as well as "Assessment Criteria" to be used by agency officials in reviewing the quality of third-party (primarily state) science. (4) This near-obsession with the quality of regulatory science has become so serious that industry consultants sent letters to major universities warning them that any research their faculty produces that is later used for regulation must meet the government's multifaceted "sound science" requirements. (5) Even federal courts have become involved by presiding over a complaint that the government's climate change models are not reliable and should be withdrawn from public dissemination. (6)

At the same time that "sound science" reforms are proliferating, there is a surge in academic concern about the objectivity and quality of private or "sponsored" science used for public policy. Regulated parties who sponsor research that informs regulation of their products or activities have incentives to influence the research in ways that ensure favorable outcomes. Yet since research design and reporting are inherently layered with discretionary judgments that are difficult to discern without replicating the research directly, systemic biases in these judgments are difficult to detect from the outside. As long as sponsors control the research at some or all points in the research process, adverse results can be suppressed and the design and reporting of experiments can be biased in ways that produce results that support the sponsor's interests, rather than offer a disinterested examination of potential harms.

Despite their rather obvious points of convergence, these two sets of concerns have remained separate over the past decade. Worrisome evidence of compromised private research is effectively ignored as the "sound science" reforms take aim primarily at publicly funded research. (7) As a result, oversight of the quality of regulatory science is growing increasingly bimodal: public research is subject to increased scrutiny, while private research remains largely insulated from outside review and meaningful agency oversight.

In this Article, we argue that to the extent there is a problem with regulatory science in health and safety regulation, the "sound science" reforms miss the target by taking aim at public, rather than private science. We develop this argument in three parts. First, in Part II of the Article, we identify the critical role that private information plays in regulation, and how under-reporting of harms could lead to far greater harms and risks than society is willing to tolerate. We then present evidence supporting a conclusion that private research is often compromised, especially as compared to federally funded research, in ways that underreport adverse effects and lead to a misleadingly rosy picture of the safety of a sponsor's products or wastes. Next, in Part III, we identify how the laws, and especially the "sound science" reforms, get the problem precisely backward by focusing oversight checks on federally funded research and exempting, or at least providing far less internal and external oversight of, research sponsored by affected parties. Finally, in Part IV, we describe ways to equalize the review of publicly and privately sponsored research. In the absence of this equal treatment, regulated parties will continue to have few incentives to produce private research of high quality, while at the same time they will critique public research when the findings are adverse to their interests.


Public health regulators make life and death decisions when they promulgate standards to protect the public health. If the research they rely upon to make these decisions is compromised, then there may be more losses, perhaps substantially more, than the regulators or the public onlookers are willing to tolerate. An accumulating body of evidence suggests that some of the private science that forms the primary, and sometimes the exclusive, input for regulatory decisions regarding public health and safety lacks important scientific safeguards that could result in research that underreports harms to health and the environment. In this Part, we first discuss the important role that private science plays in regulation. We then turn to the ways in which the harms in this sponsored science might be underreported by sponsors who reserve control over the research.


Privately sponsored science often provides the exclusive information for making decisions about the safety of pesticides and chemicals. Under both the Federal Insecticide, Fungicide, and Rodenticide Act ("FIFRA") (8) and the Toxic Substances Control Act ("TSCA"), (9) manufacturers of new products are required to provide the agency with all available information on the safety of the products as a condition to marketing, and in some cases are required to conduct new research on product safety. (10) Manufacturers who market existing pesticides and chemicals are also occasionally required to conduct research to help regulators assess the product's safety. (11) Many of these mandatory tests are specified under relatively rigid protocols that leave little room for discretionary reporting. (12) But as tests become more substance-specific and less capable of being conducted in a controlled laboratory setting--for example, studying reproductive and developmental effects in organisms exposed to a substance in the environment--the amount of researcher discretion in the design and reporting of findings inevitably increases.

The laws that regulate the release of pollutants depend less fundamentally on private research in setting regulatory standards, but nevertheless make use of any science that is available, including privately sponsored science. As a result, risk assessments used to set contaminant levels in drinking water and exposure standards for worker protection are often based in part on private science. (13) This voluntarily produced research, in contrast to mandated research produced under the pesticide and chemical regulation statutes, is typically done without the benefit of rigid protocols and thus its quality is even more difficult to evaluate.


At the same time that privately sponsored research provides a critical input to regulation, there is growing evidence that it can be compromised in ways that might underreport or even suppress evidence of harm. Sponsors face strong incentives to design and report research in ways most favorable to their interests and to suppress adverse results provided they can do so without detection. In the past, more than a few products or pollutants have been left effectively unregulated because the manufacturer or polluter concealed evidence of the true harm or obscured adverse results. Privately sponsored science, if done without guarantees of research independence, thus violates one of the most fundamental norms of science; namely, that research be disinterested. (14)

Evidence of underreporting of harms in private research is most common in the biomedical arena, although there is growing evidence in the environmental and public health arenas as well. (15) Unfortunately, many of these unscientific practices are missed by regulators. (16) In a world with infinite resources, any biases that infect research would ultimately be caught through third-party, disinterested replication of the research. Given the scarce resources and considerable scientific gaps in environmental regulation, however, resources are rarely if ever available to replicate the scant research that does exist. In addition, the trade secret classification of the chemical composition of many of these products, coupled with the lack of public funding, means that the amount of public replication of private research results is limited. As a result, sponsors often enjoy an effective monopoly on the scientific information base regarding their products. The ways that privately sponsored science can be and has been compromised are discussed below.

1. Falsification of Data and Research Findings

Falsification of research is the most serious, but fortunately the least common, problem with privately sponsored research used for regulation. Falsification is difficult for regulators to detect, short of replicating the research, but because the penalties for committing fraud are often devastating, sponsors generally avoid this means of manipulating research. Criminal and civil sanctions, impaired firm reputation, and distrust by regulators all can result from a single falsified study. (17) In any event, there may be ways short of fraud to control the outcome of research, as discussed below.

Yet even though falsification of research in regulation is uncommon, it is not unprecedented. The most notorious examples of fraudulent research in environmental regulation occurred with a contractor who falsified a number of results in conducting required safety testing for pesticide manufacturers in the 1970s. (18) These data fabrications saved the consulting organization time and resources, but were not evidently intended to produce preordained results for specific pesticides. (19) Falsification of measurements collected as part of mandatory self-monitoring requirements has also been documented. For example, the Coal Mine Health and Safety Act of 1969 (20) requires coal operators to collect bi-monthly air samples of the underground work environment to identify excess levels of coal mine dust in order to reduce the risk of coal workers' pneumoconiosis among the miners. (21) The mine operator sends the dust exposure samples he collects to a U.S. Department of Labor's Mine Safety and Health Administration ("MSHA") laboratory for analysis, and if the results exceed a permissible level, the mine operator receives a citation and monetary penalty. (22) When these provisions were originally proposed, coal miners scoffed at the idea, likening it to self-enforcement of traffic violations: imagine a system in which drivers are asked to voluntarily notify the state police that they have exceeded the speed limit so that they can be issued a ticket. Widespread abuses of the self-reporting system were uncovered in the 1990s, when the MSHA laboratory discovered that mine operators had tampered with hundreds of dust samples. Suspicious samples were identified as coming from approximately one-third of the mines covered by the law; more than 200 mine operators (including at least one of the nation's largest) and their contractors were eventually convicted on criminal charges. (23)

2. Ends-Oriented Biases in Design and Reporting of Research

Sponsors can also design or report regulation-relevant research in ways that are favorable to their interests, but fall short of being clearly fraudulent or dishonest. (24) In the design of the research, there are often choices to be made by the researcher about test subjects, laboratory conditions, lengths of time of the study, and what types of observations to report, even for rigidly specified protocols. (25) In a self-designed study of the effects of pesticides on birds, for example, the researcher might make decisions about which effects to notice and record in the data log, and then later, which effects to statistically analyze. If each of these incremental discretionary decisions is made in a way most favorable to the sponsor, the results can ultimately tend toward one side of the results spectrum. (26)

Similarly, decisions about how to report effects in a study can be affected by a researcher's predisposition toward the outcome. Some adverse effects can be downplayed or explained away in the written findings, while the positive outcomes of the study can be overemphasized. In one study of 192 randomized clinical trials conducted on prospective drugs, for example, the researchers found that the written reports of the research did not adequately describe the adverse effects of the drugs under study or explain why a patient stopped taking the drug. (27)

Evidence that parties with direct conflicts of interest can sometimes design and report results in ways that are favorable to their interests, rather than in ways that best represent the research, has been extensively documented. (28) The "funding effect," where the results of privately sponsored research are statistically compared against the results of publicly funded research on similar regulation-relevant questions, shows consistent and rather dramatic sponsor-bias in the final results. (29) For example, one study published in the Journal of the American Medical Association reports: "By combining data from articles examining 1140 studies, we found that industry-sponsored studies were significantly more likely to reach conclusions that were favorable to the sponsor than were non-industry studies." (30) In research of the tobacco industry, there is even statistical evidence that this sponsored research is of lower quality, a conclusion based on findings of independent reviewers who were blinded to identifying characteristics of the affiliations of the authors. (31) Although the funding effect shows only a correlation and does not prove or explain bias in the design or reporting of findings of sponsored research, biases (or strong financial conflicts) remain one of the leading explanations for the effect. (32)

Other evidence of undue sponsor influence in regulation-relevant research is more anecdotal, but nevertheless worrisome. In a number of individual research projects, some sponsors have exerted dramatic control over the outcome of the research, to the point of designing the study, framing the research question, and even editing and ghost-writing the article by hiring scientists willing to "collaborate" closely with the sponsoring industry under contracts that require sponsor control of the research. (33)

Additionally, several prominent scientific journal editors lament the ways regulated parties have abused publication practices to provide a misleadingly positive picture of the body of research that has bearing on their products. Some sponsors, for example, have been caught publishing the same study in different journals under different author names with no cross-references, making it appear that the research support in favor of their product or activity is based on several independent studies, rather than simply a re-reporting of the same findings. (34) Since commissioned studies are viewed in the scientific community as being less credible than studies without affected sponsors, disclaimers are increasingly required as a condition to publication. (35) To circumvent this requirement, some sponsors have developed ways to "launder" their research support through nonprofit shells to create the illusion that they play no role in research that supports their interest. (36) Parties trying to influence regulation have also commissioned review articles and convened expert panels that purport to summarize existing research on a topic--such as the health effects of environmental tobacco smoke--even though in reality the commissioned review articles or reports are intended (and contractually guaranteed) to portray existing research in the light most favorable to the sponsor. (37)

3. Suppression of Adverse Results

Finally and perhaps most serious is the ability of sponsors to suppress research when the results are adverse to their interests. Unlike fraud, suppressing adverse results can sometimes be done with discretionary judgments that are not illegal. (38) For example, sponsors can abort research before it is completed, and base this decision on limited resources or some purported design flaw in the study. For research that is completed, sponsors can still justify withholding the results based on discretionary judgments that the research design or reporting was incomplete or flawed in some way or that follow-up research is needed to confirm or validate the findings. (39) All of these judgments are difficult to question from the outside and can often be justified, however weakly, even if the suppression is discovered.

In practice, suppression of research has been a recurring problem with privately sponsored research. Sponsors sometimes contractually reserve the right to suppress publication of the research they fund and are not reticent to use this right if the study results are adverse to their interests. (40) Some corporate actors have selectively limited access to potentially damaging information about their products and activities in ways that substantially harmed public health. (41) For example, Johnson & Johnson, (42) A.H. Robins, (43) Merrell Dow, (44) and the asbestos, (45) vinyl chloride, (46) and tobacco (47) industries were all caught concealing information about their products' adverse health impacts. The manufacturer of an antidepressant, Paxil, was recently sued by New York State for concealing unfavorable results from clinical trials done on children, leading to demands from the scientific and medical community that pharmaceutical companies be required to publicly disclose the results of all clinical trials, regardless of whether reporting of the results of the research is legally mandated. (48) In the occupational health arena, a textile manufacturing company--wielding a confidentiality agreement--pressured occupational medicine researchers to suppress data showing adverse effects on workers in the nylon flocking industry. (49) A large number of companies have also resisted mandatory reporting requirements on the adverse effects of their products. (50)


As the previous section details, the quality of privately sponsored research is often compromised by bias, yet environmental regulatory decisions nevertheless must depend upon it in setting protective standards. As a result, public health and environmental regulatory decisions based on private science could systematically underestimate the risks of a product or waste stream.

By contrast, publicly funded research, by virtue of its greater assurance of research independence, appears much less likely to be encumbered by systematic biases that affect research findings. (51) The diverse motives and backgrounds of the researchers doing public health research, which generally include scientists from consultant laboratories, EPA, and academia, further diminish the likelihood that there will be systematic biases that lean dramatically one way or another. This is borne out in empirical studies of research. (52) In fact, the "sound science" proponents fail to provide evidence of significant problems with publicly funded science used in public health regulation. (53)

Yet despite the higher probability of bias in private research relative to public research, most "sound science" laws and regulations focus peer review, external complaint processes, and other quality controls almost exclusively on public research or syntheses of research findings. (54) At the same time, they exempt a good portion of private research from their requirements. Private research is also exempted from public scrutiny through guarantees afforded "proprietary information" and "confidential business information" ("CBI"). (55) The laws and regulations, in other words, do precisely the opposite from what the underlying quality of the research would demand. They tend to insulate private research from scrutiny and focus attention on public research.

The ways that the quality of private research is under-regulated in relation to public research are detailed in this section.


A great deal of private science is classified and reviewed by only a few cleared government officials, despite the fact that open communication of research is a tenet of good science. (56) Most classification of private research is based on the protection of industry "trade secrets" and is intended primarily to protect proprietary formulas and manufacturing processes from use by competitors. (57) Current regulatory programs provide regulated parties with the option of classifying any information that they believe could be used by a competitor to their economic detriment. (58) As a result, manufacturers and polluters have been given wide latitude under at least FIFRA and TSCA to classify health and safety research that they believe can cause economic harm as confidential business information, often without specifying the nature of the trade secret concerns. Once the CBI claim is asserted by a regulated party, the claim of "trade secret" is generally considered valid (59) by the EPA until a party requests the information under the Freedom of Information Act ("FOIA"). (60) Health and safety studies (as well as most routine claims on the corresponding chemical identity of a toxic substance) are among the information classified by industry as CBI, (61) even though the laws expressly disfavor this classification. …
