Report on Research Compliance 19 no. 6 (June, 2022)
When Terry Magnuson resigned in April as research vice chancellor at the University of North Carolina (UNC) at Chapel Hill after admitting to three instances of plagiarism in the resubmission of a funding application, he blamed his “mistake” on being “over-extended” performing his academic duties as well as running a genetics lab.[1]
While Magnuson may be the most prominent investigator to get caught for this type of research misconduct—the other kinds are fabrication and falsification—his explanation wasn’t atypical, according to a new report by the National Science Foundation (NSF) Office of Inspector General (OIG) that reviewed 10 years of data. Nearly 30% of 137 individuals who were found to have committed plagiarism from 2007 to 2017 cited “time pressure” among their explanations, the Observations from NSF Plagiarism Investigations and Strategies to Prevent Plagiarism report shows.[2][3]
The extensive report does more than provide insights into the who and the why of plagiarism; it offers sometimes surprising demographics and characteristics of the perpetrators. Then, using that data, it shows a path forward with strategies to help prevent plagiarism among high- and low-profile investigators. Although based on NSF cases, the recommendations are applicable to other federal awarding agencies and investigators. The plagiarism by Magnuson, who resigned as vice chancellor and agreed to a three-year supervisory plan, was in an application to NIH, specifically the National Cancer Institute.
Aliza Sacknovitz, senior investigative scientist in OIG’s Research Integrity and Administrative Investigations Division and author of the report, spoke extensively to RRC about the findings and the agency’s goals for the report. She also answered a question that may have arisen after OIG’s most recent semiannual report (SAR) to Congress was issued: For OIG and NSF, at least, there is no acceptable level of plagiarism.
While fabrication and falsification in research arguably do more damage than plagiarism—though they may bring equal shame on an institution and an investigator (if identified)—plagiarism is far more common, and perhaps more amenable to training and other institutional interventions to prevent and thwart it.
“For this review, we examined 134 plagiarism cases involving 137 subjects against whom NSF made findings of research misconduct for plagiarism,” the new report states. The individuals “were affiliated with 106 unique institutions and their acts of plagiarism occurred in 320 NSF proposals.”
For comparison, a 2017 SAR by NSF OIG to Congress showed that, during the same 10-year period reviewed in the new report, NSF made 30 findings of fabrication/falsification and five that were characterized as “multi,” meaning “an allegation of plagiarism and either fabrication or falsification.”[4]
NSF makes a “single finding” of research misconduct (per individual), “even if we refer multiple allegations to NSF,” OIG explained in a footnote in its April 1, 2017-Sept. 30, 2017, SAR.
OIG chose to focus on plagiarism in this new report because of its higher incidence and the fact that the “larger data set more easily allows for finding commonalities between the cases and to arrive at more meaningful results,” Sacknovitz told RRC. Although the data presented run only through 2017, Sacknovitz said there is little reason to think the information is outdated or that “there have been any changes in anything that we reported.”
Producing the report “was labor intensive,” she added. “We had to get files from federal storage facilities. We had to clean the data because the data actually came from more than one database of ours. We had to code the data for the different fields that we were interested in and quality-check the coding. Then we had to do the data analysis and write the paper. We did this all while we also were maintaining our regular caseload. You add an event...outside of our control, like COVID, and you get a report with data that’s not as current as one might like.”
For those unfamiliar with how NSF OIG handles misconduct, under its regulations, “we generally conduct our own inquiries” of allegations and “refer the investigation to the subjects’ institutions, which serve as the NSF grantee[s], when our inquiries do not dispel the allegations,” Sacknovitz explained. “The institution, then, conducts its own independent investigation in line with its own policies and procedures. Upon receipt of an institution’s report, we review it for accuracy and completeness, and we decide whether to accept its conclusion. We can accept an institution’s report in whole or in part. We could request additional information or we can initiate our own independent investigation.”
‘Pattern of Plagiarism’ Common
Among the highlights in the report:
- “Of the 320 NSF proposals included in our review, 240 (75%) were declined, 57 (18%) were awarded, and 23 were withdrawn or returned without review.
- “118 subjects (86%) plagiarized from more than 1 source.
- “104 subjects (76%) plagiarized from papers or proceedings; 59 subjects (43%) plagiarized Internet sources; and 29 subjects (21%) plagiarized others’ proposals.
- “98 subjects (72%) copied fewer than 200 lines of text, and 73 subjects (53%) copied embedded references.
- “85 subjects (62%) exhibited a pattern of plagiarism, including 5 who committed additional acts of plagiarism while already under institutional or OIG investigation. Additionally, 83 subjects (61%) plagiarized with a knowing degree of intent, defined as having an awareness of their actions.”
- Similarly, they generally “plagiarized in multiple NSF proposals; 75 (55%) committed plagiarism in more than 1 NSF proposal, with 1 doing so in 11 proposals.”
- “50 subjects (37%) committed other acts of misconduct in addition to plagiarism, including data falsification, merit review violations, inappropriate use of NSF funds, and evidence fabrication.”
There may not be a “typical” plagiarist, but it is useful for institutions to know the characteristics common among those with plagiarism findings. NSF’s data show they tended to be “employed in junior academic positions, recent degree recipients, educated in non-U.S. institutions,” and as described before, “committed plagiarism in multiple NSF proposals.”
They were also serial submitters with poor acceptance rates. OIG records revealed that “56 subjects or 41% submitted 16 or more proposals. Almost every subject had at least one NSF proposal declined; 53 subjects or 39% had no award, 52 or 38% had one to five awards. There were 320 NSF proposals in total in our review, and of those, 240 or 75% were declined. Fifty-seven or 18% were awarded, and 23 or 7% were withdrawn or returned without review,” said Sacknovitz. For context, many submitters aren’t funded—NSF’s 2020 fiscal year proposal funding rate was 28%, Sacknovitz said.
Plagiarism May Result in Repayment
OIG reported that, for their part, institutions “accepted referral of research misconduct allegations involving 114 of the 137 subjects in our review,” while OIG itself conducted the investigation in 23 cases. In some instances, OIG said, the organization may not have a misconduct policy.
When an allegation emerges during peer review, this won’t necessarily derail that process. “If an allegation comes in to a program officer, and [they are] in the process of starting the review, the program officer is instructed to bring that allegation to our office and, in essence, forget that any allegation was received,” said Sacknovitz. “Allegations are unsubstantiated rumors. All decisions that program officers make need to be made solely based on NSF criteria. They should not be factoring in anything related to allegations because it is often the case that an allegation that’s received ends up being unsubstantiated. It should not be the case that a proposal is declined because of an allegation of plagiarism.”
On the other hand, “when awarded proposals contain plagiarism, we generally ask cognizant NSF program officers if the specific plagiarized material was so important to the proposals’ overall evaluations that they would not have recommended funding had they known about the plagiarism,” the report explained. “For 9 of the 57 awarded proposals in our review, the program officers said the specific plagiarized material was a critical component in their evaluations and funding recommendations.”
When this is the case, financial consequences may ensue: “NSF recovered funds in six of the cases involving these nine proposals. In the remaining cases, we did not recover funds because the grants were either terminated by the institution, transferred to another PI [principal investigator], or had already closed,” according to the report.
RRC is still awaiting the results of a Freedom of Information Act request filed in December to learn the names and details of institutions that returned funds to NSF during the previous fiscal year. NSF identifies neither investigator nor institution name when making a research misconduct finding, in contrast to the HHS Office of Research Integrity, which publishes both on its website and in the Federal Register.
OIG: ‘Not Bound by Institution’s Report’
Institutional officials who monitor misconduct cases and findings may recall a curious case in OIG’s most recent SAR, in which OIG said university officials “used plagiarism software to review the PI’s recent proposals and publications and found that the similarity indexes for the PI’s proposals exceeded the university’s threshold of 15 percent, while the similarity indexes for his publications were less than the threshold.”[5]
NSF made a finding of plagiarism against the investigator, who later quit rather than be fired. The reference to the threshold was “very interesting,” said Sacknovitz, who was not involved in this case.
But she made it very clear that “we do not consider any level of plagiarism acceptable,” and noted that “we’re not bound by the institution’s report, and we would not apply an institution’s standard to our assessment of the matter.”
After its investigation is complete, OIG forwards recommendations for sanctions to NSF, which has the final say on their imposition.
Among the 137 individuals in this 10-year sample of plagiarism findings, 25 (18%) were debarred, 66 (48%) were prohibited from advising or reviewing for NSF, 99 (72%) were required to submit assurances to prove the integrity of their data, and 128 (93%) were required to provide certifications to NSF. All were required to take training in the responsible conduct of research (RCR).
As noted, institutions must follow NSF’s requirements regarding a finding, but they may also impose sanctions of their own. For example, “of the 114 subjects investigated, 60 were required to take RCR training; 14 were required to teach RCR courses; and 7 were required to conduct RCR training with their students. One subject was required to do all three,” according to the report.
Other institutional actions ranged from termination and salary reductions to bans on submitting future funding applications. The report does not indicate how often these actions were taken, but notes that “in 19 cases (14%), subjects left their institutions due to research misconduct investigations or findings.”
Institutions: Perform a Self-Evaluation
To address the weaknesses in applications and resulting lack of funding, OIG suggested that institutions offer “a proposal writing course for inexperienced grant writers that teaches both successful proposal writing skills and research integrity, and require that those new to proposal writing enroll.” They also could create a “mentorship program, in which successful grant recipients serve as mentors to those less experienced,” and ensure that plagiarism detection software is available and used “before submitting internal or external documents.”
A small number of institutions in the 10-year sample also took institution-wide actions in the wake of a plagiarism finding, information that isn’t typically made public via OIG’s SAR but is provided to OIG in investigative reports, Sacknovitz told RRC. Just 22 (16%) did so.
“Most institution-wide actions were related to creating, reviewing and/or modifying RCR policies or training, or purchasing, reviewing, or requiring use of plagiarism detection software by students and/or faculty. Other actions included developing document review procedures and reviewing RCR training records,” according to the report.
“We definitely wish the number were higher” than 16%, Sacknovitz said. “One would hope that all institutions would make some changes after a case of research misconduct, but I really don’t want to criticize those institutions that didn’t” make changes, as “it’s possible that some of the institutions that didn’t make changes already had in place some of the safeguards that other institutions first added after an allegation arose.”
She added that, “rather than criticizing, what I do hope happens is that institutional officials will read this report and think about what changes they now need toward plagiarism prevention. I’d like to see institutions develop and implement strategies and to track what impact those strategies have had, and publish the results and contribute to a communitywide effort to better prevent plagiarism.”
1 Theresa Defino, “Vice Chancellor Blames Pressure for Plagiarism; Resigns After Faculty Pushed for Accountability,” Report on Research Compliance 19, no. 4 (April 2022), https://bit.ly/3sQD0rn.
2 National Science Foundation Office of Inspector General, Observations from NSF Plagiarism Investigations and Strategies to Prevent Plagiarism, OIG I-18-0002-PR, March 4, 2022, https://bit.ly/3NuVaHl.
3 “Reasons Investigators Gave NSF OIG For Plagiarism,* FY 2007-2017,” Report on Research Compliance 19, no. 6 (June 2022).
4 National Science Foundation Office of Inspector General, Semiannual Report to Congress: April 1, 2017 - September 30, 2017, NSF-OIG-SAR-57, accessed May 23, 2022, https://bit.ly/3wMRmvD.
5 Theresa Defino, “Misconduct Tales: ‘Similarity Index,’ Years of Faked Data, Plagiarized Proposals, No Quotes,” Report on Research Compliance 19, no. 1 (January 2022), https://bit.ly/3NxhWxZ.