Perils of the Pedestal


Just as I’m sure was the case for many of my fellow scholars in the field of organizational behavior, I was saddened to hear about the allegations of research misconduct leveled at Francesca Gino, a prolific and award-winning scholar (on the topic of ethics, of all things), by her employer, Harvard Business School.

HBS was first tipped off in 2021 about the possibility of falsified data in four of Gino’s published works by bloggers at Data Colada, a site run by academics who assess behavioral research for data fraud and manipulation. HBS then formed a committee to investigate the allegations with the help of an independent forensics firm. The committee appears to have found enough incriminating evidence to place Gino on unpaid leave, take away her health benefits, and remove her endowed title. The Dean of HBS is now seeking revocation of Gino’s tenure.

Gino maintains her innocence and has filed a $25 million lawsuit against HBS and the Data Colada bloggers. She is alleging gender discrimination, arguing that HBS created a new policy just to investigate the allegations against her, specifically, and then did not follow the requisite steps of gathering input from faculty members and putting the new policy to a faculty vote prior to implementation.

For more background on the allegations and the timeline related to this situation, readers can refer to this New York Times article as well as this website Gino created to defend herself.

The nature of Gino’s claim of gender discrimination reminds me of research showing that ethical failures in organizations draw more negative reactions when the organization is led by a woman rather than a man. The same research found the opposite effect for failures of competence: observers reacted more negatively when the organization was led by a man rather than a woman. This occurs because men are supposed to be competent, according to gender stereotypes, which makes their failings in this arena unacceptable. Meanwhile, women tend to be placed on a pedestal in terms of their presumed warmth and goodness, and so their ethical failings are all the more distressing. Benevolent sexism, it turns out, is not so benevolent.

In claiming gender discrimination, Gino seems to be suggesting that the same alleged misdeeds would not have resulted in as punitive a response as what she received had they been leveled at a male HBS faculty member. The veracity of such a claim will ultimately be up to the court to decide.

Gino further alleges that she received disparate treatment from the bloggers at Data Colada. Gino’s legal team will likely want to know how and why the Data Colada bloggers decide to investigate certain research articles and not others. According to a Wall Street Journal article, the bloggers “use tips, number crunching and gut instincts to uncover deception.” Could this approach present a problem for them if Gino’s legal team uncovers evidence that it has led to disproportionate or more vigorous scrutiny of the scholarship led by women relative to men?

Systemic Problems in Academic Publishing

Gender aside, what’s especially disappointing about this whole affair is that there is a means by which most instances of data fraud and manipulation, which I believe are caused primarily by systemic forces rather than bad apples, could be prevented altogether. Scholars face enormous pressure to publish or perish, and the publishing process favors research that is novel and uncovers effects that are “statistically significant” — a determination based on arbitrary standards. In what is known as the “file drawer problem”, studies that find non-significant or explanation-defying effects often never even see the light of day.

So, many researchers naturally adapt to this incentive structure. They might conduct the same study multiple times, tweaking it until they achieve statistical significance in favor of their hypotheses. Or they will include numerous outcome variables in their methods, but report the findings only for those on which they found a statistically significant effect of their treatment. These specific practices were tacitly accepted until the replication crisis hit. In the most egregious type of data manipulation, which has always been disavowed, a researcher edits their raw data in ways that help them find support for their hypotheses. This is what Gino is accused of doing.
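The second practice above, reporting only the outcomes that "worked," is a multiple-testing problem, and its effect on false positives is easy to demonstrate. Below is a minimal simulation sketch in Python (the function names and parameters are illustrative, not from any cited study): every simulated effect is truly zero, yet a researcher who measures ten independent outcomes and reports whichever crosses p < .05 will find a spurious "significant" result in roughly four studies out of ten, not one in twenty.

```python
import math
import random

def two_sided_p(sample):
    """Two-sided z-test p-value for H0: population mean == 0.

    Assumes unit variance, which holds here because we simulate N(0, 1) data.
    """
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)               # sample mean / its standard error
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

def false_positive_rates(n_studies=2000, n_outcomes=10, n_obs=30, seed=1):
    """Simulate studies in which every true effect is zero.

    Returns (rate when one pre-chosen outcome is tested,
             rate when the 'best' of n_outcomes is reported).
    """
    rng = random.Random(seed)
    honest_hits = 0  # the single pre-chosen outcome crosses p < .05
    cherry_hits = 0  # at least one of the n_outcomes crosses p < .05
    for _ in range(n_studies):
        p_values = [
            two_sided_p([rng.gauss(0, 1) for _ in range(n_obs)])
            for _ in range(n_outcomes)
        ]
        honest_hits += p_values[0] < 0.05
        cherry_hits += min(p_values) < 0.05
    return honest_hits / n_studies, cherry_hits / n_studies

honest, cherry = false_positive_rates()
print(f"one pre-registered outcome: {honest:.1%} false positives")
print(f"best of 10 outcomes:        {cherry:.1%} false positives")
```

Under the null, each p-value is uniformly distributed, so the chance that at least one of ten clears .05 is 1 − 0.95¹⁰ ≈ 0.40. The same arithmetic drives "run the study again until it works": each re-run is another draw at the 5% lottery.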

Ongoing efforts to prevent these problems include making raw data public, and requiring the pre-registration of studies, which involves publicly posting a detailed write-up of a research project’s hypotheses, methods, and analytic plan before any data is collected. But both of these processes can be gamed, and they don’t address the problem of there being zero incentives to try to publish those non-significant or unexplainable findings that the review teams at top-ranked journals simply have no interest in.

The best method for preventing research misconduct would involve scholars submitting their research questions, hypotheses, and planned methods and analytic techniques to a journal before collecting any data, a model known in publishing as the Registered Report. If the journal’s review team agrees that the research question is important, the hypotheses make logical sense, and the proposed methods and analytic procedures are appropriate and rigorous, then they commit in advance to publish whatever findings the research team uncovers when they collect the data. And the study can still be pre-registered in advance of data collection.

Why is this method not already in place at every journal? Because the editors and reviewers at behavioral research journals — especially top journals — prefer findings that tell a tidy, consistent, and interesting story about human behavior. Unfortunately, the truth, like people, is messy and complicated. The sooner we accept this, the better off behavioral science will be.
