The Inability of Jurors to Self-Diagnose Bias

By Christopher T. Robertson, David Yokum, & Matt Palmer
An adapted extract from the full article, originally published in 96 Denver Law Review 869 (2019).

Whether criminal or civil, litigants are guaranteed the right to an impartial jury. The voir dire process engages potential jurors in a colloquy designed to screen the panel and remove unduly biased persons. When a juror indicates some feeling or opinion about the litigation, attorneys, facts, or law of the case, the standard practice is to ask the juror whether he or she can set aside that feeling or opinion and apply the law as instructed. In short: “Can you be fair and impartial?” This is sometimes called “the magic question,” because when a juror reports that he or she can be fair and impartial, it is highly unlikely that a litigant’s challenge for cause will succeed. We sought to explore whether this colloquy in fact provides useful information to the judge trying to impanel an impartial jury.

Psychology Behind Self-Diagnosing Bias

A wide variety of biases and prejudices—from racial animus against the plaintiff to professional affiliations with the defendant—might afflict jurors. For any of these biases, the self-diagnosis task that the court asks of jurors can be usefully analyzed as part of a more general process that psychologists call “mental contamination.” Scholars have explained that, to succeed in eradicating such contamination, a person must: (1) be aware that mental contamination exists; (2) be motivated to correct the bias; (3) be aware of the direction and magnitude of the bias; and (4) be able to adjust his or her response.

Consider this process when, for example, a potential juror has been exposed to pretrial publicity (“PTP”) that portrays the defendant negatively based on facts that would be inadmissible at trial. To suppress the biasing influence of this information, the potential juror must first realize that the exposure to pretrial publicity has affected his or her judgment, even though a large body of psychological research shows that people are often unaware that a bias exists. A step-two failure of motivation would occur if the juror were indifferent about his or her role as a juror, or simply disagreed with the judicial instructions that the impact of pretrial publicity constitutes a bias worthy of eradication. Even a juror who is aware and motivated is highly unlikely to have insight into how much bias the PTP exposure caused, a failure of step three; people usually undercompensate or overcompensate for a perceived bias. A failure at the final (fourth) step entails an inability to correct for a bias that is precisely known and unwanted; that is, psychological research suggests some bells cannot be unrung. Research on the “bias blind spot,” however, indicates that, in general, people are better at diagnosing bias in others than in themselves.

Ultimately, the magic question asks the juror to predict and then accurately report his or her likelihood of accomplishing those tasks. That sort of diagnosis is hard enough in general, and it is harder still when the bias is one’s own. Even a juror who accurately self-diagnoses that he or she cannot overcome a lingering bias might nonetheless publicly insist that he or she will act impartially, a phenomenon known as “social desirability” bias. The juror adheres to the social norms of being a good and fair person, that is, of upholding a civic responsibility as any decent citizen would.

Some attorneys or judges may even exploit this effect, “rehabilitating” or browbeating jurors into saying they can be fair, even though this further undermines the real diagnosticity of the colloquy. Finally, despite an accurate self-diagnosis of bias, the juror may earnestly hope that he or she can set aside the lingering bias and serve as a fair and impartial juror.

Our Experiments

To test the assumption that juror self-diagnosis is accurate—an empirical claim—we ask whether a juror, whose impartiality has been questioned due to exposure to a potentially biasing factor, can nonetheless decide the case the same way he or she would have without that exposure. We also test whether other-diagnosis, unlike self-diagnosis, might provide a viable voir dire tool, given that people diagnose bias in others better than in themselves.

We implemented this analytic strategy empirically by way of a between-subjects mock jury design on a civil case with two steps: one experimental and one statistical. First, we deliberately biased half of the subjects through exposure to pretrial publicity: subjects in the “control” group read an irrelevant article prior to trial, and subjects in the “treatment” group read an inadmissible article that portrayed the defendant very negatively. Second, we simulated voir dire screening by asking subjects a series of questions, drawn directly from the U.S. Supreme Court’s guidance, about whether they could be impartial. We then removed from the data set subjects who conceded bias. Those who insisted they were fair and impartial—those who “pass” the magic question screen—are deemed fit for jury duty. If jurors are able to self-diagnose accurately, then the effect of statistically removing those jurors who conceded bias would be to remove the biased verdicts.

Subjects then watched a thirty-two-minute medical malpractice trial video, which included all the key elements of the case: a judge’s welcome, opening statements, direct and cross-examination of two experts, closing statements, and realistic jury instructions from the judge. After watching the trial, subjects rendered individual judgments, responding “yes” or “no” to whether the plaintiff had proved by the greater weight of the evidence that the defendant committed medical negligence. Finally, subjects who found negligence also awarded noneconomic damages for pain and suffering.
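To make the two-step logic concrete, here is a minimal sketch, in Python, of the screening analysis run on simulated data. Every number and name in it is an illustrative assumption rather than a figure from the study: the liability probabilities, the ten-percent rate of conceding bias, and the helper functions are invented for the example. The point is only to show the analytic move itself: if self-diagnosis were accurate, restricting the analysis to subjects who “pass” the magic question should eliminate the treatment-versus-control difference in verdicts.

```python
import random

random.seed(0)

# Hypothetical simulation of the two-step design (illustrative numbers only,
# not results or parameters from the actual study).
def simulate_subject(treated: bool) -> dict:
    # Step 1 (experimental): "treated" subjects read prejudicial pretrial
    # publicity; assume it raises the probability of a plaintiff verdict.
    p_liability = 0.60 if treated else 0.45  # assumed effect of PTP
    # Voir dire screen: assume a small minority concede bias, modeled here
    # as independent of exposure and of the eventual verdict.
    concedes_bias = random.random() < 0.10
    verdict_for_plaintiff = random.random() < p_liability
    return {"treated": treated,
            "concedes_bias": concedes_bias,
            "verdict": verdict_for_plaintiff}

subjects = [simulate_subject(treated=(i % 2 == 0)) for i in range(10_000)]

def liability_rate(pool, treated):
    group = [s for s in pool if s["treated"] == treated]
    return sum(s["verdict"] for s in group) / len(group)

# Step 2 (statistical): drop subjects who conceded bias, i.e. keep only
# those who "pass" the magic question, then re-compare the two groups.
passers = [s for s in subjects if not s["concedes_bias"]]

for label, pool in [("all subjects", subjects), ("passers only", passers)]:
    gap = liability_rate(pool, True) - liability_rate(pool, False)
    print(f"{label}: treatment-minus-control liability gap = {gap:.3f}")
# If self-diagnosis were accurate, the gap among passers would shrink
# toward zero; under the independence assumption above, it does not.
```

Because this sketch assumes that conceding bias is unrelated to how a subject later votes, the treatment-versus-control gap survives the screen; whether real jurors behave that way is precisely the empirical question the experiments test.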

Later we ran additional experiments to test whether asking jurors to diagnose bias in other jurors (rather than in themselves) might improve the accuracy of voir dire, using the same stimuli and analytic approach as in the experiments on self-diagnosis. The only difference was that we added an extra experimental variable: after subjects were exposed to pretrial publicity, they were asked whether other jurors, if exposed to the same circumstances as the subject, would be biased. Specifically: (1) “I understand that you will do your best to be fair and impartial, but do you think that there is a significant risk that other jurors, exposed to the article you read, would be prevented from impartially considering the evidence at trial?”; and (2) “Do you think those other jurors, exposed to the article you read, would base their verdict only on the evidence at trial?”

Our Results

Exposure to prejudicial pretrial publicity did, as expected, significantly bias jurors: it increased verdicts against the defendant and increased damages for pain and suffering, compared to jurors in the control condition.

This finding set the stage for our real inquiry, which looked back at the pretrial voir dire screening questions. The vast majority of subjects denied bias and instead expressed certainty that they would be able to impartially consider only the evidence presented at trial.

Notably, subjects were equally likely to admit bias (or not) regardless of whether they read the irrelevant or the prejudicial PTP, suggesting that the jurors’ responses were not tracking the prejudicial nature of the underlying exposure. Even worse, the responses did not track the future performance of the jurors themselves. Subjects who admitted that they were likely to be biased were in fact just as likely to impose liability after seeing the trial as those who denied bias. The failure of the magic question colloquy was also apparent in the pain and suffering awards, which increased dramatically from the control condition to the treatment condition even after screening out those who confessed bias.

In our subsequent experiments testing the “other-diagnosis” colloquy, the mock jurors were very willing to concede that another person would be biased by the negative pretrial publicity they had read, while simultaneously insisting that they themselves, in contrast, would not be so affected. This result is promising because it suggests that other-diagnosis overcomes some of the social-desirability biases that degrade diagnosticity. Whereas self-diagnosis does not screen out enough biased jurors, other-diagnosis seemingly screened out too many, perhaps to the point of being unworkable for court dockets. As one part of the colloquy, however, litigants and judges may want to use this approach to improve the rate of disqualification.

Much more work, however, needs to be done to determine whether “other-diagnosis” actually solves the problem. Even in the unexposed control condition, subjects were more likely to attribute possible bias to another person than to themselves, and we were unable to confirm that other-diagnosis actually improved diagnosticity.

Discussion

Our study had several limitations, which we explore in the full article. Two can be emphasized here: First, our study did not test whether and to what extent jurors are biased from sources other than pretrial publicity, and whether they may be better able to self-diagnose those biases. But we have no reason to expect different outcomes in other domains. Second, it is possible that real jurors in real courthouses are somehow better able to diagnose their own biases (or they may be even more subject to social-desirability biases, which undermine diagnosticity).

The courts of appeal generally say that they will defer to trial court determinations as to whether a juror can be impartial, as long as those determinations are based on “substantial evidence.” This study has shown that the jurors’ self-diagnoses of bias do not provide any evidence regarding their actual impartiality.

We believe that trial courts should not rely upon such unreliable answers. One alternative may be simply to exclude jurors with prima facie evidence of potential bias: if they saw the prejudicial news item, have previously suffered this sort of injury, or know a litigant, just disqualify them. Some judges now adopt the philosophy that “it is not whether you can be a good juror; the question is whether this is the right trial for you.”

In sum, our study undermines a longstanding and ubiquitous practice of the state and federal courts. Although further research is warranted, it is now fair to put the burden on those who rely upon this particular method of diagnosing bias to show that such reliance is reasonable. One can no longer simply assume that it is.

The full paper can be found here.

***

Christopher Robertson, JD, PhD is a professor and associate dean at the University of Arizona, and a co-founder of Hugo Analytics, a firm that provides scientific litigation insights. He has a 2019 book coming out: Exposed: Why Our Health Insurance Is Incomplete and What Can Be Done About It.

David Yokum, JD, PhD is Director of The Policy Lab at Brown University, where he leads a wide portfolio of work leveraging scientific methods to improve public policy and operations. He is also a co-founder of Hugo Analytics, and a leader of its JuryMeter project.

Matt Palmer, JD, MA leads Palmer Law Firm, PLC in Phoenix, Arizona.

Editor’s Note: This article was previously published in our December newsletter.
