When scientific evidence is in play, it is often assumed that the most frequently cited articles are published in the most elite journals. However, this is not necessarily true. A journal is ranked according to its impact factor, a quantitative measure of how often the articles it recently published are cited in a given year. Junk science can be found in highly regarded peer-reviewed journals as well as lesser-known journals. The intent of this post, the second in our series on junk science, is to draw attention to the growing problem of flawed scientific articles being published and to highlight potential red flags to look for when reading and relying on published studies. The number of retractions issued for scientific articles continues to rise each year; more than 10,000 research papers were retracted in 2023. Unfortunately, in some cases, problematic studies are not retracted for many years and may be cited or referenced extensively by other authors in the meantime. Therefore, any scientific study, regardless of the journal in which it is published, should be carefully evaluated to assess its validity. This is especially true when studies are selected as reliance material for expert testimony in litigation. The Federal Rules of Evidence and Daubert require that expert testimony be based on sufficient facts or data and be the product of reliable principles and methods. Expert witnesses are also required to express opinions that reflect “a reliable application of the principles and methods to the facts of the case.”
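For reference, the most widely used version of this metric, the two-year impact factor reported in Clarivate's Journal Citation Reports, is calculated as:

$$\text{Impact factor for year } Y = \frac{\text{citations received in year } Y \text{ by items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}$$

In other words, the impact factor is an average citation rate for the journal as a whole over a two-year window; it says nothing about the quality or validity of any individual article published in that journal.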
Although unintentional mistakes can occur when scientific data is reported, junk science may also be the result of unethical behavior. Some of the more commonly reported causes of junk science include improper analyses, intentional or unintentional biases, and unethical practices such as image duplication or manipulation. When evaluating a scientific study as potential reliance material, the reader must closely examine the study design, i.e., the methods and analytical tools, the interpretation of the results, the conclusions, and the authors' credentials and any potential conflicts of interest. Below are five key points to keep in mind.
- All methods in scientific studies have strengths and weaknesses. In epidemiological studies, it is critical that an adequate sample size be used to represent the population and to reliably determine the presence or absence of a particular effect (see the worked example after this list). Every study should include a proper control population for comparison against the test population. It is also important to take into account whether the control and test groups were limited to certain populations, e.g., particular age, racial, or ethnic groups.
- Data may be analyzed and interpreted in different ways. For example, investigators may select only certain data points or datasets while ignoring data that contradict their hypothesis. The reader should confirm that the results reported in the text of a paper match the data shown in figures, tables, charts, and supplemental files, and that there are no signs of image manipulation, e.g., an image and its mirror image presented as separate data. The reader should also watch for misrepresentations of the data in the conclusions and consider whether a causal effect can actually be determined from the data, i.e., does the study clearly demonstrate that A caused B, or could it be that B caused A?
- Any “tortured phrases,” i.e., strange wording substituted for well-established terminology (e.g., “counterfeit consciousness” in place of “artificial intelligence”), or potential indicators of artificial intelligence use in the writing, e.g., excessive repetition of words, odd sentences, or unrealistic scenarios, should be considered when assessing the credibility of a paper.
- Statements that rest on cited sources should be verified against those sources, and the references themselves should be confirmed to exist and to support the claims for which they are cited, before being relied upon.
- A key step in evaluating potential bias is to assess conflicts of interest, i.e., the funding sources for the study and any affiliations of the author(s) with organizations or companies that may have influenced the interpretation of the data. Previously published literature and presentations by the author(s) should also be considered. A prior retraction of an author's work is a red flag.
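As a rough illustration of the sample size point above, a commonly used normal-approximation formula for the number of subjects needed per group to detect a difference in means between two groups is:

$$n \approx \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\delta^{2}}$$

where $\sigma$ is the standard deviation of the outcome and $\delta$ is the smallest difference the study is designed to detect. For example, with a two-sided significance level of 0.05 ($z_{1-\alpha/2} = 1.96$), 80% power ($z_{1-\beta} = 0.84$), and a target difference of half a standard deviation ($\delta = 0.5\sigma$), the formula gives $n \approx 2(1.96 + 0.84)^{2}/0.25 \approx 63$ subjects per group. A study claiming to detect a modest effect with far fewer subjects than such a calculation would suggest deserves closer scrutiny.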
Prevention is better than cure. By carefully evaluating the scientific literature and selecting only reliable studies, attorneys and expert witnesses can avoid having their reliance material excluded by the court. Keeping junk science out of the courtroom reduces wasted time and resources and helps ensure that verdicts are based on the best available evidence.
This article was authored by Gloria Malpass, Ph.D. (Research Consultant) and Suzanne Pierberg (Research Analyst).