Description
Authors should be open about the limitations of their work and not overstate its importance.
This collective preprint is an active document intended to encourage reflection on academic writing. It is meant to evolve as a result of continuous input from interested contributors. Everyone who wants to contribute is welcome.
Please cite as: Corneille, Olivier, Beffara, Brice, Carroll, Harriet, Havemann, Jo, Henderson, Emma L., Holmes, Nicholas P., Hussey, Ian, IJzerman, Hans, Lotter, Leon D., Lush, Peter, Orban de Xivry, Jean-Jacques, Outa, Nicholas, & Pilacinski, Artur. (2022). Reflecting on the use of persuasive communication devices in academic writing (2.0). Zenodo. https://doi.org/10.5281/zenodo.6375871
>> https://docs.google.com/document/d/1M6EvloY8Hz5v1gHnX6ibNunalHaADDhDBJvUBV_P1f0/edit#
If science seeks to bring us closer to truth, scientific communication should be characterized by a high level of transparency, precision, and sincerity. However, scientific communication also involves persuading the readership - including editors and reviewers - that one’s research is worthwhile (e.g., is innovative, strong, and consequential). The latter goal may imply the use of persuasive tools that are at risk of misleading readers and reviewers in their assessment of our research, which we believe should be avoided.
In this document, we identify a list of such communication devices. We discuss and cluster them as a result of reflections made on our own writing style, as well as observations made in research articles by other authors. The items are organized along a tentative typology that may be reconsidered at a later stage. We focus on writing styles that apply to the presentation and interpretation of research findings, including data visualization, but generally excluding issues related to methods and statistical analyses.
Our intention with this document is to recognize how difficult it is to convey one’s data effectively and accurately, while at the same time encouraging self-reflection amongst authors (contributing researchers) as well as reviewers and editors on the use and potential misuse of persuasive communication devices in written scholarly reports, so that we as a global scholarly community can uphold the highest possible standards of research rigor. We want to emphasize that we do not imply that authors use the below-described communication tools in order to purposefully obscure bad research. Yet, we find it useful to raise awareness of habits that may lead to misinterpretation of research results, both within and outside our scientific community.
Please feel free to make suggestions in THIS LIVE DOCUMENT.
Ignorance: Ignoring previous work that would decrease the perceived novelty of the research.
Recommendation: Ignorance may be willful or honest. Both may be avoided by conducting a comprehensive literature search and by discussing relevant work thoroughly. We encourage authors to adopt a meta-analytic mindset, conducting systematic searches with comprehensive search terms and using tools such as PRISMA flow diagrams, the Rayyan QCRI app for systematic reviews, and Zotero to keep a record of the reviewed literature, and/or resources like connectedpapers.com for comprehensive searches. Do not hesitate to contact a librarian from your institution if necessary.
One-sided citation: Citing predominantly or exclusively supportive research, to make the research appear stronger than it is, or to prevent the selection of critical reviewers.
Recommendation: Actively seek out research that challenges or contradicts your claims, including checking for replication attempts. Request feedback from colleagues who may have a broader knowledge of the literature or support competing theoretical accounts. Consider engaging in adversarial collaborations. Submit articles to journals with high TOP Factor scores, which are more committed to transparent research practices, and/or submit using the Registered Report format for pre-study peer review.
Reliance on weak evidence: Referring to research that has received a lot of attention, yet has since proven to be weak or wrong (e.g., lack of successful replication; experimental confounds or important moderators identified; alternative accounts supported; underpowered original studies; or even retraction).
Recommendation: Instead of relying on widespread usage, read the primary work and the work that has nuanced it, make up your own mind, and discuss the work in good faith. Review the strength of the evidence and clearly describe its limitations in your review. Remind yourself of the risks of “social proof”: just because articles and entire lines of research have attracted a lot of attention does not guarantee they provide higher-quality evidence (whether in terms of conceptual background, replicability, generalizability, et cetera).
Misleading use of references: Citing papers in a way that does not fit the original reporting.
Recommendation: Read the papers you cite and make sure not to misrepresent them. Do not rely on how others have reported the findings.
Missing evidence: Providing no reference to, or access to, the primary evidence underlying the claims made in the article.
Recommendation: Make claims that are warranted by past research and provide references for them. If you cannot find supporting evidence, make sure the claim is cautiously stated.
Catchy titles: Using attention-grabbing titles that go beyond - and sometimes even contradict - the study results. The same risk applies to the abstract and the main body of an article.
Recommendation: State the study objective and results with sincerity and accuracy. The title may comprise two parts: a short one that catches attention; a second one that provides an accurate description of the research under consideration. Consider including information like sample size and whether the study was pre-registered or not.
Exceeding discussion: Drawing conclusions in the general discussion that go well beyond the scope of the reported work.
Recommendation: Going beyond the research report is welcome. This may include the discussion of avenues for future research or the implications for public policy. However, one should remain cautious in discussing study results, and avoid pretending the study delivers more than it does. Special attention should be paid in the discussion - and ideally in the title and abstract - to observed or theoretical moderators of the effect (including the samples used and the nature of the testing conditions). Consider including a “constraints on generality” statement (Simons, Shoda, & Lindsay, 2017) in your discussion section (and in the title if possible).
Coaxing: Coaxing the narrative with suggestive adjectives (e.g., describing something as striking or remarkable without clear justification for it).
Recommendation: Such adjectives may be used, but with moderation. Writing should remain generally technical rather than appeal to emotions.
Selective reporting: Dropping hypotheses or analyses based on the nature and direction of the results.
Recommendation: Preregister the study. Add a full disclosure statement in the manuscript that confirms all measures collected were reported. Follow reporting guidelines to ensure complete, transparent, and accurate reporting. Even preferable to preregistration is the publication of the report as a Registered Report, where reviewers agree on the method with the authors before data collection, and where the decision to publish is taken before the study is conducted and is therefore results-agnostic. If you are not using selective reporting, let the reader know and use the 21-word solution (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2160588): “We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study.”
Creating “clean” narratives: Hypothesizing after results are known (HARKing; Kerr, 1998) while presenting the study results as predicted. In addition, it is typically difficult to know whether an analysis is exploratory or hypothesis-driven. This ambiguity creates confusion as to whether the reported result should later be confirmed (because exploratory research aims at generating new, to-be-tested hypotheses) or whether the result stems from a specific framework that is being tested. While this distinction is often debated (see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8640268/), it is useful to know which parts of the results are hypothesis-driven (confirmatory) and which are data-driven (exploratory).
Recommendation: Same as above. Clearly label results as exploratory or hypothesis-driven.
Hang heavy (or “emotional appeal”): Appealing to the importance of one’s research question and the need to “talk more about it” to compensate for the empirical weakness of a study.
Recommendation: Research can only be as good as the methods it relies on. Make sure to stress the limitations of the studies (e.g., did you only measure the effect in a single scenario or a very limited population?). Avoid emotional appeals. Favor more neutral or technical writing styles instead.
Overgeneralization in title and/or abstract: This phenomenon is widespread: authors generalize beyond their studied population without sufficient evidence for their claim, which is particularly prevalent when the studied humans are adults from WEIRD countries, often US college students (e.g., Cheon et al., 2020). Similarly, some authors omit to mention the studied animal species in the title/abstract, as if the reported finding directly generalizes to humans. A title such as “evidence for sensory hypersensitivity in autism” makes readers think that the finding applies to humans and does not mention that the study was performed in an animal model of the disease. Overgeneralization may also apply to the procedures, materials, or testing conditions used.
Recommendation: Specify your sample in your title and your abstract. Consider including in your “constraints on generality” statement (Simons et al., 2017) an identification and justification of your target population, while indicating the boundaries of the effect and/or clarifying where you are overstepping those boundaries by predicting out-of-sample to other populations and/or measures. Mention the studied population in the title and abstract. For instance, “evidence for sensory hypersensitivity in autism” should be replaced by “evidence for sensory hypersensitivity in a mouse model of autism”. Use a constraints on generality statement (https://journals.sagepub.com/doi/10.1177/1745691617708630).
The fallacy of the Law of Small Numbers: This pervasive problem arises when scientists claim to provide strong evidence for an effect based on a small sample size, consistent with the idea that if you can demonstrate an effect with a small sample, it should hold with a larger sample. Yet, this is incorrect. This fallacy is often referred to as the Law of Small Numbers, whereby people believe that small samples reflect the population from which they are drawn. With small samples, however, a little bit of noise can have a very large effect on the statistics of the sample. Significant effects found with small samples are therefore often exaggerated, because they must have very large effect sizes to become significant given the limited power of the experiment (effect-size inflation).
Recommendation: Each scientist should learn about the Law of Small Numbers and refrain from drawing strong conclusions based on an experiment with a small sample size and one significant p-value. The discussion should reflect the uncertainty about whether the effect will replicate.
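To make effect-size inflation concrete, here is a minimal simulation sketch (not part of the original document; it assumes Python with numpy and scipy installed, and the true effect size, sample sizes, and function name are illustrative choices, not values from any cited study). It draws many samples from a population with a modest true effect and averages the observed effect sizes of only those samples that reached p < .05.

```python
# Illustrative sketch of effect-size inflation (assumed example values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
true_d = 0.3            # modest true standardized effect in the population
n_sims = 5000

def mean_significant_d(n):
    """Simulate one-sample t-tests and return the average observed Cohen's d
    of only those samples that reached p < .05 in the expected direction."""
    observed = []
    for _ in range(n_sims):
        sample = rng.normal(loc=true_d, scale=1.0, size=n)
        t, p = stats.ttest_1samp(sample, 0.0)
        if p < 0.05 and t > 0:
            observed.append(sample.mean() / sample.std(ddof=1))
    return float(np.mean(observed))

print("Average 'significant' d with n = 15: ", mean_significant_d(15))
print("Average 'significant' d with n = 200:", mean_significant_d(200))
# With n = 15, only samples whose effect happens to look large reach
# significance, so the average significant d clearly overestimates the
# true d of 0.3; with n = 200 the estimate is close to the true value.
```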
Inconsistent claims: Making logically inconsistent claims across, and sometimes even within, papers, so as to please any reader and deflect later critiques.
Recommendation: Remind yourself that doing good science implies the risk of being refuted. Inconsistency or vagueness in claims prevents that goal. Registered Reports or adversarial collaborations can reduce the chance of making inconsistent claims across one’s papers.
Selective quotation: Selectively quoting, or quoting out of context, another author to make one’s point.
Recommendation: Read papers you cite in their entirety whenever you quote them, so that you are confident you are not mischaracterizing the original authors’ intentions.
Strawman argument: Pretending to refute claims that no one has ever made, or comparing the proposed model to another model that nobody believes in order to show its superiority. This device weakens the perceived competence of scientific opponents by implying they are poor reasoners or reason in bad faith (Aikin & Casey, 2022).
Recommendation: Contact your “opposing” authors in good faith and have your claims double-checked. Recommend reviewers that are likely to oppose you; do so with journals that engage in transparent peer review.
Self-inflating and obscure (sometimes coined “Bullshit”) writing: Making the reader feel humbled or in awe by relying on cryptic terminology, numerous abbreviations, or writing that sounds “smart” (see research on academic bullshit, e.g., Smagorinsky et al., 2010).
Recommendation: Keep the writing clear and refutable. When writing and rewriting, take Einstein’s advice and aim to state the idea as simply as possible, “and no simpler”. Rewrite your sentences, cut words where necessary, and make your language as simple as you possibly can. Avoid the use of abbreviations as much as possible, as these increase the mental workload for the reader. Consider, for instance, this example from Garner’s Modern American Usage: “One of the most important forms mentioned in the rescript is the unification of the organization of judicial institutions and the guarantee for all the tribunals of the independence necessary for securing to all classes of the community equality before the law”, which can be rewritten as “Among the most important reforms is to unify the courts so as to guarantee their independence and the equality of all people before the law”.
Pragmatic inferences: Capitalizing on communication pragmatics to elicit flawed inferences. For instance, introducing an article with an outstanding research question that is actually not addressed in the research.
Recommendation: Pay attention to the risk that readers draw undue inferences from your writing.
Delayed limitations: Postponing to the limitation section major issues that would have justified not doing the study in the first place (e.g., “Admittedly, important concerns have been raised about the validity of our main measure”).
Recommendation: Carefully consider (before carrying out the study) and describe (in the methods section) the psychometric properties of the measures (e.g., sensitivity, reliability, and validity).
Overwhelming and untidy supplementals: Overwhelming the readers with extensive or untidy supplementary materials - possibly, to prevent close scrutiny.
Recommendation: Keep the manuscript focused on your research question(s). Separate your results section into confirmatory analyses (i.e., hypothesis testing) and exploratory analyses (i.e., hypothesis generating). Number each hypothesis (H1, H2, etc.) and use these labels throughout the text so that each claim can be followed through to the conclusions. Save relevant supplementary materials in an online repository and signpost them in your paper. Divide tasks in your team, and have one author take care of the supplementary materials. Ask an outside researcher to review your results and supplementary materials.
Misleading visualizations: Using visualizations that “hide” or gloss over information on purpose, not showing visualizations where one would have expected them, or moving important visualizations to ‘Supplementary Materials’. Examples: using bar plots instead of visualization methods that convey more information, such as box, violin-like, or raincloud plots; not showing individual data points in small samples; minimizing apparent error by displaying inappropriate error bars; misleading scaling of the y-axis, especially in the presentation of percentages (i.e., bars that do not start at zero, leading to visual overemphasis of differences); not showing scatter plots when performing correlation analyses in small samples, potentially omitting the fact that associations might be outlier-driven; setting a time range that suggests an important change that otherwise appears small or opposite in its broader time context.
Recommendation: Make sure your visualization offers a fair and accurate description of your study findings. If you feel you have to “play around” with visualization to hide major issues with the findings, do not make your work public. Improve it instead.
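As an illustration of the first two examples listed above, the following sketch (not part of the original document; it assumes Python with numpy and matplotlib, and the groups and data are invented for illustration only) overlays individual data points on box plots rather than using a bare bar chart, and anchors the y-axis at zero so that the group difference is not visually exaggerated.

```python
# Illustrative sketch: show the raw data instead of a bare bar chart
# (invented example data; adapt plot type and axis limits to your measure).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=2)
group_a = rng.normal(5.0, 1.0, size=20)   # hypothetical small samples
group_b = rng.normal(5.4, 1.0, size=20)

fig, ax = plt.subplots()
ax.boxplot([group_a, group_b], showmeans=True)
for position, data in enumerate([group_a, group_b], start=1):
    # jitter the x positions so overlapping points remain visible
    jitter = rng.uniform(-0.08, 0.08, size=data.size)
    ax.scatter(np.full(data.size, float(position)) + jitter, data, alpha=0.6)
ax.set_xticks([1, 2])
ax.set_xticklabels(["Group A", "Group B"])
ax.set_ylim(bottom=0)                     # avoid exaggerating the difference
ax.set_ylabel("Outcome (arbitrary units)")
plt.show()
```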
Use of augmenting words: Relying on terminology that suggests more than what the study delivers, or that prevents refuting a claim, such as implying causality (e.g., by using words like “impact”, “drive”, “influence”) without explicitly saying it (i.e., “cause”), allowing you to deny that you are claiming any causal link when pushed. Besides the case of implied causality, researchers may also be tempted to make statements that are literally true but imply more than what is literally said. For instance, “Our findings are consistent with Y” may be true, but concluding “therefore, Y” works only if one has also ruled out competing accounts for the findings. If this is presented as speculation, it is fine. But if it is not flagged as a hypothesis, speculation, or something similar, it implies that the statement is true, not merely “may be true.”
Recommendation: Make sure your writing is precise and does not oversell the study. Keep in mind that science implies making refutable statements, and write accordingly. Ideally, write down formalized predictions in your discussion, including whether your claims should be taken as causal or not.
Selective appeal for rigor: This occurs when critiques of a position, or competing positions, are held to a higher bar than the original one. For example, skepticism of replication studies has often emerged on the groundless basis that they were run more poorly than the original study, despite them typically having larger sample sizes, open data and materials, preregistration, etc. Similarly, papers reporting null results are often held to a higher standard (e.g., requiring a larger sample size) than those reporting positive (significant) results.
Recommendation: Be fair in your discussion of competing accounts. Request feedback from colleagues who hold different theoretical views, and engage in adversarial collaborations. Run your study as a Registered Report, so you can test alternative explanations and agree with authors who hold opposing views before collecting your data.
Decoy: Drawing attention to relatively minor and easily addressable limitations of the study, while neglecting major ones (e.g., the lack of a control condition that could have refuted the claimed effect). This may lead peer reviewers and readers to believe that the authors openly acknowledge the weaknesses of their work, while, in reality, severe limitations remain.
Recommendation: Try to get opinions from other people while designing, conducting, and writing up your study. Share the first draft with people who can provide you with an objective opinion on the caveats. Try to openly acknowledge these comments in your manuscript. Share your work as a preprint and discuss it with others.
Reliance on precedent: Suggesting that because procedures (e.g., measurement, design, or sample size) have been heavily relied on in previous work, they don’t need to be justified anymore.
Recommendation: Justify all methodological procedures. Highlight limitations (including any pragmatic constraints, for example, limits to sample size based on time or funding) and areas of uncertainty.
Reliance on citations: Pointing to high citation rates to imply the quality of a study or even of a whole research program (clearly, the two should not be conflated).
Recommendation: Describe the qualities of a study on its own merits (e.g., conceptual background, validity of the measures, sample size, et cetera).
Fluency effects: Referring to famous notions, theories, or researchers to make the readers feel safe as they navigate the article, and so make the article feel “true” despite these notions being problematic or these theories and researchers having been proven wrong.
Recommendation: Review the literature carefully for studies refuting your central claims (see also, “Ignorance”).
Open Science washing: Using superficial “open science” practices in order to boost the perceived robustness of the results.
Recommendation: Think about what you are trying to achieve by using an open science practice, and select practices based on the challenges of your particular research, rather than taking a tick-box approach. Focus on quality rather than quantity. For example, if you are sharing data, ensure the data are FAIR (findable, accessible, interoperable, and reusable). Share all relevant data (within ethical and legal constraints), and include a README file. Preferably rely on formal peer review by selecting the Registered Reports format to ensure that your research plan is complete, rather than preregistering a superficial hypothesis that allows for a large degree of flexibility.
Knowledge misappropriation: Not acknowledging contributions made by non-scholars, early-career researchers, software designers, indigenous communities, etc., to make it seem as if more of the work came from the listed authors. Keeping the number of contributing authors low may raise the profile of the listed authors.
Recommendation: Acknowledge all contributions made to the research project described in a manuscript. For now, the best way to credit contributions is through the CRediT taxonomy, see https://credit.niso.org/. Given that the CRediT taxonomy was originally developed in biochemistry, more applicable models per discipline can be developed to better recognize individual contributions (e.g., translation and cultural adaptation in psychology). Refer to the authorship policy of your university if necessary, and look for a mediator if a dispute cannot be resolved.
Gift authorship: Adding the names of accomplished professors to the authors' list to increase the chances of the manuscript being accepted, which can increase the probability of disseminating flawed scientific work (as compared to the same work without these names), or, conversely, when not including the “accomplished professor”, decreasing the probability of disseminating relevant and robust work.
Recommendation: Authorship must be granted based on genuine contribution. See “Knowledge misappropriation” above for how to credit individual contributors.
References (cited in the document via Zotero)
Aikin, S., & Casey, J. (2022). Straw man arguments: A study in fallacy theory. Bloomsbury Publishing.
Cheon, B. K., Melani, I., & Hong, Y. Y. (2020). How USA-centric is psychology? An archival study of implicit assumptions of generalizability of findings to human nature based on origins of study samples. Social Psychological and Personality Science, 11(7), 928-937.
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196-217.
Lewis Jr, N. A., & Wai, J. (2021). Communicating what we know and what isn’t so: Science communication in psychology. Perspectives on Psychological Science, 16(6), 1242-1254.
Simons, D. J., Shoda, Y., & Lindsay, D. S. (2017). Constraints on generality (COG): A proposed addition to all empirical papers. Perspectives on Psychological Science, 12(6), 1123-1128.
Smagorinsky, P., Daigle, E. A., O'Donnell-Allen, C., & Bynum, S. (2010). Bullshit in academic writing: A protocol analysis of a high school senior's process of interpreting Much Ado About Nothing. Research in the Teaching of English, 368-405.
Rousselet, G. A., Pernet, C. R., & Wilcox, R. R. (2017). Beyond differences in means: Robust graphical methods to compare two groups in neuroscience. European Journal of Neuroscience. https://onlinelibrary.wiley.com/doi/full/10.1111/ejn.13610
Contributors
Olivier Corneille, UCLouvain, Belgium, ORCID: 0000-0003-4005-4372, Twitter: @opatcorneille
Brice Beffara, Nantes Université, Univ Angers, France, ORCID: 0000-0002-0586-6650, Twitter: @brice_beffara
Harriet Carroll, Lund University, Sweden, University of Aberdeen, UK, NHS Grampian, UK, ORCID: 0000-0002-4998-4675, Twitter: @angryhacademic
Jo Havemann, Access 2 Perspectives, Germany, ORCID: 0000-0002-6157-1494, Twitter: @openscicomm
Emma L. Henderson, University of Surrey, UK, ORCID: 0000-0002-5396-2321, Twitter: @EmmaHendersonRR
Nicholas P. Holmes, University of Nottingham, UK, ORCID: 0000-0001-9268-4179, Twitter: @TheHandLab
Ian Hussey, Ruhr University Bochum, Germany, ORCID: 0000-0001-8906-7559, Twitter: @ianhussey
Hans IJzerman, Université Grenoble Alpes, Grenoble, France & Institut Universitaire de France, Paris, France, ORCID: 0000-0002-0990-2276, Twitter: @hansijzerman
Lee Jussim, Psychology, Rutgers University, USA, Twitter: @psychrabble
Leon D. Lotter, Research Center Jülich, Germany, ORCID: 0000-0002-2337-6073, Twitter: @LeonDLotter
Peter Lush, University of Sussex, UK, ORCID: 0000-0002-0402-1699, Twitter: @PeterLush4
Jean-Jacques Orban de Xivry, KU Leuven Belgium, ORCID: 0000-0002-4603-7939, Twitter: @jjodx
Nicholas Outa, Maseno University, Kenya, ORCID: 0000-0002-4085-0398, Twitter: @nichouta
Artur Pilacinski, Ruhr-University Bochum, ORCID: 0000-0002-3816-4372, Twitter: @fatresearchcat
Corresponding authors: Olivier Corneille, [email protected] & Jo Havemann, [email protected]
Acknowledgments: We thank everyone who commented on Twitter or sent suggestions via e-mail, among others Dr. Iain Johnston (ORCID iD: 0000-0001-8559-3519).
Original Twitter Thread: https://twitter.com/opatcorneille/status/1459432305865465858
Contributions according to Contributor Roles Taxonomy (CRediT)
Conceptualisation and writing original draft: OC
Writing - review & editing: JH, HC, NO, HC, LDL, ELH, NPH, PL