Looking for the spin lurking in medical research articles

Dr Thorlene Egerton

Senior Lecturer, Physiotherapy Department, University of Melbourne


Four key reasons for spin in research articles—and how to spot the signs

Medical research articles serve as critical platforms for disseminating new knowledge and advancing healthcare practices. They play a pivotal role in shaping clinical decision-making, influencing healthcare policies, and guiding medical professionals worldwide.

However, research articles, even those published in high-quality medical journals, are not immune to reporting biases, or spin.

Bias and errors in the findings can occur because of the study design. These are often called threats to internal validity and can be assessed by reviewers and readers using methodological quality or risk of bias rating scales. In this article we don’t explore these methodological or design sources of bias; rather, we focus on the human errors that can creep in during the article write-up. In other words, the types of spin in research articles, and the reasons for it.

Research data can’t speak for itself. Researchers interpret and report study findings and conclusions, but they are only human.

How they write up their data and draw conclusions is somewhat subjective and influenced by personal and environmental factors. Most spin is unintentional, or at least without any malicious intent. Only very occasionally do researchers deliberately and knowingly do the wrong thing. The journal publication process, including peer-review, is an important safeguard and should help reduce spin, but with increasing competition between journals for quick turnarounds, plus increasingly unrealistic demands on good peer-reviewers, corners may be cut, and readers should beware.

In this article, we’ll outline four key reasons for spin, and what the resulting spin might look like.

Reason #1: The desire to get published

‘Publication bias’ is common, yet often unnoticed. It occurs because research studies with positive or significant results are more likely to be published than those with negative or inconclusive findings. Articles with positive findings are also more likely to be cited in other articles, and more likely to receive media attention. Journal editors, peer-reviewers, media, and clinicians will perceive an article as being more ‘interesting’ if it has positive findings.

So even if the research question was important and the study well conducted, negative findings can make an article difficult to publish and mean it receives less attention.

Researchers’ careers are on the line: success with grant funding, career progression and even just keeping a job are, to a large extent, based on publication track record. This creates a strong motivation for researchers to get their studies published quickly and in highly ranked journals, and to attract as much attention as possible in the process.

In a well-designed study we conducted on a multimodal intervention to treat the pain of hip osteoarthritis, the treatment turned out to be no better than the placebo. We had a lot of difficulty finding a targeted journal (e.g., arthritis or physiotherapy) to publish our findings. Editors kept saying it was a well-conducted and well-written article, but unfortunately, they had decided to reject it. (In this case we did eventually get it published in a good general medical journal, but such an outcome is the exception.)

What to look for

One way to improve the chances of getting your article published, therefore, is to make the findings sound a little more ‘interesting’ than they actually are. The author might choose words and language that create a feeling of excitement.

Beware of language such as “This is the first study to show …” or “These findings are critical to …”.

Authors may also strategically leave out some of the nuance or caveats to their conclusions. By not explaining major limitations as part of their conclusion, they misrepresent the importance of the findings.

Another way to sound more ‘interesting’ is to put more emphasis on secondary outcomes when the primary outcome suggests the intervention was not worthwhile.

The primary outcome should be selected on the basis that it is the most clinically important outcome, and it should be declared up front in a published protocol.

Nevertheless, authors can gloss over their main negative finding and highlight a secondary finding that had a positive outcome.

Worse still is when researchers try to find a positive result by re-analysing subgroups of the participants.

These analyses should be considered exploratory unless the study was purposefully designed to test differences between subgroups. The findings from secondary outcomes or subgroup analyses should not be the bottom-line conclusion emphasised in the title, abstract or final paragraph.
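
To see why, a back-of-the-envelope calculation helps. The sketch below is purely illustrative: it assumes a treatment with no true effect, independent subgroup tests and the conventional 0.05 significance threshold, and none of the numbers come from any particular study.

alpha = 0.05  # conventional significance threshold

# If the treatment truly has no effect, each independent subgroup test still
# has a 5% chance of producing a "significant" result by chance alone.
for n_subgroups in (1, 5, 10, 20):
    p_spurious = 1 - (1 - alpha) ** n_subgroups
    print(f"{n_subgroups:2d} subgroup tests -> {p_spurious:.0%} chance of at least one false positive")

Under these assumptions, 10 unplanned subgroup tests give roughly a 40 per cent chance of at least one false-positive ‘benefit’, which is why such findings should be treated as hypothesis-generating rather than as the study’s bottom line.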

Reason #2: Financial conflicts of interest

Financial conflicts of interest can all too easily influence the reporting of research. Not only is research itself costly to conduct, but companies or commercial entities with staff to pay and shareholders to please may be susceptible to reporting findings in such a way that favours their products or services.

Researchers decide what to write or not write about. When vested interests are involved, their conclusions can become deliberately selective.

In practice, more research is carried out on interventions that are linked to profit-making businesses (e.g., pharmaceuticals or medical devices) than on interventions that are low cost or publicly funded (e.g., exercise or education interventions), and this is a problem.

In a systematic review we are conducting on treatments for endometriosis, there are almost no studies on low-cost or lifestyle options, which means our conclusions and recommendations will largely ignore these treatments as options, since there are no findings to report.

Reason #3: Not being open to the unexpected

We are all influenced by confirmation biases when interpreting research findings and consuming research articles. Confirmation bias refers to the tendency to favour information that confirms pre-existing expectations, and disregard or discount alternatives.

Researchers, consciously or unconsciously, may write the findings in a way that tries to confirm their original hypothesis or beliefs about what the study ‘should’ have found. If the findings are not what the researcher wanted, they may go to great lengths to highlight the potential sources of error in the data or the problems they had delivering the interventions during the trial.

Yet, researchers are more likely to gloss over such problems when the findings confirm their expectations.

An example of this is when a statistically significant but clinically unimportant difference is found: the authors might be tempted to claim the intervention is effective when, in fact, from a clinical perspective, there was no important benefit.

What to look for

Beware the report that says there was a trend towards a benefit. This still means the result was not statistically significant, and the effect size was probably very small.

Also be on the lookout for statements such as ‘the outcome was improved in the treatment group but did not reach statistical significance’.

If it did not reach statistical significance, then they can’t claim there was a benefit.
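
One practical habit when reading results is to check two things separately: whether the confidence interval for the between-group difference excludes zero (statistical significance), and whether the effect is at least as large as the minimal clinically important difference (MCID) for that outcome. The sketch below is a rough illustration only; the helper function, the 15-point MCID and the trial numbers are invented for this example, not taken from any study.

def appraise(mean_diff, ci_low, ci_high, mcid):
    """Separate statistical significance from clinical importance (illustrative only)."""
    statistically_significant = ci_low > 0 or ci_high < 0  # 95% CI excludes zero
    clinically_important = abs(mean_diff) >= mcid          # point estimate reaches the MCID
    return statistically_significant, clinically_important

# Hypothetical trial A: a 2-point improvement on a 100-point pain scale is
# statistically significant but falls well short of an assumed 15-point MCID.
print(appraise(mean_diff=2, ci_low=0.5, ci_high=3.5, mcid=15))    # (True, False)

# Hypothetical trial B: a "trend towards benefit" whose confidence interval
# crosses zero, so no claim of benefit can be made.
print(appraise(mean_diff=8, ci_low=-1.0, ci_high=17.0, mcid=15))  # (False, False)

Neither pattern supports a headline claim of effectiveness: the first is significant but too small to matter clinically, and the second is simply not significant.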

Of course, when we found our brilliant hip osteoarthritis treatment was no better than the placebo, we spent hours discussing the possible reasons why. But in the end, the treatment just didn’t work any better than time and the contextual factors of the placebo treatment (1).

Researchers aren’t the only ones who fall into this trap

Readers themselves may let their preconceived ideas and expectations determine which articles they choose to read. Findings that go against the grain can initially be disregarded by the medical community, and it can take a long time before practice changes. Turning the ship around can be a slow and challenging process.

A recent study on platelet-rich plasma for knee osteoarthritis was subjected to considerable criticism when it did not show the expected benefits, even though it was one of the best-designed studies conducted on this topic.

People will easily accept findings they expect, but they will scrutinise and criticise findings that require a shift in their thinking.

We are all subject to this resistance to changing our beliefs. I know I find myself questioning study findings I don’t like, and have to consciously open my mind to the possibility that my expectations were wrong.

Reason #4: Context, context, context

Humans are an incredibly diverse bunch and research that assumes we are all the same is bound to get things at least a bit wrong. Researchers who assume that their sample represents all of us, and that the intervention will work the same in every setting, are holding back the progress of medical knowledge and potentially perpetuating healthcare disparities.

What to look for

It is good practice to check the recruitment methods and inclusion criteria to gain an understanding of the types of people the findings relate to – and, by extension, the types of people the study missed.

This does not mean you should ‘throw the baby out with the bathwater’, or conclude that your patients are so different that the findings don’t apply. But it does mean you should read articles with one eye on who received the treatment, and who didn’t. Acknowledgement of limitations arising from the sample characteristics is often missing from research reports. Treatments may work differently for different people, and the research findings should be just a foundation for your journey to optimising the health of the individuals under your care.

The bottom line

Even in well-designed and well-conducted studies, the data can’t speak for itself, and fallible humans are left to write up and report the findings. While journal editors and peer-reviewers do a great job of facilitating integrity in research, it is still up to you, the reader, to read between the lines. By playing an active, rather than passive, role as readers, we can hope for a more objective and comprehensive understanding of new medical knowledge. And in doing so, we can both foster evidence-based healthcare and support the advancement of medicine.
