Ten Top Tips for Reviewing Statistics: A Guide for Ecologists

Post provided by Dr Mark Brewer.

Mark is a statistician with Biomathematics & Statistics Scotland, based in Aberdeen. His main statistical research interests are Species Distribution Modelling, Compositional Data Analysis, Bayesian Mixture Modelling and Bayesian Ordinal Regression. Mark was one of the presenters at the UK half of the Methods in Ecology and Evolution 5th Anniversary Symposium in April. You can watch his talk, ‘Model Selection and the Cult of AIC’ here.

The level of statistical analysis in ecology journals is far higher than in most other disciplines. Ecological journals lead the way in the development of statistical methodology, necessitated by challenging practical problems involving complex data sets. As a statistician who also publishes in hydrology, soil science, social science and forensic science journals, I’ve found that papers in those areas are much more likely to rely solely on well-established methods than papers in ecology.

Here’s the big question though: why then do I have the most difficulty with ecological journals when it comes to statistical analyses? Let’s be clear here: when I say “difficulty”, I mean I receive reviews which are just plain wrong. Most statisticians I’ve spoken to who work in ecology have anecdotes from reviews which demonstrate a lack of understanding by the non-statistician reviewer (including the all-too-frequent “perhaps you should consult a statistician”). So, why the apparent disconnect?

The difference seems to be in how non-statisticians in different disciplines treat the statistics in a paper. In many subject areas, reviewers are almost deferential to the statistical analysis; in ecology, reviewers can be forthright in their condemnation, often without justification. Reviewers have every right to question the statistical analysis in a paper, but the authors have the exact same right to expect a high quality review from a genuine expert in the field. Has ecology become blasé about statistics?

To address this, I would like to offer the following suggestions to ecologists reviewing the statistical content of papers in ecology:

1. Be Honest about What You Know and What You Don’t

I know relatively little about statistical community analysis; the experts I know are ecologists, not statisticians. So I’d be cautious about claiming too much expertise in the area, and I’d say so in the review. In one case, we had a reviewer (who we assume was an ecologist rather than a statistician) tell us we should have used a mixed model; given that there were no factors in the data set, we had no idea how this was even going to be possible! When I’m reviewing a paper in an ecological journal, at the beginning I state clearly that I am a statistician, and that I will be focussing on the stats – I’ve worked with ecologists for a number of years now, but I’m no ecologist. Making a clear early statement about your areas of expertise helps the editors with their decisions and the authors with their responses to your recommendations.
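As an aside for readers less familiar with mixed models: the random effects need a grouping factor to attach to. The sketch below, using a hypothetical `site` factor and simulated data rather than anything from a real study, shows the kind of structure that must exist in the data before a mixed model can even be specified.

```python
# A mixed model is only defined once there is a grouping factor for the
# random effects to attach to; with purely continuous covariates and no
# grouping structure there is nothing to build random effects from.
# (Hypothetical 'site' grouping and simulated data, for illustration only.)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "site": np.repeat(["A", "B", "C", "D"], 25),   # the grouping factor
    "x": rng.normal(size=100),
})
site_effect = {"A": 0.6, "B": -0.4, "C": 0.1, "D": -0.3}
df["y"] = (2.0 + 0.5 * df["x"] + df["site"].map(site_effect)
           + rng.normal(scale=0.3, size=100))

# Random intercept per site: impossible to specify without a factor like 'site'.
fit = smf.mixedlm("y ~ x", data=df, groups=df["site"]).fit()
print(fit.summary())
```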

2. Old Doesn’t Mean Bad

I’ve lost count of the number of times I’ve been told the method I’ve used is “out-of-date” or “old-fashioned”. It doesn’t matter that, say, logistic regression was formalised over 50 years ago – it is still a perfectly valid method. Age should never be used to criticise a method per se.

Old doesn’t mean bad: The face of modern, Bayesian statistical methodology.
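To underline the point, logistic regression remains a perfectly sound default for a binary response. A minimal sketch with statsmodels, using simulated presence/absence data invented purely for illustration:

```python
# Logistic regression: decades old, still a perfectly valid first choice
# for a binary response. (Simulated presence/absence data, for illustration.)
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
temperature = rng.normal(15, 3, size=n)          # hypothetical covariate
logit_p = -6.0 + 0.4 * temperature               # true linear predictor
presence = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(temperature)
fit = sm.Logit(presence, X).fit(disp=False)
print(fit.params)        # estimated intercept and slope (log-odds scale)
print(fit.conf_int())    # 95% confidence intervals
```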

3. New Doesn’t Mean Good

The counterpoint to “old doesn’t mean bad”. That fancy new method might make fewer assumptions, but it may also be harder to fit, so be wary of insisting that it be used. It might be more data-hungry, meaning it isn’t feasible for the data set in the paper; this can manifest itself in things like massively inflated standard errors caused by problems with numerical optimisation. These are things to look for, as well as the more obvious errors; for example, I’ve seen people use complex non-parametric smoothing on five data points…
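As a toy illustration of the data-hunger point, the sketch below (invented numbers, numpy) fits a needlessly flexible curve to five points; it reproduces them exactly and leaves nothing with which to judge uncertainty, which is precisely the warning sign to look for.

```python
# Five data points: a degree-4 polynomial passes through them exactly,
# leaving zero residual degrees of freedom and no basis for judging fit
# or uncertainty. A straight line at least leaves something to work with.
# (Invented observations, for illustration.)
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 3.2, 4.8, 5.1])

linear = np.polyfit(x, y, deg=1)
quartic = np.polyfit(x, y, deg=4)                # as many parameters as points

print("linear residual SS :", np.sum((y - np.polyval(linear, x)) ** 2))
print("quartic residual SS:", np.sum((y - np.polyval(quartic, x)) ** 2))  # ~0
# The 'perfect' quartic fit says nothing about behaviour on new data;
# complexity has simply absorbed all the information in the sample.
```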

4. Understand Assumptions

Any statistical method involves making assumptions. It is important to understand what those assumptions are before you judge whether the method is being used appropriately. For example, is it valid to assume a straight-line relationship between response and covariate in a regression? Or, failing that, would some other simple parametric relationship suffice? Only if the answer to both questions is “no” would I expect to see an advanced model using a smoother (for example, a generalised additive model) as part of the final results.

Understand assumptions: When ordering “pizza and chips”, not understanding assumptions can lead to surprising conclusions.

A good general rule is to try to use the simplest possible method whose assumptions are valid in any analysis; why make life difficult for the sake of it? Suggesting unnecessarily complicated methodology makes things harder for the authors, the editors, for you as the reviewer (when the revised paper comes back), and finally, (if published) the readers. Doing that isn’t impressive, it’s foolish.
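In practice, the “simplest adequate model” check is often as mundane as the sketch below: fit the straight line, fit a modest parametric alternative, and only contemplate a smoother if neither is defensible. Simulated data and statsmodels here, purely for illustration.

```python
# Start simple: does a straight line do the job? If not, does a simple
# parametric curve (here, a quadratic)? Only if both fail is a smoother
# such as a GAM really called for. (Simulated data, for illustration.)
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, size=150)
df = pd.DataFrame({"x": x,
                   "y": 1.0 + 0.8 * x - 0.05 * x**2
                        + rng.normal(scale=0.5, size=150)})

fit_line = smf.ols("y ~ x", data=df).fit()
fit_quad = smf.ols("y ~ x + I(x**2)", data=df).fit()

# F-test for the added quadratic term: is the extra complexity earning its keep?
print(sm.stats.anova_lm(fit_line, fit_quad))
# Then look at effect sizes and residual plots before even thinking of a smoother.
print(fit_quad.params, fit_quad.conf_int(), sep="\n")
```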

5. Match the Method to the Assumptions

The trick is to look at the method used in the paper, to understand what assumptions are made by the method and then to use your expert knowledge to judge whether those assumptions are reasonable. If the assumptions are not stated clearly, you’re well within your rights to ask that they are in a revised version. This is a process which should involve both statisticians and ecologists; I know what assumptions a method makes, but I need an ecologist’s input on how justifiable those assumptions are in practice for any given example.
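For the common regression case, part of that judgement can be mechanical; quick residual checks like those below (simulated data, statsmodels and scipy, for illustration) flag problems with normality or constant variance, while the question of whether the model structure makes ecological sense still needs the domain expert.

```python
# Two routine checks on the residuals of a linear model: roughly normal?
# roughly constant variance? Failing these is a cue to revisit the model,
# not an automatic reason to reach for something exotic.
# (Simulated data, invented for illustration.)
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from scipy import stats

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, size=100)
y = 2.0 + 3.0 * x + rng.normal(scale=0.2, size=100)

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

shapiro_p = stats.shapiro(fit.resid).pvalue            # normality of residuals
bp_stat, bp_p, _, _ = het_breuschpagan(fit.resid, X)   # constant variance
print(f"Shapiro-Wilk p = {shapiro_p:.3f}, Breusch-Pagan p = {bp_p:.3f}")
```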

6. There is Rarely a Single Correct Method

There might be doubt about what assumptions are reasonable in any analysis. Just because the assumptions you’re prepared to make differ from the authors’, it doesn’t always follow that either of you is wrong. Are the authors’ assumptions completely unjustifiable? You should only insist on a different statistical method if the answer is “yes”.

7. p-values are not the Work of the Devil

They really aren’t. That’s my claim, and I’m a Bayesian. A paper should not be rejected just because it uses p-values and, as confidence intervals are just p-values rearranged, recommending them isn’t going to change anything. Of course, if a paper contains p-values or AIC statistics aplenty but not a single effect size, then that’s a different story. But p-values are at least sensible if a paper is setting out to test or examine a small number of clearly defined hypotheses.
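The “rearranged” claim is easy to verify numerically: for a normal-theory estimate, the 95% confidence interval excludes zero exactly when the two-sided p-value falls below 0.05, because both are computed from the same estimate and standard error. A sketch with invented numbers:

```python
# For a normal-theory estimate, the p-value and the 95% CI are two views
# of the same estimate-and-standard-error pair: the CI excludes zero
# exactly when the two-sided p-value drops below 0.05.
# (Invented numbers, for illustration.)
from scipy import stats

estimate, se = 0.42, 0.18          # hypothetical effect size and standard error

z = estimate / se
p_value = 2 * stats.norm.sf(abs(z))                    # two-sided p-value
ci_low, ci_high = estimate - 1.96 * se, estimate + 1.96 * se

print(f"effect = {estimate}, 95% CI = ({ci_low:.2f}, {ci_high:.2f}), p = {p_value:.3f}")
# Whichever you report, the effect size itself is the part readers actually need.
```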

8. Beware of Uninformed Pronunciamentos*

Occasionally we see articles or editorials proclaiming a statistical approach is worthless. A recent editorial in a psychology journal banned most statistical inference (leaving the door ajar only to possibly allow some Bayesian analysis to try to sneak through). Elsewhere, in ecology journals some authors have gone to great lengths to claim that accounting for things like spatial autocorrelation or detection probability is unimportant. In reality, things are almost never that simple, so beware of believing in papers or books which denounce for all time some statistical method; the rebuttal is usually not far behind.
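On the autocorrelation point in particular, a small simulation shows why ignoring it is rarely harmless: with positively autocorrelated errors, the naive model-based standard error understates how much the slope estimate really varies. A rough sketch, with an AR(1) error process and parameter values invented purely for illustration:

```python
# With positively autocorrelated (AR(1)) errors, the ordinary OLS standard
# error understates how much the slope estimate really varies from sample
# to sample. (Simulation parameters invented for illustration.)
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n, rho, n_sims = 200, 0.8, 500
x = np.linspace(0, 1, n)
X = sm.add_constant(x)

slopes, reported_se = [], []
for _ in range(n_sims):
    e = np.zeros(n)
    innov = rng.normal(scale=0.5, size=n)
    for t in range(1, n):                 # AR(1) errors along the transect
        e[t] = rho * e[t - 1] + innov[t]
    y = 1.0 + 2.0 * x + e
    fit = sm.OLS(y, X).fit()
    slopes.append(fit.params[1])
    reported_se.append(fit.bse[1])

print("empirical SD of slope estimates:", np.std(slopes).round(3))
print("average OLS-reported SE        :", np.mean(reported_se).round(3))
# The reported SE is optimistically small; confidence intervals and p-values
# inherit that optimism unless the correlation is modelled.
```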

*Thanks to Stuart Hurlbert for this marvellous phrase.

9. Recommend a Statistician Look at the Paper

There’s no shame in this! Some journals have statistical “panels” who get called upon to look at papers only if a query has been raised by one of the reviewers, or if an editor feels there are issues with the paper which require input from a statistician (with medical journals it’s not uncommon for statistical reviewers to be paid). The problem in ecology may be that there are so many papers with substantial statistical content that, for many journals, a statistical panel could in theory see almost all papers submitted and the role would be unmanageable. In practice, the panel might perhaps only be drawn upon in the most contentious cases or where the reviewers and authors are having an especially “lively” disagreement. Bringing in a statistician early on can make the process much smoother for everyone involved.

10. Read up on the Statistical Methods

Are you entirely confident you fully understand the stats in the paper? It cannot hurt to check your own understanding during the review process. There may be new research into the methodology you weren’t aware of – so use the opportunity afforded you by reviewing to refresh your own knowledge, or even to learn something entirely new. Then you can make a proper, valid assessment of the statistical content of the paper.

Hopefully these ten tips will help bring the level of statistical reviewing in ecology up to the high standards of statistical analysis which can be found in ecological journals.

Thanks to Alison Johnston for helpful comments on an early draft of this post and, in particular, for suggesting tip #10.

Watch Mark’s presentation on ‘Model Selection and the Cult of AIC’ at the Methods in Ecology and Evolution 5th Anniversary Symposium here.

21 thoughts on “Ten Top Tips for Reviewing Statistics: A Guide for Ecologists”

  1. Excellent article. Can I add also “remember that the purpose of statistical analysis is to help us understand the patterns in the data – statistics is a means to an end not an end in itself”.

  2. “confidence intervals are just p-values rearranged, recommending them isn’t going to change anything”

    it certainly changes the ability to easily communicate (including to oneself) the error interval of an estimate, which to me is usually more interesting than P(D|H) if H is effect = zero.

    But I do agree that CIs are frequently used just like a p-value – that is, the focus is if the interval covers zero – in which case there is little gain.

  3. Great blog, however, I agree with Jeff’s point. Confidence intervals allow more and better interpretation than do p values. If CIs are converted to p then sure, by definition they are no better than p, but in fact CIs are an excellent addition to effect sizes because they allow us to estimate how precise our effect sizes are. Thus effect sizes and CIs allow us to investigate the question: ‘how big is the effect according to my sample-based estimations and how precise is my estimate?’, which is typically far more useful than the NHST question: ‘is there a significant effect?’.
    http://www.nature.com/nmeth/journal/v12/n3/full/nmeth.3288.html

  4. Thank you for this very nice blog post. I have almost given up sending articles to some ecology journals because of the opposite problem: the reviewers want data analyses that they understand even though the assumptions are violated. I’ve spent a lot of time analyzing data with methods that are newer (10-20 yrs old) and more appropriate for the data at hand, but we get nonsense reviews asking us to go back and redesign observational data collection, and to redo analyses that do not apply. I’m down to my last 2-3 papers with my collaborators, and then I’m washing my hands of this. There are other application areas where editors of journals make an effort to find appropriate reviewers who understand both the science and the statistical methodology, and where my time and effort will be better rewarded.

  5. Dr Brewer, thanks for this insightful article. It really explained a lot in a clear manner. All of the points are well made. On a personal note, creating an analysis out of statistical data is really difficult. This will benefit my critical thinking and writing whenever I’m dealing with data…


  6. If all reviewers were to follow this guide, even the so-called big journals in ecology would not simply reject our papers. I’m not attacking the big journals, but the majority of the papers they publish are not understood by more than 95% of researchers and they don’t transform society. Keep it simple (KISS), please, if papers are to be helpful to the majority of ecologists, especially in Africa.

