Capturing the Contribution of Rare and Common Species to Turnover: A Multi-Site Version of Generalised Dissimilarity Modelling

Post provided by Guillaume Latombe and Melodie A. McGeoch

Understanding how biodiversity is distributed, and how it relates to the environment, is crucial for conservation assessment. It also helps us to predict the impacts of environmental change and design appropriate management plans. Biodiversity across a network of local sites is typically described using three components:

  1. alpha (α) diversity, the average number of species in each specific site of the study area
  2. beta (β) diversity, the difference in species composition between sites
  3. gamma (γ) diversity, the total number of species in the study area (a minimal worked example of all three follows below).
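
A minimal numerical sketch of these three components, using hypothetical presence/absence data (beta is computed here with Whittaker's multiplicative relationship, gamma divided by mean alpha, which is only one of several common formulations):

```python
# Alpha, beta and gamma diversity from a hypothetical site-by-species matrix.
import numpy as np

# Rows = sites, columns = species (1 = present, 0 = absent); made-up data.
sites_by_species = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
])

alpha = sites_by_species.sum(axis=1).mean()             # mean species richness per site
gamma = int((sites_by_species.sum(axis=0) > 0).sum())   # total richness across all sites
beta = gamma / alpha                                    # Whittaker's multiplicative beta

print(f"alpha = {alpha:.2f}, gamma = {gamma}, beta = {beta:.2f}")
```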

Two tawny frogmouths, a species native to Australia. ©Marie Henriksen.

Despite the many insights provided by the combination of alpha, beta and gamma diversity, their ability to describe species turnover is limited by the fact that beta diversity only considers two sites at a time. For more than two sites, the average pairwise beta diversity is typically used (multi-site measures have also been developed, but they suffer from shortcomings, including difficulties of interpretation). This makes it difficult for researchers to determine the likely environmental drivers of species turnover.

We have developed a new method that combines two pre-existing advances, zeta diversity and generalised dissimilarity modelling (both explained below). Our method allows the differences in the contributions of rare versus common species to be modelled to better understand what drives biodiversity responses to environmental gradients. Continue reading
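
Zeta diversity, the building block of the new method, is the mean number of species shared by any given number of sites. As a rough orientation only (hypothetical data and a hand-rolled helper, not the authors' implementation or their R package), here is a sketch of how zeta of order k can be computed:

```python
# Rough sketch (not the authors' code): zeta diversity of order k is the
# mean number of species shared by every combination of k sites.
from itertools import combinations

import numpy as np


def zeta(sites_by_species, k):
    """Mean number of species shared across all k-site combinations."""
    n_sites = sites_by_species.shape[0]
    shared = [
        # A species is shared by a combination if it is present (1) in all k rows.
        sites_by_species[list(combo)].prod(axis=0).sum()
        for combo in combinations(range(n_sites), k)
    ]
    return float(np.mean(shared))


# Hypothetical presence/absence data: rows = sites, columns = species.
communities = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 1, 1, 1],
    [1, 1, 1, 0, 0],
])

for k in range(1, 5):
    print(f"zeta_{k} = {zeta(communities, k):.2f}")
# zeta_1 is mean richness; how quickly zeta declines with k reflects how much
# turnover is driven by rare species (shared by few sites) versus common,
# widespread species (still shared at high orders).
```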

piecewiseSEM: Exploring Nature’s Complexity through Statistics

Post provided by Jonathan S. Lefcheck

Nature is complicated. As a scientist, you might say, “Well, duh,” but for us students of nature, this complexity is probably the single greatest challenge we must face in trying to dissect the hows and whys of the natural world.

History is a Set of Lies Agreed Upon: Moving beyond ANOVA

For a long time, we tried to strip this complexity away by conducting very controlled experiments adhering to rigid designs. The ‘two-way fully-crossed analysis of variance’ will be familiar to anyone who has taken even the most basic stats class, because, for many decades, it was the gold standard for any experiment.

It might be tough to manipulate this whole reef.

The problem is: the real world doesn’t adhere to an ANOVA design. By this, I mean that manipulative experiments are, by their very nature, artificial. It’s hard, if not impossible, to manipulate an entire forest or a coral reef, so we retreat to smaller, more tractable investigations. There is certainly a lot of value in determining whether a phenomenon can occur, but these tightly regulated designs say nothing about whether it is likely to occur, particularly at the scales most relevant to humanity.

To get at the latter point, we must leave the safety of the greenhouse. However, our trusty ANOVA toolbox isn’t very useful anymore, because real-world data often violate the most basic statistical assumptions, not to mention the presence of numerous additional influences that may drive spurious relationships. Continue reading

Animal Density and Acoustic Detection: An Interview with Ben Stevenson

David Warton (University of New South Wales) interviews Ben Stevenson (University of St Andrews) about his 2015 Methods in Ecology and Evolution paper ‘A general framework for animal density estimation from acoustic detections across a fixed microphone array’. They also discuss what Ben is currently up to, including an interesting new method for dealing with uncertain identification in capture-recapture, published in Statistical Science as ‘Trace-Contrast Models for Capture–Recapture Without Capture Histories’.

Continue reading

moveHMM: An Interview with Théo Michelot

David Warton (University of New South Wales) interviews Théo Michelot (University of Sheffield) about his recent Methods in Ecology and Evolution article on the R package moveHMM. David and Théo also discuss the case study in the paper – on the understudied wild haggis – and what advances could be made to the package in future.

Continue reading

Estimating Shifts in Species Distribution: An Interview with James Thorson

David Warton (University of New South Wales) interviews James Thorson (NOAA) about his paper Model-based inference for estimating shifts in species distribution, area occupied and centre of gravity. The article is included in the August 2016 issue of Methods in Ecology and Evolution. They discuss how to estimate shifts in species distributions while accounting for changes in the spatial distribution of sampling intensity, James’ current workplace NOAA, his academic background and what trouble he is planning to get up to next.

Continue reading

Uncertainty in Biological Monitoring: An Interview with Viviana Ruiz-Gutierrez

David Warton (University of New South Wales) interviews Viviana Ruiz-Gutierrez (Cornell University) about her recent paper Uncertainty in biological monitoring: a framework for data collection and analysis to account for multiple sources of sampling bias. They discuss the main contributions of the paper, the effect false positives can have on occupancy estimates (when not accounted for) and her current position at Cornell. They finish off (in Spanish!) discussing the next step in her research agenda.

Continue reading

State-and-Transition Models: An Interview with Marie-Josee Fortin

David Warton (University of New South Wales) interviews Marie-Josee Fortin (University of Toronto) about a recent article on state-and-transition models from her group in Methods in Ecology and Evolution. David and Marie-Josee also discuss what motivated her career to date in spatial ecology, and what she sees as the main advances in this area and current challenges in the field.

Continue reading

Statistical Ecology Virtual Issue

At the last ISEC, in Montpellier in 2014, an informal survey suggested that Methods in Ecology and Evolution was the most cited journal in talks. This reflects the importance of statistical methods in ecology and it is one reason for the success of the journal. For this year’s International Statistical Ecology Conference in Seattle we have produced a virtual issue that presents some of our best recent papers which cross the divide between statistics and ecology. They range over most of the topics covered at ISEC, from statistical theory to abundance estimation and distance sampling.

We hope that Methods in Ecology and Evolution will be equally well represented in talks in Seattle, and that, just as in Montpellier, some of the work presented will find its way into the pages of the journal in the future.

Without further ado though, here is a brief overview of the articles in our Statistical Ecology Virtual Issue: Continue reading

Bringing Ecologists and Statisticians Together for the Conservation of Endangered Species

Post provided by Cecilia Pinto and Luigi Spezia

The Benefits of High Frequency Data

One of the tagged flapper skates showing the three different kinds of tags. ©Cecilia Pinto

High frequency data, like those obtained from individual electronic tags, carry the potential to give us detailed information on the behaviour of species at the individual level. Such data are particularly useful for marine species, as we can’t observe them directly for long periods of time.

Understanding how individuals use the water column, at both daily and seasonal scales, can help define conservation measures such as restricting fishing activity to reduce by-catch, or designating protected areas to help populations recover and to protect spawning and nursery areas. High frequency data have become popular as they give insight into detailed individual foraging behaviour and therefore the specific energetic needs that are linked to reproduction and fitness. Continue reading

My Entropy ‘Pearl’: Using Turing’s Insight to Find an Optimal Estimator for Shannon Entropy

Post provided by Anne Chao (National Tsing Hua University, Taiwan)

Shannon Entropy

Not quite as precious as my entropy pearl ©Amboo Who

Ludwig Boltzmann (1844-1906) introduced the modern formula for entropy in statistical mechanics in the 1870s. Since its generalization by Claude E. Shannon in his pioneering 1948 paper A Mathematical Theory of Communication, this entropy has become known as ‘Shannon entropy’.

Shannon entropy and its exponential have been extensively used to characterize uncertainty, diversity and information-related quantities in ecology, genetics, information theory, computer science and many other fields. Its mathematical expression is given below.
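
In standard notation, for a community of S species with relative abundances p_1, …, p_S, Shannon entropy is

H = -\sum_{i=1}^{S} p_i \ln p_i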

In the 1950s Shannon entropy was adopted by ecologists as a diversity measure. It’s interpreted as a measure of the uncertainty in the species identity of an individual randomly selected from a community. A higher degree of uncertainty means greater diversity in the community.

Unlike species richness, which gives equal weight to all species, or the Gini-Simpson index, which gives more weight to individuals of abundant species, Shannon entropy and its exponential (“the effective number of common species” or diversity of order one) are the only standard frequency-sensitive complexity measures that weight species in proportion to their population abundances. To put it simply: they treat all individuals equally. This is the most natural weighting for many applications. Continue reading
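
As a quick numerical illustration of that weighting (made-up abundance counts; these are the naive plug-in values computed directly from the sample proportions, not the improved estimator discussed in the full post):

```python
# Compare richness (order 0), Shannon entropy and its exponential (order 1),
# and the Gini-Simpson index on hypothetical abundance data.
import numpy as np

abundances = np.array([50, 25, 12, 8, 5])   # individuals per species (hypothetical)
p = abundances / abundances.sum()           # relative abundances p_i

richness = len(abundances)                  # every species counts equally
shannon = -np.sum(p * np.log(p))            # Shannon entropy H (plug-in estimate)
effective_common = np.exp(shannon)          # "effective number of common species"
gini_simpson = 1 - np.sum(p**2)             # weights abundant species more heavily

print(f"richness = {richness}")
print(f"Shannon entropy H = {shannon:.3f}, exp(H) = {effective_common:.3f}")
print(f"Gini-Simpson = {gini_simpson:.3f}")
```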