Post Provided by Graziella DiRenzo
Imagine you’re at the doctor’s office. You’re waiting to hear back on a critical test result. With recent emerging infectious diseases in human populations, you are worried you may be infected after a sampling trip to a remote field site. The doctor walks in. You sit nervously, sensing a slight tremble in your left leg. The doctor confidently declares, “Well, your test results came back negative.” In that moment, you let out a sigh of relief, the kind you feel throughout your body. Then, thoughts start flooding your mind. You wonder: what are the rates of false negatives associated with the test? How sensitive is the diagnostic test to low levels of infection? The doctor didn’t sample all of your blood, so how can they be sure you’re not infected? Is the doctor’s conclusion right?
Now, let’s say I’m the doctor and my patient is an amphibian. I don’t have an office where the amphibian can come in and listen to me explain the diagnosis or the progression of disease − BUT I do regularly test amphibians in the wild for a fatal fungal pathogen, Batrachochytrium dendrobatidis (commonly known as Bd). Diseases like Bd are among the leading causes of decline for the approximately one-third of amphibian species that are threatened, near threatened, or vulnerable to extinction. To test for Bd, and the recently emerged sister taxon Batrachochytrium salamandrivorans (hereafter Bsal), disease ecologists rely on non-invasive skin swabs.
Pros and Cons of Skin Swabs
Skin swabs allow us to non-invasively test for the pathogen by swabbing a standardised number of strokes across each limb and the abdomen. This is followed by quantitative PCR (qPCR) to quantify how many Bd spores each swab picked up – more spores generally means the animal is sicker. Skin swabs are not the ideal way to sample for Bd, as swabs sometimes fail to pick up spores that are present on an amphibian.
The “gold standard” would be a histological examination of the body, where pathogen detectability is 100% and chytridiomycosis, the disease caused by Bd, can be confirmed. Unfortunately, this involves sacrificing the animal − and we’re here to conserve the frogs, not kill them. So, at this point in time, non-invasive skin swabs and qPCR are the best sampling and diagnostic methods, respectively, to detect Bd.
Non-invasive skin swabs provide a relatively quick method for disease sampling in the field − but this comes at a price. When infection intensities are low, the precision and accuracy of detecting Bd using non-invasive skin swabs and qPCR drop significantly. But these are the best methods we have for now. We can check for clinical signs, such as reddened skin, lethargy, and thickening of the skin, but this is difficult when your patient is squirming in your hands and just wants to jump away from you.
In ‘Imperfect pathogen detection from non-invasive skin swabs biases disease inference’, we describe methods that allow disease ecologists to account for the bias that results from imperfect sampling and diagnostic pathogen detectability (defined as the probability of detecting the pathogen, given that the pathogen is present). Imperfect sampling detectability is addressed by collecting multiple samples from a single host, and imperfect diagnostic detectability by running extra diagnostic tests on a single sample.
How do we Correct for Imperfect Pathogen Detection?
In this paper, we had two main objectives:
1. Quantify imperfect Bd detection of non-invasive skin swabs in an amphibian community in El Copé, Panama. We swabbed amphibians twice in sequence, and used a recently developed hierarchical Bayesian estimator formulated by Miller et al. (2012), originally used to examine imperfect Bd detection by the diagnostic method qPCR for amphibians. We expected that as host infection intensity increased, the probability of detecting Bd on a skin swab would increase – similar to other studies that have examined the correlation between diagnostic methods and host infection intensity (e.g., qPCR to detect the causative agent of malaria, Plasmodium sp., in birds; γ interferon and ELISA tests to detect the causative agent of tuberculosis, Mycobacterium bovis, in cattle; and qPCR to detect the causative agent of Lyme disease, Borrelia species complex, in Ixodes uriae ticks).
We found that Bd detection probability from skin swabs was related to host infection intensity: Bd infections of <10 zoospores had a <95% probability of being detected. If imperfect Bd detection was not considered, Bd prevalence was underestimated by as much as 71%. In the Bd-amphibian system, this shows that we need to correct for imperfect pathogen detection in enzootic host populations persisting with low-level infections.
2. Formulate a novel Bayesian hierarchical model that simultaneously accounts for imperfect sampling and diagnostic detection of the pathogen. We performed a simulation study to explore the ability of this hierarchical model to estimate pathogen prevalence and infection intensity under a variety of scenarios, including instances when multiple samples and diagnostic runs, or only one of each, were collected per host.
The estimated probability of pathogen infection was less biased and more precise when average infection intensity was high and when both contributors to pathogen detection probability (sampling methods and laboratory diagnostic testing) were high. In general, estimates of pathogen prevalence were less biased when the number of samples was increased, rather than the number of diagnostic tests.
Similarly, estimated average infection intensity was less biased and more precise when the probability of pathogen infection was high and at high values of pathogen detection probability. Overall, the estimated infection intensity was less biased if either the number of samples collected or the number of diagnostic runs increased.
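The core of the first result – that intensity-dependent detection biases naive prevalence downward – can be reproduced with a toy simulation. All parameter values below (the logistic detection curve, the 40% true prevalence, the zoospore-load distribution) are made up for illustration; they are not the fitted values from the paper.

```python
import math
import random

random.seed(42)

# Hypothetical logistic curve: detection probability rises with
# log10 zoospore load (illustrative parameters only).
def detection_prob(zoospores, intercept=-1.0, slope=1.5):
    """P(swab tests positive | host infected at this intensity)."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * math.log10(zoospores))))

n_hosts = 10_000
true_prevalence = 0.40          # fraction of hosts truly infected

positives = 0
for _ in range(n_hosts):
    if random.random() < true_prevalence:
        # Low-intensity enzootic infections: log10 load ~ Normal(1, 0.5)
        load = 10 ** random.gauss(1.0, 0.5)
        if random.random() < detection_prob(load):
            positives += 1
    # No false positives assumed: uninfected hosts never test positive.

naive_prevalence = positives / n_hosts
print(f"true prevalence:  {true_prevalence:.2f}")
print(f"naive prevalence: {naive_prevalence:.2f}")  # biased low
```

Because every missed infection is counted as uninfected, the naive estimate sits well below the true 40%, mirroring the underestimation reported above.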
Why Should We Correct for Imperfect Pathogen Detection?
Uncertainty in pathogen detection is an inherent property of most sampling and diagnostic tests that can bias results and inference. Usually we test for the presence of a pathogen on an animal once, and assume that the test is error-free (in terms of both presence/absence and quantity of infectious agents). Disease prevalence, or the proportion of animals infected, and mean infection intensity are then calculated using these values, which in turn inform the decisions made by conservation managers.
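If the per-host detection probability (test sensitivity) is known and false positives are negligible, even a simple textbook adjustment shows how far the naive calculation can drift. The sketch below uses the Rogan–Gladen estimator with hypothetical numbers; it is not the hierarchical model from the paper, which estimates detectability rather than assuming it is known.

```python
# Rogan-Gladen correction of apparent prevalence for test error.
# Numbers below are hypothetical, for illustration only.

def corrected_prevalence(apparent, sensitivity, specificity=1.0):
    """Adjust apparent prevalence for imperfect sensitivity/specificity."""
    return (apparent + specificity - 1) / (sensitivity + specificity - 1)

apparent = 0.25      # fraction of hosts that tested positive
sensitivity = 0.62   # assumed per-host detection probability

print(f"corrected prevalence: {corrected_prevalence(apparent, sensitivity):.2f}")
# With perfect specificity this reduces to apparent / sensitivity.
```

The design choice worth noting: the correction is only as good as the sensitivity estimate it is fed, which is exactly why methods that estimate detectability from replicate data are needed.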
Imperfect pathogen detection is widely acknowledged in both the medical and veterinary fields, and disease ecologists are slowly starting to adapt their methods. For example, disease ecologists may run multiple diagnostic tests, use multiple criteria for diagnosis, or modify their sampling design to use models that adjust for imperfect pathogen detection. Collecting extra information, in the form of replicate samples or diagnostic runs, could improve disease inference, provide better parameter estimates, and help with comparisons across study systems.
When Should We Correct for Imperfect Pathogen Detection?
This is a tough one. To answer this question, it’s useful to have prior information on false negative error rates for your study system, the type of infection (e.g., systemic, aggregated), and other biologically relevant information on the study system (e.g., Echinostoma sp. trematodes tend to parasitize the right kidney over the left).
We suggest collecting replicate samples in the field when possible, though we recognise the added cost and effort of analysing replicate samples on diagnostic equipment. But if the results from the first set of samples show few pathogen detections, low pathogen prevalence, or low host infection intensity (typical for endemic populations and the invasion phase of an epidemic), it may be worth analysing the second set of samples to estimate imperfect pathogen detection probability. So, having some information on the epidemiological history of the study area is also meaningful.
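Double-swab data of the kind described above also admit a quick back-of-the-envelope estimate of per-swab detection probability: among infected hosts detected at least once, the chance of being positive on both swabs is p²/(1 − (1 − p)²) = p/(2 − p), which can be inverted. The counts below are invented for illustration and this closed form is a simplification, not the Bayesian estimator used in the paper:

```python
# Estimate per-swab detection probability p from double-swab data,
# assuming equal, independent detection on each swab and no false
# positives. Counts are hypothetical.

def per_swab_detection(n_both, n_one_only):
    """Invert f = p / (2 - p), where f is the fraction of detected
    hosts that were positive on both swabs: p = 2f / (1 + f)."""
    f = n_both / (n_both + n_one_only)
    return 2 * f / (1 + f)

# Hypothetical survey: 30 hosts positive on both swabs, 30 on one only.
p_hat = per_swab_detection(30, 30)
print(f"estimated per-swab detection probability: {p_hat:.3f}")  # 0.667
```

The more often detected hosts test positive twice, the higher the implied per-swab detectability; a full hierarchical model additionally propagates the uncertainty in this estimate into prevalence and intensity.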
What does this Mean for the Disease Ecology Literature Already Published?
Most studies, regardless of effort, contain some type of bias (e.g. observer, methodological). Observer and sampling bias are among the most important things to bear in mind while surveying the literature, researching, or writing a review paper on the patterns of epidemiology. For example, the majority of published disease ecology papers do not account for:
- Host heterogeneity in detectability (i.e. are infected hosts more or less detectable than uninfected hosts?)
- Pathogen heterogeneity in detectability (i.e. what are the impacts of pathogen infection intensity on pathogen detectability?)
Both of these detectability issues lead to biased conclusions. Our main recommendation would be to explore whether the authors accounted for detectability via sampling design or statistical analyses.
Although meta-analyses have tremendous utility in shaping our understanding of pathogen spread and disease, they may provide a skewed perspective by underestimating pathogen prevalence and infection intensity, or by suggesting poor decision-making strategies to conservation managers. Ultimately, it’s up to you, the reader, to evaluate the scientific merit, the biases present, and the conclusions drawn by each published article.
To find out more, read our Methods in Ecology and Evolution article ‘Imperfect pathogen detection from non-invasive skin swabs biases disease inference’.