
Epi News

Welcome to the Epi News page. The objective of this page is to keep UF DVM students and participants in our Epi & Laboratory professional development program informed of updates relevant to their academic and professional work.

VEM 5503 Veterinary Epidemiology: Q&A

Jorge Hernandez - Friday, January 06, 2012

VEM 5503 Vet Epi students can email course-related questions to Fiona Maunsell (maunsellf@ufl.edu), Jorge Hernandez (hernandezja@ufl.edu), or Owen Rae (raeo@ufl.edu) for feedback. Q&A will be posted on this blog; the names of students will be kept confidential.

Lecture: Influenza surveillance in the United States_February 17th

One question in class concerned the expected sensitivity (high or low) of a syndromic surveillance system, such as that used for early identification of human cases infected with influenza virus (eg, acute onset of fever and cough with one or more of the following: sore throat, myalgia, arthralgia, prostration; Clinical Infectious Diseases 2011;52:10-18). One scenario is high sensitivity, if we assume that a high proportion of patients known to be infected with influenza virus develop influenza-like illness (ILI) symptoms.

However, to answer this question more appropriately, it is important to know what gold standard (virus isolation, PCR, serology) is used to estimate sensitivity, and what sampling procedures were used. For example, in your homework review question No. 8, serology was used as the gold standard to estimate the sensitivity of the ILI case definition (CID 2011;52:10-18). In that study, among 25 humans for whom complete serologic data were available, 15 had positive serological test results and 5 of those 15 had ILI symptoms. If serology is used as the gold standard, the sensitivity of the ILI case definition = 5/15 = 33%. The low sensitivity observed can be explained in part by (a) recall bias when humans responded to a survey (eg, remembering symptoms from the previous month) or (b) misclassification of humans as positive for pH1N1 because of previous swine flu infections or seasonal flu vaccination.

Furthermore, only 2 of 15 humans had throat swab specimens that were (RT-PCR) positive for pH1N1 virus. Sampling procedures can affect the ability of a diagnostic test to detect infection (sensitivity). The authors of this study recognized that it is possible humans with ILI were no longer shedding virus 2 weeks after their illness, when the swab specimens were taken.
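
To make the arithmetic explicit, here is a minimal sketch of the sensitivity calculation against the serology gold standard, using the counts summarized above (the variable names are mine, for illustration):

```python
# Sensitivity of the ILI case definition against a serology gold standard
# (counts as summarized above from CID 2011;52:10-18):
# 15 humans were seropositive (truly infected); 5 of them met the ILI definition.
true_positives = 5      # seropositive AND met the ILI case definition
false_negatives = 10    # seropositive but did NOT meet the ILI case definition

sensitivity = true_positives / (true_positives + false_negatives)
print(f"Sensitivity = {sensitivity:.0%}")  # 33%
```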

Review questions_February 9th

Cross-sectional studies are not an appropriate study design for diseases of short duration (new)

This situation often applies to transboundary diseases (eg, foot-and-mouth disease in cattle), where affected herds recover or are sent to slaughter (eg, the FMD epidemic in the UK in 2001). If we were to conduct a cross-sectional study before or after the epidemic, we would not see infected herds, prevalence would be zero, and we would not be able to examine the relationship between exposure factors (eg, introduction of animals from other farms or markets: yes, no) and disease.

In this scenario, an alternative approach is to conduct a (retrospective) case-control study, where infected and non-infected herds are selected. Then, herd owners/managers are interviewed to collect data of epidemiologic interest (exposure factors). The purpose is to compare the frequency of exposure factors between infected and non-infected herds. The hypothesis is that the frequency of selected exposure factors (eg, introduction of animals from other farms or markets) is higher among case herds, compared to control herds.

 Relationship between predictive value, prevalence, sensitivity and specificity (new)

Sensitivity is the probability of a test correctly identifying those animals that truly have disease. A highly sensitive test will rarely misclassify animals that HAVE disease, which means that a highly sensitive test has a low rate of false negatives. If you look at a 2x2 table, you'll see that the sensitivity formula only uses the “Diseased” animal column (a/(a+c)), which is comprised of true positives and false negatives (the only two options for a truly positive animal).

Specificity is the probability of a test correctly identifying those animals that truly do NOT have disease. A highly specific test will rarely misclassify animals that do NOT have disease, which means that a highly specific test has a low rate of false positives. If you look at your 2x2 table, you'll see that the specificity formula only uses the “Not diseased” animal column (d/(b+d)), which is comprised of false positives and true negatives (the only two options for a truly negative animal).

Sensitivity and specificity are INNATE values of a test - they do not change with prevalence.

Positive and negative predictive values of a test, however, ARE affected by prevalence.

PPV is the proportion of test-positive animals that have the disease. A positive test could be either a true positive or a false positive, so the higher the rate of false positives, the worse your positive predictive value. What controls the rate of false positives (the “b” cell in your 2x2)? It is always easier to remember this if you draw a quick 2x2 table, but the answer is specificity. So for a given prevalence of disease, a more specific test will have a higher PPV.

NPV is the proportion of test-negative animals that do not have the disease. A negative test could be either a true negative or a false negative, so the higher your rate of false negatives, the worse your NPV. Sensitivity controls the “c” cell in your 2x2 table, so it is sensitivity that controls your false negative rate. So for a given prevalence of disease, a more sensitive test will have a higher NPV.

Prevalence affects the values at the bottom of the columns, ie, how many animals are truly diseased and how many are truly not diseased out of the total population. If the disease has a high prevalence, then a+c will be relatively large and the NUMBER of animals in the false negative cell will be relatively high; therefore, even if the test sensitivity is pretty good, the negative predictive value will not be so good. So for a high-prevalence disease we usually want a very sensitive test to minimize this effect.

If we have a disease with a very low prevalence, so that “b+d” in your 2x2 table is very large, the relative number of false positives (the “b” cell in your 2x2) will be high even if the test specificity is pretty good, meaning that the positive predictive value will not be so good. So for a low-prevalence disease, we usually want a highly specific test to minimize this effect.
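
To tie these four measures together, here is a minimal sketch that computes Se, Sp, PPV, and NPV from a 2x2 table and then recomputes the predictive values at a lower prevalence (all counts are hypothetical; cell labels a-d follow your 2x2 table):

```python
# Se, Sp, PPV, NPV from a 2x2 table (hypothetical counts for illustration).
#                 Diseased   Not diseased
# Test positive      a            b
# Test negative      c            d

def test_measures(a, b, c, d):
    se  = a / (a + c)   # sensitivity: true positives among truly diseased
    sp  = d / (b + d)   # specificity: true negatives among truly non-diseased
    ppv = a / (a + b)   # proportion of test positives that are diseased
    npv = d / (c + d)   # proportion of test negatives that are not diseased
    return se, sp, ppv, npv

def predictive_values(se, sp, prevalence):
    # PPV and NPV recomputed from Se, Sp, and prevalence (Bayes' theorem):
    # Se and Sp are innate to the test, but PPV and NPV move with prevalence.
    ppv = (se * prevalence) / (se * prevalence + (1 - sp) * (1 - prevalence))
    npv = (sp * (1 - prevalence)) / (sp * (1 - prevalence) + (1 - se) * prevalence)
    return ppv, npv

se, sp, ppv, npv = test_measures(a=90, b=50, c=10, d=950)  # prevalence = 100/1100, ~9%
print(f"Se={se:.2f} Sp={sp:.2f} PPV={ppv:.2f} NPV={npv:.2f}")

# Same test (same Se and Sp) applied where prevalence is only 1%:
ppv_low, npv_low = predictive_values(se, sp, prevalence=0.01)
print(f"At 1% prevalence: PPV={ppv_low:.2f} NPV={npv_low:.2f}")  # PPV drops sharply
```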

Note: These concepts of Se, Sp, positive and negative predictive values can be further explored using the Excel file posted on the course website (English version) (third TAB: Se-Sp).

Review questions_February 8th 

On the practice question on top of page 37 of the review, I was wondering why the denominator was 20 and not the average population (19.5) for the determination of the attack and mortality rates, since one staff member died?

For attack rate you would probably always use the number exposed as your denominator, as this measure is typically used for a single or point exposure event (like in this example). You would typically (as in this example) measure the attack rate AFTER the incubation period for the disease had passed, so you know how many individuals were affected out of the total exposed.
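
As a minimal numeric sketch of the distinction (only the 20 exposed staff and the single death come from the review question; the number who became ill is assumed for illustration):

```python
# Hypothetical worked example for a point-source outbreak among 20 exposed staff.
# Only the denominator (20 exposed) and the single death come from the review
# question; the number of ill staff below is assumed for illustration.
exposed = 20
ill     = 12   # hypothetical number who became ill after the incubation period
deaths  = 1    # one staff member died (as in the review question)

attack_rate    = ill / exposed      # measured after the incubation period has passed
mortality_rate = deaths / exposed   # deaths among all exposed, not the average population

print(f"Attack rate    = {attack_rate:.0%}")    # 60%
print(f"Mortality rate = {mortality_rate:.0%}") # 5%
```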

Lecture: Cohort studies_Monday, January 30th

What's your diagnosis (Pg 18-19)

In your SCAVMA Notes (Pg 18) (first table), the study results show that lameness is not associated with sales price in two-year-old in-training horses. In addition, the study results show (second table) that horses that accumulated less exercise distance (16-190 furlongs) during the last 60 days before the sales were sold for more money, compared to horses that accumulated more exercise distance (191-250 furlongs). Previously published studies have shown that less exercise distance is associated with a higher risk of musculoskeletal injury in racehorses (because the horse is diseased or injured and, when re-introduced to high-speed training or racing, is not sound).

The results on Pg 19 show that the number of furlongs galloped during the last 60 days before the sales was significantly lower in lame horses with a high commercial value (250 furlongs), compared to lame horses with a low commercial value (412 furlongs). We concluded that yearling sale purchase price can affect the trainer's management decisions on the amount of training prescribed to horses affected with lameness. Eight furlongs = 1 mile.

Dose-response relationship (Pg 23)

In your Notes (Pg 23), the answer (D) to Question 1 reads 'There is a linear relationship between ASA and risk of mortality'. The rationale is that the odds of mortality increased from 1.00 (ASA 3) to 3.7 (ASA 4) to 13.4 (ASA 5). However, a more accurate description of this observed association is: a dose-response relationship was observed between ASA clinical scores and the odds of mortality. In epidemiology, this evidence supports a hypothesis that the association is causal. In class, I mentioned that answers B (ASA is associated with risk of mortality) and D are correct. However, the latter is more appropriate (revised answer: a dose-response relationship was observed between ASA and mortality). I hope this feedback helps clarify the question that was presented in class.

Lecture: Case-control studies_Tuesday, January 24th

Confounding (Epidemiology. Leon Gordis, Ch 15 + Pediatrics 2010;126:477-483)

A problem in observational epidemiologic studies is that we observe a true association and are tempted to derive a causal inference when, in fact, the relationship may not be causal. In a study of whether factor A is a cause of disease B, we say that a third factor, factor X, is a confounder if the following are true:

Factor X (child fed pet in kitchen) is a known risk factor for disease B (Salmonella infection).

Factor X (child fed pet in kitchen) is associated with factor A (child attends day care), but is not a result of factor A.

In the example presented in class, the association observed between 'child attends day care' and Salmonella infection has two explanations: (1) attending day care actually causes Salmonella infection, or (2) the observed association with attending day care is the result of confounding by 'child fed pet in kitchen' (where there is evidence that dog food is contaminated with Salmonella). In the study published in Pediatrics (2010;126:477-483), one exposure factor associated with illness was 'child attends day care'. However, the association between 'child fed pet in kitchen' and 'child attends day care' was not examined. If this association had been examined and confirmed, then we would have evidence that the observed association between 'child attends day care' and Salmonella infection may not be causal.

What is the 95% confidence interval (CI)?

The 95% CI is a parameter used in epidemiologic studies to measure the precision of an estimated measure of association (eg, OR). One is 95% confident in the assertion that the true value (eg, the true OR) falls within this interval. If a study results in a 95% CI for an OR of 2.1 ranging from 0.90 to 4.90, this means that the interval is calculated according to a principle whereby 95 out of 100 such intervals would contain the true OR value. One might concentrate on the lower limit (0.90), see that this value falls below the null value (OR = 1, no association), and conclude that the association is not significant at the 0.05 level. However, it is more important to pay attention to the general position of the interval (its two limits). In the example above, while the confidence interval overlaps the null value (OR = 1), the likelihood of any given value of the true parameter is not uniform across the range of values contained in the confidence interval: it is maximum at the point estimate (eg, OR = 2.1) and declines as the values move away from it. The lower limit (0.90) of the interval is very unlikely, as is the uppermost value (OR = 4.90). The confidence interval expresses the statistical uncertainty of the point estimate (eg, OR = 2.1) and should not be mechanically and erroneously interpreted as a range of equally likely possible values.

If the interval is too broad and includes values which are consistent with no effect (eg, 0.6) as well as ones which are consistent with a considerable effect (eg, 60.0), the study is merely uninformative, ie, its precision is too low.
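
For concreteness, here is a minimal sketch of one common way to compute a 95% CI for an OR from a 2x2 table (Woolf's logit method; the cell counts below are hypothetical, chosen only to land near the OR = 2.1 example above):

```python
import math

# 95% CI for an odds ratio via Woolf's logit method (hypothetical 2x2 counts).
#                 Exposed   Non-exposed
# Cases              a          b
# Controls           c          d
a, b, c, d = 20, 15, 25, 40

odds_ratio = (a * d) / (b * c)
se_log_or  = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI: {lower:.2f} to {upper:.2f}")  # ~2.13 (0.93 to 4.92)
```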

Lecture: Clinical trials_Wednesday, January 18th

Prophylactic versus therapeutic clinical trials

In prophylactic clinical trials, healthy animals are used because the objective is to prevent a disease event. Example: a clinical trial is conducted to assess the efficacy of a new vaccine to prevent respiratory disease in dogs.

In therapeutic clinical trials, sick animals are used because the objective is to cure a disease condition in affected animals. Example: a clinical trial is conducted to assess the efficacy of a new therapeutic agent for treatment of cancer (eg, melanoma)  in dogs.

Simultaneous versus sequential enrollment of study animals

In general, enrollment of study animals in prophylactic clinical trials is simultaneous. This is feasible when a large number of animals (free of disease and meeting all inclusion criteria) are available at one point in time or over a short period of time. This situation is common in clinical trials conducted on large commercial dairy farms with a large number of cows classified as healthy or not affected with the disease event of interest (outcome). One exception I presented in class today was a prophylactic clinical trial to assess the efficacy of hoof health examination and trimming (cow pedicure) at ~200 days after calving on reducing the incidence of lameness in Holstein cows during late lactation (eg, 201-300 days after calving) (JAVMA 2007;230:89-93). In this clinical trial, healthy (non-lame) cows were enrolled sequentially when they reached ~200 days in lactation.

In general, enrollment of study animals in therapeutic clinical trials is sequential. Animals are assigned randomly into one of two or more experimental groups as they are diagnosed with the disease event of interest. The enrollment process may take weeks or months until the required sample size per experimental group is met.

Simultaneous enrollment of study (sick) animals in a therapeutic clinical trial can have some limitations when the response to treatment is expected to be different between animals with an acute infection and animals with a chronic infection. In the study we discussed in class today (JAVMA 1996;209:1134-1136), the enrollment of cows with papillomatous digital dermatitis (PDD) was simultaneous. One study limitation was that the number of study cows affected with early PDD lesions (strawberry-like lesions) or late PDD lesions (more mature, papillomatous lesions) in each experimental group was not known. Clinical observations by dairy veterinarians suggest that cows affected with early PDD lesions respond better to treatment than cows with late (more mature) lesions.

Declaring (yes, no) observed differences (outcomes) relevant and significant: four hypothetical scenarios

Scenario 1: Declaring a difference in incidence of respiratory disease between non-vaccinated and vaccinated horses as relevant and statistically significant (using a P value of 0.05 as a cut-off point).

A study reported that the incidence of respiratory disease was 22% in non-vaccinated horses and 10% in vaccinated horses. The analysis (outcome comparison) produced a P value of 0.01. First, this difference is relevant because the investigators and stakeholders had agreed, prior to the implementation of the study, that a ≥ 50% reduction would be considered clinically relevant (ie, if the incidence of respiratory disease drops from 22% in non-vaccinated horses to 11% or less in vaccinated horses). Second, this observed difference is (statistically) significant because the P value is < 0.05 (the probability of incurring a Type I error, ie, declaring that there is a difference when in fact there is no difference, is less than 5%). You reject the null hypothesis that the incidence of respiratory disease is not different between groups and accept the alternative hypothesis (that the incidence of respiratory disease is different).

Scenario 2: Declaring a difference in incidence of respiratory disease between non-vaccinated and vaccinated horses as relevant, but not statistically significant (using a P value of 0.05 as a cut-off point).

A study reported that the incidence of respiratory disease was 22% in non-vaccinated horses and 10% in vaccinated horses. The analysis (outcome comparison) produced a P value of 0.30. First, this difference could be considered relevant because the investigators and stakeholders had agreed, prior to the implementation of the study, that a ≥ 50% reduction would be considered clinically relevant (ie, if the incidence of respiratory disease drops from 22% in non-vaccinated horses to 11% or less in vaccinated horses). However, this observed difference is not (statistically) significant because the calculated P value is > 0.05 (the probability of incurring a Type I error, ie, declaring that there is a difference when in fact there is no difference, is greater than 5%). You then fail to reject the null hypothesis that the incidence of respiratory disease is not different between groups. This situation could occur in studies with a small sample size, leading to inconclusive results.

Scenario 3: Declaring a difference in incidence of respiratory disease between non-vaccinated and vaccinated horses as irrelevant, but statistically significant (using a P value of 0.05 as a cut-off point).

A study reported that the incidence of respiratory disease was 22% in non-vaccinated horses and 20% in vaccinated horses. The analysis (outcome comparison) produced a P value of 0.01. First, this difference could be considered not relevant because the investigators and stakeholders had agreed, prior to the implementation of the study, that a ≥ 50% reduction would be considered clinically relevant (ie, if the incidence of respiratory disease drops from 22% in non-vaccinated horses to 11% or less in vaccinated horses). However, this observed difference is (statistically) significant because the calculated P value is < 0.05 (the probability of incurring a Type I error, ie, declaring that there is a difference when in fact there is no difference, is less than 5%). You then reject the null hypothesis that the incidence of respiratory disease is not different between groups. In conclusion, the observed difference is statistically significant, but so what? The difference (reduction) is only 2% (22 − 20 = 2%). This situation could occur in studies with a large sample size, leading to (statistically) significant but clinically irrelevant results.

Scenario 4: Declaring a difference in incidence of respiratory disease between non-vaccinated and vaccinated horses as irrelevant and not statistically significant (using a P value of 0.05 as a cut-off point).

A study reported that the incidence of respiratory disease was 22% in non-vaccinated horses and 20% in vaccinated horses. The analysis (outcome comparison) produced a P value of 0.30. First, this difference is not relevant because the investigators and stakeholders had agreed, prior to the implementation of the study, that a ≥ 50% reduction would be considered clinically relevant (ie, if the incidence of respiratory disease drops from 22% in non-vaccinated horses to 11% or less in vaccinated horses). Second, this observed difference is not (statistically) significant because the calculated P value is > 0.05 (the probability of incurring a Type I error, ie, declaring that there is a difference when in fact there is no difference, is greater than 5%). You then fail to reject the null hypothesis that the incidence of respiratory disease is not different between groups.
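
One way to see how the same observed difference can be significant in one study and not another is to compute the P value directly. The sketch below uses an ordinary two-proportion z-test (one common choice; the analyses behind the scenarios above are not specified) with hypothetical group sizes:

```python
import math

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided P value for a difference between two proportions (z-test)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1/n1 + 1/n2))
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))   # two-sided P from the normal distribution

# Nearly the same observed difference (22% vs 10%) with two hypothetical sample sizes:
print(f"n = 100 per group: P = {two_proportion_p_value(22, 100, 10, 100):.3f}")  # ~0.02, significant
print(f"n =  40 per group: P = {two_proportion_p_value( 9,  40,  4,  40):.3f}")  # ~0.13, not significant
```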

END 

Lecture: Sampling and sample size_Monday, January 9th

I'm curious about one of your stated learning objectives from Thursday's lecture. The fourth objective says that we should know how to calculate sample size. Is this true? It all seemed very confusing, and it didn't seem like there was an easy answer to get. It seemed like you needed all of those programs that you showed us.

The fourth objective of Thursday's lecture was not accomplished, as we did not have enough time to practice with examples using computer software, particularly OpenEpi. Sample size is a topic that will be reviewed again during lectures in Block II (clinical trials, case-control studies, cohort studies, cross-sectional studies, and a computer lab scheduled for February 1st).

Calculation of sample size is a relatively straightforward procedure if we know the assumptions (input parameters) required for each study under consideration. For example, in class I presented an example of how to calculate the number of farms required for estimating the prevalence of equine farms with one or more horses affected with anhidrosis in the state of Florida. We used an Excel file (posted on the course website / EPI TOOLS / 4th link / 2nd tab: prevalence) and the following assumptions (a small calculation sketch follows the list below):

Population: 12,750 farms in Florida

Precision: 3%

Expected prevalence: 20%

Sample size = 649 farms
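
For readers who want to see the formula behind the module, here is a minimal sketch (an assumption on my part about what the Excel tab implements: the usual sample size formula for estimating a proportion, with a finite population correction; it does reproduce the 649 farms above):

```python
import math

def sample_size_prevalence(expected_prev, precision, population, z=1.96):
    """Sample size to estimate a prevalence with given precision at ~95% confidence."""
    n0 = (z**2) * expected_prev * (1 - expected_prev) / precision**2
    # finite population correction for a known population size
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size_prevalence(0.20, 0.03, 12750))  # 649 farms
```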

I suggest you visit the course website www.ufvetmedepidemiology.com, download the Excel file to your PC, and practice with this specific module for calculating the sample size needed to estimate prevalence of disease.

Sample size methods used in prevalence studies, case-control studies, cohort studies, and clinical trials, as well as methods to determine presence/absence of disease, are different. To learn the specific parameters used in each type of study, I suggest you review your PowerPoint handouts (pgs 17-20) and the book Veterinary Epidemiologic Research (Ian Dohoo et al, Ch 2). You can also use office hours for further guidance and assistance. E-mail is the best way to schedule an appointment (hernandezja@ufl.edu). My office is located in Deriso Hall, Rm 120 (Shealy Drive, next to the new parking lot, across from the SAH).

Is sample size something that you would expect us to be able to calculate for a test?

Students will not be required to make sample size calculations during the mid-term quiz or final exam.

END

Lecture: Sampling and sample size_Friday, January 5th 

There were a few things I wasn't clear about in today's lecture and I was hoping you could clear them up for me: What is the relationship between the P-value and Type I error? Here's what I seemed to understand, but I'm not sure I'm correct:  An increased P-value (evidence for the null hypothesis) increases the probability of incurring a type I error.

The P value is a statistical parameter used for significance testing (ie, testing the null hypothesis), to assess whether we are (or are not) incurring a Type I error (eg, declaring that there is a difference when in fact there is no difference). Yes, an increased P value increases the probability of incurring a Type I error if we declare a difference. For example, with a calculated P value of 0.31, we cannot reject the null hypothesis that the observed incidence of respiratory disease is not different between vaccinated and non-vaccinated horses; the probability of declaring, in error, that there is a difference is high (ie, > 0.05 or 5%). We would only have 69% confidence that we are not mistaken (1.00 − 0.31 = 0.69).
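
To make the Type I error idea concrete, here is a minimal simulation sketch (all numbers hypothetical; it reuses the same two-proportion z-test sketched earlier): when both groups truly have the same incidence and we reject the null hypothesis whenever P < 0.05, we wrongly declare a difference about 5% of the time.

```python
import math, random

def two_proportion_p_value(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1/n1 + 1/n2))
    if se == 0:
        return 1.0
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))

random.seed(1)
n, true_incidence, trials = 100, 0.22, 10_000
false_positives = 0
for _ in range(trials):
    # simulate two groups with IDENTICAL true incidence (the null hypothesis is true)
    x1 = sum(random.random() < true_incidence for _ in range(n))
    x2 = sum(random.random() < true_incidence for _ in range(n))
    if two_proportion_p_value(x1, n, x2, n) < 0.05:
        false_positives += 1   # we would wrongly declare a difference (Type I error)

print(f"Type I error rate ≈ {false_positives / trials:.1%}")  # close to 5%
```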

How do you calculate efficacy? You did it really briefly with regard to the 16% of vaccinated horses that had disease, but I don't know what you did....

In the example presented in class, 22% of non-vaccinated horses developed clinical signs of respiratory disease vs 16% of vaccinated horses. In this situation, if the incidence of respiratory disease among vaccinated horses were 11%, we could conclude that the vaccine efficacy was 50%, because the frequency of expected cases among vaccinates dropped by half (from 22% to 11%). However, in the example presented in class, 16% of vaccinates developed respiratory disease; thus, the vaccine efficacy has to be less than 50%. One way to calculate vaccine efficacy is by using the following formula:

Vaccine efficacy = (Relative Risk − 1) / RR

RR = 0.22 / 0.16 = 1.375

Vaccine efficacy = (1.375 − 1) / 1.375 = 0.375 / 1.375 = 0.27, or 27%
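
As a quick check (a sketch of my own, not from the lecture), the same 27% falls out of the more common formulation, vaccine efficacy = 1 − (incidence in vaccinates / incidence in non-vaccinates):

```python
# Two algebraically equivalent ways to compute vaccine efficacy
# from the incidences above (22% non-vaccinated, 16% vaccinated).
ar_unvacc, ar_vacc = 0.22, 0.16

rr = ar_unvacc / ar_vacc                 # as in the formula above
ve_formula  = (rr - 1) / rr              # (1.375 - 1) / 1.375
ve_standard = 1 - (ar_vacc / ar_unvacc)  # 1 - 0.727

print(f"{ve_formula:.0%}  {ve_standard:.0%}")  # both ≈ 27%
```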

Relative Risk is an epidemiologic measure of association that will be covered in more detail during class on January 11th.

I'm a little confused about "Confidence" - I have two definitions written down for it, but they don't seem compatible. Are they? Or did I interpret something wrong?  - “Confidence” = % of the time that a test correctly declares there is NO difference.

The interpretation is a bit different. In the example presented in class (January 5th), the term “confidence” was used first to explain Type I error. With a calculated P value of 0.01, we can reject the null hypothesis that the incidence of respiratory disease among non-vaccinated (22%) and vaccinated (16%) horses is not different, and declare that the observed difference is statistically significant at the 0.05 significance level. With a P value of 0.01, or 1%, we have 99% (100 − 1 = 99%) “confidence” that we are not mistaken in our decision to reject the null hypothesis and declare that the observed difference of 22% vs 16% is significant. However, this result does not mean the observed difference is clinically relevant.

  - "Confidence" is the probability that prevalence = prevalence +/- precision (ie. 95% confidence that prevalence = 20% +/- 3%

To calculate the sample size needed to estimate the prevalence of a disease, three required inputs are (a) expected prevalence; (b) precision; and (c) confidence level (usually 95%). In this situation, we want to design a prevalence study with an expected prevalence of (say) 20% and a precision of 3%, so that if the observed prevalence is indeed 20%, we have 95% confidence that this prevalence (proportion) estimate is between 17 and 23%.
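
As a minimal sketch of the same relationship running the other way (assuming the usual normal-approximation formula; n = 683 is the simple random-sample size implied by the earlier anhidrosis example, before any finite population correction):

```python
import math

# Precision (half-width of the 95% CI) for an observed prevalence of 20%
# with n = 683 sampled farms (hypothetical tie-in to the earlier example).
p, n = 0.20, 683
precision = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"95% CI: {p - precision:.1%} to {p + precision:.1%}")  # about 17% to 23%
```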

Question number 4 from the baseline quiz is one of the last slides in the notes but I don't think we covered it today. I'm still not clear on the answer...

In case-control studies, the frequency of exposure factors is compared between case (disease-positive) and control (disease-negative) subjects. If smoking is a risk factor for lung cancer in people, then the frequency of smokers (exposure) has to be higher among cases than among controls. Thus, for sample size calculations in case-control studies, one required input is the expected frequency of exposure among controls (and cases). This topic will be covered in more detail in the case-control studies session on January 24th.

Expected incidence of disease or expected prevalence of disease are two parameters required for cohort studies and prevalence studies, respectively.

One take-home message that I mentioned in class yesterday was that, to determine “how many animals we need,” the answer is not simply 10, 20, or 30; it depends on the study objective, the type of study under consideration, and the health or disease outcomes of interest, because the required inputs are different.

END

Graduate course VME 6771 Veterinary Epidemiologic Research

Jorge Hernandez - Tuesday, September 20, 2011

The graduate course VME 6771 Veterinary Epidemiologic Research (3 credits) will now be offered during the Summer semester (instead of the Spring semester). For more information, please contact Dr. Jorge Hernandez via e-mail at hernandezja@ufl.edu or phone call (352) 294 4305 (office).

The seminar series in international veterinary medicine starts on Thursday, September 1st

Jorge Hernandez - Tuesday, July 12, 2011
The seminar series in international veterinary medicine will be offered again this year during the Fall semester. The first session will be held on Thursday, September 1st, from 4.30 to 6.00pm in Lecture Hall B. A social gathering (with refreshments) is scheduled from 4.30 to 5.00pm. The seminar starts at 5.00pm (sharp). The seminar series is a one-credit elective course (VEM 5931) open to all UF DVM students. The main objective of this course is to enhance student awareness of global health issues of importance to the public, the veterinary profession, and other disciplines in the health and social sciences. Another objective is to facilitate education and research opportunities abroad for UF DVM students under the supervision of UF faculty and international scholars. To register, you should attend the first seminar session on September 1st and sign the attendance sheet.

This year, the seminar series will include presentations by speakers from the UF College of Veterinary Medicine, the UF Department of Global and Environmental Health, Colorado State University, the National University of Agriculture in Havana, USDA Foreign Agricultural Services (Washington DC), the Ministry of Agriculture: Veterinary Services in Barbados, the Inter-American Institute for Cooperation on Agriculture (IICA, San Jose, Costa Rica), and the Center of International Cooperation in Agricultural Research for Development (Guadeloupe).

In general, the seminar format includes a 30-minute PowerPoint presentation followed by a 20-30 minute Q&A session. For more information, please visit the UF CVM Office of International Programs (OIP) website.

Sincerely,
Jorge Hernandez
Professor and Director of the OIP
