Volume 15, Number 8—August 2009
Policy Review

Strategy to Enhance Influenza Surveillance Worldwide1

Justin R. Ortiz, Viviana Sotomayor, Osvaldo C. Uez, Otavio Oliva, Deborah Bettels, Margaret McCarron, Joseph S. Bresee, and Anthony W. Mounts
Author affiliations: University of Washington, Seattle, Washington, USA (J.R. Ortiz); Ministerio de Salud, Santiago, Chile (V. Sotomayor); Instituto Nacional de Epidemiología, Mar del Plata, Argentina (O.C. Uez); Pan American Health Organization, Washington, DC, USA (O. Oliva); Centers for Disease Control and Prevention, Atlanta, Georgia, USA (D. Bettels, M. McCarron, J.S. Bresee, A.W. Mounts)

Table 3

Influenza surveillance evaluation and recommended quality indicators*

1. Timeliness
   a. Several time intervals are appropriate for routine measurement as quality indicators. These include the duration of time from
      i. Target date for data reporting from the sentinel site to the next administrative level until the actual reporting date
      ii. Target date for data reporting from the next administrative level to the national level until the actual reporting date
      iii. Date of specimen collection at facility until shipment to laboratory
      iv. Date of result availability in laboratory until date of report to referring institution and physician
      v. Date of receipt of specimen in the laboratory until result availability
   b. Metrics. Two metrics can be used to reflect timeliness indicators:
      i. Percentage of time that a site achieves target for timeliness
      ii. Average number of days for each interval over time for each site
2. Completeness
   a. Percentage of reports received from each site with complete data
   b. Percentage of data reports that are received
   c. Percentage of reported cases that have specimens collected
3. Audit. Regular field evaluations and audits at facility level of a subset of medical records to ensure
   a. Cases are being counted appropriately and not being underreported
   b. Reported cases fit the case definition
   c. Epidemiologic data are correctly and accurately abstracted
   d. Respiratory samples are being taken, stored, processed, tested, and shipped properly and in a timely fashion from all those who meet sampling criteria
   e. Sampling procedures are being done uniformly without evidence of bias
4. Data to be followed and observed for aberrations over time
   a. Number of cases reported by month for each site
   b. Number of specimens submitted by month for each site
   c. Percentage of specimens that are positive for influenza
   d. Number and percentage of ILI and SARI cases tested

*ILI, influenza-like illness; SARI, severe acute respiratory illness.
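The timeliness, completeness, and virologic indicators in Table 3 reduce to simple proportions and averages over a line list of sentinel-site reports. The following minimal Python sketch is an illustration added here, not part of the original protocol; the report fields (site, target_date, report_date, fields_complete, specimens_tested, specimens_positive), the example values, and the expected-report count are hypothetical assumptions made solely for this example.

"""
Illustrative sketch: tabulating a few of the recommended quality
indicators from a hypothetical line list of weekly sentinel-site reports.
"""
from datetime import date
from statistics import mean

# Hypothetical weekly reports from two sentinel sites (made-up values).
reports = [
    {"site": "A", "target_date": date(2009, 6, 1), "report_date": date(2009, 6, 2),
     "fields_complete": True, "specimens_tested": 12, "specimens_positive": 3},
    {"site": "A", "target_date": date(2009, 6, 8), "report_date": date(2009, 6, 8),
     "fields_complete": True, "specimens_tested": 9, "specimens_positive": 1},
    {"site": "B", "target_date": date(2009, 6, 1), "report_date": date(2009, 6, 5),
     "fields_complete": False, "specimens_tested": 7, "specimens_positive": 2},
]
expected_reports_per_site = 2  # reports expected from each site in the period

# Timeliness (item 1b): share of reports received by the target date, and
# mean reporting delay in days (reports received early count as 0 days late).
delays = [(r["report_date"] - r["target_date"]).days for r in reports]
pct_on_time = 100 * sum(d <= 0 for d in delays) / len(delays)
mean_delay = mean(max(d, 0) for d in delays)

# Completeness (items 2a-2b): share of received reports with all data fields
# filled in, and share of expected reports actually received.
pct_complete = 100 * sum(r["fields_complete"] for r in reports) / len(reports)
sites = {r["site"] for r in reports}
pct_received = 100 * len(reports) / (expected_reports_per_site * len(sites))

# Virologic indicator (item 4c): percentage of specimens positive for influenza.
pct_positive = 100 * sum(r["specimens_positive"] for r in reports) / \
               sum(r["specimens_tested"] for r in reports)

print(f"On-time reports:        {pct_on_time:.0f}%")
print(f"Mean reporting delay:   {mean_delay:.1f} days")
print(f"Reports with full data: {pct_complete:.0f}%")
print(f"Expected reports filed: {pct_received:.0f}%")
print(f"Influenza positivity:   {pct_positive:.1f}%")

In practice, the same tabulation would be run separately per site and per reporting interval, so that each site's on-time percentage and average delay can be followed over time against the targets in item 1 and aberrations in the item 4 quantities can be detected.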

1A prior version of this protocol was presented in poster form at the Options for the Control of Influenza Conference in Toronto, Ontario, Canada, June 17, 2007.
