Measurement

A measurement strategy is a balanced set of measures designed to tell us whether the changes we are making actually improve the systems of care and patient outcomes. The measurement strategy for Project TICKER contains structure, process, and outcome measures to 1) evaluate the patient- and family-centered, safe infrastructure and 2) measure quality outcomes for pediatric cardiac surgery at our institution. The full set of measures presented in this toolkit offers a wide range of options that can be narrowed down depending on the focus of your project, your team’s resources for quality analysis, and the feasibility of obtaining certain data within your institution. As a general rule, improvement projects should not follow this many measures. Project TICKER followed an unusually large number because it drew on a national database; projects whose measures are already collected as part of a database differ from typical quality improvement projects in that a much larger set of outcome measures is feasible. In this section, we describe the measures, provide tables specifying how each is collected and analyzed, and offer some tips on presenting your data.

STS National Database

The Society of Thoracic Surgeons (STS) National Database is divided into adult cardiac, general thoracic, and congenital heart surgery divisions. Established in 1989 as a repository for clinical quality and safety indicators, the Database collects over 500 data elements per surgical patient. Operational definitions containing inclusion and exclusion criteria are determined by STS and followed by data managers at contributing institutions. Information regarding participation, harvest schedule, software vendors, and a sample national report can be found on the STS Web site.

Structure Measures

Structure measures indicate aspects of the care system, such as organizational support, infrastructure, and dedication of resources.1,2 Table 1 shows the structure measures recommended by STS, with the exception of teamwork training, which we added. The table shows how to measure progress for each element, whether it has been fully implemented or is only partially in place. In Getting Started, we describe the purpose of each structure measure and how we addressed it.

Table 1. Project TICKER Structure Measures for Full or Partial Implementation

Process Measures

Process measures should indicate whether specific steps of an improvement system are functioning at the desired level. For Project TICKER, we focused on teamwork training and integrated clinical pathway (ICP) utilization. Table 2 presents the measures we found most critical to understanding our processes. The target populations at our hospital were: (A) all team members caring for inpatient pediatric congenital heart surgery patients, (B) all team members caring for inpatient pediatric congenital heart surgery patients who attended tailored TeamSTEPPS training for Project TICKER, (C) all inpatient pediatric congenital heart surgery cases, and (D) all ASD/VSD/TOF inpatient pediatric congenital heart surgery patients who meet ICP criteria.

In addition to the measures listed in Table 2, we identified supplementary measures as important, but optional, depending on resource availability. The process measures listed below are not included in the measurement table, but examples of charts are available on our dashboard.

  • Learning Benchmarks assessment to evaluate participants’ learning during training.
  • Teamwork behavior scores based on observation (see TENTS Tool for Teamwork Observation: An Instructional Guide).
  • Unit-based use of standardized communication tools, such as brief or debrief.
  • Prophylactic antibiotic administration. Full compliance with this measure requires documentation of antibiotic administration by both anesthesia and perfusion technicians. Neither element is collected in the STS database, but both may be abstracted from anesthesia and perfusion records at your institution (a minimal tallying sketch follows this list).
  • Peripheral Vascular Lab orders (for investigating deep vein thrombosis [DVT] rates). Our Advisory Council was interested in the incidence of DVT among pediatric congenital heart surgery patients. These data are not collected in the STS database, but we were able to obtain billing data for investigational studies from our own institution. Clinicians then manually reviewed patient records to determine which tests were positive.
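For the prophylactic antibiotic measure above, compliance can be tallied once the documentation items are abstracted. The following is a minimal sketch, not part of the toolkit’s specifications: the field names (anesthesia_doc, perfusion_doc) and the sample records are hypothetical stand-ins for whatever your abstraction process produces.

```python
"""Sketch of tallying monthly prophylactic-antibiotic compliance.

A case counts as compliant only if BOTH anesthesia and perfusion documented
administration. Field names (anesthesia_doc, perfusion_doc) and the sample
records are hypothetical, not STS data elements.
"""
from collections import defaultdict

records = [  # abstracted from anesthesia and perfusion records
    {"surgery_date": "2012-03-14", "anesthesia_doc": True, "perfusion_doc": True},
    {"surgery_date": "2012-03-22", "anesthesia_doc": True, "perfusion_doc": False},
    {"surgery_date": "2012-04-05", "anesthesia_doc": True, "perfusion_doc": True},
]

by_month = defaultdict(lambda: [0, 0])  # month -> [compliant cases, total cases]
for r in records:
    month = r["surgery_date"][:7]  # subgroup by surgical month (YYYY-MM)
    by_month[month][0] += int(r["anesthesia_doc"] and r["perfusion_doc"])
    by_month[month][1] += 1

for month in sorted(by_month):
    compliant, total = by_month[month]
    print(f"{month}: {compliant}/{total} cases fully documented ({100 * compliant / total:.0f}%)")
```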

Table 2. Project TICKER Process Measures

Outcome Measures

Outcome measures should indicate how patients are affected by the system as a whole. For Project TICKER, this meant understanding whether the care provided was patient-centered, efficient, effective, and free of complications. With the exception of hospital-acquired infection rates, all outcome measures presented below were collected according to STS database operational definitions. If your institution does not yet participate in the STS database, you might select a few of these measures based on data you can already obtain for your patient population while preparing for STS database membership.

We used statistical process control (SPC) methods3,4 to study how processes changed over time. Data were presented in control charts with upper and lower control limits, which allow common cause variation (naturally occurring and always present) to be distinguished from special cause variation (indicating an external influence that puts the process out of statistical control). Run charts are a simpler tool that can also detect changes in a process over time and do not require sophisticated analysis, but they cannot show process stability or capability.
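To make the control-limit idea concrete, here is a minimal sketch of a p-chart calculation for a monthly complication proportion. It is illustrative only: the subgroup counts are invented, the three-sigma limits are one standard construction, and a real analysis should follow the SPC references cited above for chart selection and special-cause rules.

```python
"""Minimal p-chart sketch for a monthly complication proportion.

Illustrative only: the subgroup counts are invented, and a real analysis
should follow SPC guidance (e.g., Provost & Murray; Benneyan) for chart
selection and special-cause rules.
"""
import math

# Hypothetical monthly subgroups: (cases in month, cases with the complication)
subgroups = [(24, 3), (30, 2), (27, 4), (31, 1), (29, 2), (26, 5), (33, 2), (28, 1)]

total_cases = sum(n for n, _ in subgroups)
total_events = sum(x for _, x in subgroups)
p_bar = total_events / total_cases  # center line: overall proportion

for month, (n, x) in enumerate(subgroups, start=1):
    p = x / n
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)  # spread depends on subgroup size
    ucl = min(1.0, p_bar + 3 * sigma)           # upper control limit
    lcl = max(0.0, p_bar - 3 * sigma)           # lower control limit
    signal = "outside limits" if (p > ucl or p < lcl) else ""
    print(f"month {month}: p={p:.3f}  CL={p_bar:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}  {signal}")
```

A point outside the limits is only one special-cause signal; run-based rules (for example, a sustained shift of points on one side of the center line) also apply, and measures expressed as counts per exposure (such as infections per 1,000 line-days) would generally call for a u-chart rather than a p-chart.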

Table 3 presents the outcome measures for our institution that could be analyzed with a control chart, and Table 4 presents those that could not because the events occurred too rarely. You will want to follow SPC guidelines and techniques when determining measure feasibility and chart selection for your own institution’s data. The notes presented in the second column are described after Table 4, and the target populations are the same as those described in the Process Measures section.

In addition to the measures presented in Tables 3 and 4, the following measures are considered important, but optional, depending on your resource availability:

  • Financial implications, such as total lab charges, chest radiographs, pain medication charges, sedation medication charges, cardiac medication charges, total hospital charges.
  • Thromboses (deep vein and arterial). As previously mentioned, these data are not collected through STS, but may be available through your organization’s internal databases.
  • Patient safety culture as measured by AHRQ’s Hospital Survey on Patient Safety Culture.

Table 3. Project TICKER Outcome Measures to be Analyzed with a Control Chart

Table 4. Project TICKER Outcome Measures Too Rare to Analyze with a Control Chart

Data Management

To use your institution’s STS data and perform the analyses outlined for the outcome measures, you will need to extract the patient-level data and organize them for analysis. This process will vary depending on the resources available at your institution and on the vendor and software package you use. The following tips helped us complete this process:

  • Ask to receive the following elements for each project measure:
      • Unique identifiers for surgery/case
      • Medical record number
      • Dates: admission, surgery, discharge, and complication(s)
      • Primary surgery during that admission
  • Group data according to type: continuous or discrete.
  • For each measure, establish which date is most appropriate and meaningful for subgroup assignment. For example, length of stay may be best grouped according to discharge date, whereas open chest days may be better suited to surgical date. Sort data in chronological order according to that determination (a minimal grouping sketch follows this list).
  • When data are too rare to plot in run charts or control charts, consider communicating them as crude incidence rates (see Table 4 for examples).
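As one way to picture these tips, the following sketch sorts extracted patient-level rows chronologically, groups length of stay by discharge month, and reports a crude incidence rate for an event too rare to chart. The column names (case_id, admit_date, discharge_date, los_days, rare_event) and the two sample rows are placeholders, not STS data elements; your vendor export will use its own names.

```python
"""Sketch of organizing extracted patient-level data for analysis.

Column names (case_id, admit_date, discharge_date, los_days, rare_event) and
the two sample rows are placeholders; an STS vendor export will use its own
element names.
"""
from collections import defaultdict
from datetime import date

cases = [  # one row per surgery/case, as extracted from the vendor export
    {"case_id": "A001", "admit_date": date(2012, 3, 10), "surgery_date": date(2012, 3, 12),
     "discharge_date": date(2012, 3, 20), "los_days": 10, "rare_event": False},
    {"case_id": "A002", "admit_date": date(2012, 3, 25), "surgery_date": date(2012, 3, 27),
     "discharge_date": date(2012, 4, 6), "los_days": 12, "rare_event": True},
]

# 1. Choose the subgroup date per measure: here, length of stay by discharge month.
cases.sort(key=lambda c: c["discharge_date"])  # chronological order
los_by_month = defaultdict(list)
for c in cases:
    los_by_month[c["discharge_date"].strftime("%Y-%m")].append(c["los_days"])

for month, values in sorted(los_by_month.items()):
    print(f"{month}: mean LOS {sum(values) / len(values):.1f} days (n={len(values)})")

# 2. For an event too rare to chart, report a crude incidence rate instead.
events = sum(c["rare_event"] for c in cases)
print(f"Crude incidence: {events}/{len(cases)} cases ({100 * events / len(cases):.1f} per 100 cases)")
```

The same grouping step can be repeated with the surgical date as the key for measures, such as open chest days, that are better subgrouped by date of surgery.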

Data Reporting

Planning for sharing the data should coincide with communication planning, as described in Getting Started. In most communications, you will want to include written descriptions of progress and results along with visual representations of the data. The following recommendations relate to data reporting:

  • Develop a color scheme for the project and use it consistently across communication channels (e.g., Web site, charts, newsletters); a minimal charting sketch follows this list.
  • Include only the most important information in your charts and keep the format simple.
  • Engage key stakeholders to learn what measures they want to review, in what format, and with what frequency.
  • To the extent that your resources allow, tailor the measurement focus to various groups and clinical areas involved. Unit teams will likely want to see their data in more detail or separated from the aggregate.
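As an illustration of keeping charts simple and the palette consistent, the sketch below draws a basic run chart using one reusable set of project colors. The colors, labels, and data are placeholders; this is only one way such a chart might be produced, not a prescribed format.

```python
"""Minimal sketch of a simple, consistently styled run chart for reporting.

The colors, labels, and data are placeholders; reuse one palette across all
project communications (Web site, newsletters, dashboards).
"""
import statistics

import matplotlib.pyplot as plt

PROJECT_COLORS = {"line": "#1f6fb2", "median": "#c0504d"}  # one palette, reused everywhere

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
values = [12.0, 11.5, 10.8, 11.2, 9.9, 9.4]  # e.g., mean postoperative length of stay

fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(months, values, marker="o", color=PROJECT_COLORS["line"], label="Mean LOS (days)")
ax.axhline(statistics.median(values), color=PROJECT_COLORS["median"],
           linestyle="--", label="Median")
ax.set_ylabel("Days")
ax.set_title("Postoperative length of stay")
ax.legend(frameon=False)
fig.tight_layout()
fig.savefig("los_run_chart.png", dpi=150)  # drop into newsletters or the project Web site
```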

 

 

References

1. Donabedian A. The Definition of Quality and Approaches to Its Assessment. Explorations in Quality Assessment and Monitoring, Vol 1. Ann Arbor, MI: Health Administration Press; 1980.

2. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260(12):1743-1748.

3. Provost L, Murray S. The Health Care Data Guide: Learning from Data for Improvement. San Francisco, CA: Jossey-Bass; 2011.

4. Benneyan J. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care. 2003;12:458-464.