Peer review quality measurement and process standardization
In the second half of the 20th century, concepts and methods for quality measurement and improvement changed dramatically. In the 1960s and 1970s, peer review was based mainly on clinical audits, which selected a clinical outcome, such as mortality, and examined medical records to determine whether standards of care had been met. These audits were considered a minimalist approach to meeting The Joint Commission standards and coincided with a hospital quality culture that, at the time, saw no need for an extensive quality evaluation system.
External peer review became a factor in hospital-based care with CMS’ implementation of Professional Standards Review Organizations in the 1970s, followed by Peer Review Organizations (PROs) in the 1980s. These organizations were designed to address both the quality and the cost of care for Medicare patients through retrospective audits. On the commercial payer side, precertification programs with specific criteria were implemented to address medical necessity. Finally, in the 1990s, some PROs and independent companies began to offer external peer review services to medical staffs, either to resolve internal differences of opinion or to fill gaps in expertise.
In 1979, The Joint Commission’s new standards called for a systematic hospital quality assurance (QA) program that included peer review. Although a step up from the clinical audit, QA was still an inspection model focused on meeting a minimal standard of competency. One QA peer review standard required each department to use predefined indicators to identify cases for review more systematically. Initially, The Joint Commission did not tell hospitals which indicators to choose or how many. Does that sound familiar? Eventually, a minimum requirement of two indicators per department was established; unfortunately, some QA staff continued to uphold this standard long after it ceased to be a requirement.
In the 1980s, the concept of process standardization known as Total Quality Management (TQM) began infiltrating healthcare as interest grew in further defining healthcare quality measures. The Joint Commission’s project called “The Agenda for Change” included an attempt to create a mandated, nationally standardized set of clinical performance measures, with a promise to hospitals that the data collected would not be made public. Initial efforts based the indicators on statistical analyses of large-scale claims databases. The Joint Commission later decided that the indicators would derive primarily from process measures developed by expert specialty task forces and would use data that was more clinically relevant but required hospital data abstraction. Although primarily focused on hospital performance, many of these indicators foreshadowed physician performance measures such as core measures. However, many hospitals objected to the amount of time needed for data abstraction, and by 1991 The Joint Commission had abandoned the goal of a mandated national indicator reporting system.
In the late 1990s, CMS accomplished by regulation what The Joint Commission had attempted through cooperation. It implemented a mandatory indicator reporting system that outlined core measures and defined outcomes using claims data. This initiative pushed hospitals to invest in software for outcomes analysis and in FTEs for data abstraction. CMS also made both the process and outcomes data publicly available at the hospital level. This data transparency had an immediate positive impact: compliance rates for core measures far exceeded those achieved through voluntary adherence to the practice guidelines recommended by medical specialty societies, and the future use of publicly available data for all types of healthcare performance measures was solidified. Because core measures were based on physician actions, data transparency also paved the way for peer review to move beyond case review by providing data for physician-relevant rule and rate indicators.