Emergency Medical Technician-Paramedic Competence and Reregistration with the National Registry of Emergency Medical Technicians
Society has always demanded accountability from its health care professionals, although prior to the 1900s, the public and government often allowed professions sole responsibility for monitoring their own performance. By the turn of the century many people had grown skeptical of the ability and willingness of professionals to monitor their own ranks and to adequately protect the public from incompetent practitioners. The emergence of accreditation and credentialing was the initial mechanism by which the public was assured that health care professionals were competent prior to being awarded the legal right to practice (Flexner, 1910).
National board examinations evolved as the principal means of entering health care professions. These examinations focused primarily on knowledge assessment rather than on the relationship between knowledge and competent practice. Although national board examinations continue to move away from the recall of facts and toward the synthesis of complex information and its application to clinical decision-making (Kalkwarf, 2000), they still primarily assess competence at the point of entry. While this initially seemed adequate, it gave little consideration to the issue of maintaining competence. Public skepticism, malpractice allegations and litigation, and the exponential increase in the rate of change in professional knowledge and practice forced many professionals and state regulators to adopt a life-long credential model. This model, currently in use, incorporates CME requirements as a primary mechanism of maintaining competence (Gunn, 1999).
The continued assurance of clinical competence is the goal of CME. It is commonly believed that mandatory continuing education guarantees continued competence (Hoffman, 1980), but there is little evidence supporting this assertion.
According to Finocchio et al. (1995), CME requirements generally ask only that the individual attend approved continuing education courses. There is little evidence of a demonstrated relationship between participation in CME and job performance or clinical outcome (Gross, 1994). Courses may not address the actual needs of health professionals, and there is no assessment of students' understanding of the course material. In addition, most CME courses are subject to only cursory regulatory review. As a result, there is growing concern over whether mandatory CME courses adequately address the need for continued competence (Swankin, 1997).
In 1995, the Pew Health Professions Commission recommended that boards abandon arbitrary CME requirements and “develop, implement and evaluate continuing competency requirements to assure the continuing competence of regulated health care professionals” (Finocchio et al., 1995). This has remained a major issue. In October 1998, the Taskforce on Health Care Workforce Regulation of the Pew Health Professions Commission published a report titled “Strengthening Consumer Protection: Priorities for Health Care Workforce Regulation.” This report emphasized the critical role that health care workforce regulation plays in consumer protection, not only by regulating initial entry to the profession, but also by maintaining oversight throughout the health care professionals’ careers (Finocchio et al., 1995).
Rather than looking solely at the accumulation of knowledge as an indicator of continued competence, some researchers (e.g., Gunn, 1999) now recommend that a multifaceted approach be used to assess those professional attributes deemed essential for achieving quality patient outcomes. However, assessing the full breadth of professional competence is a complex problem. Illustrative of this is the experience of Washington State's Dental Quality Assurance Commission, which made a serious effort to reform the assessment of competence, only to discover it could not achieve this goal because it could not determine an acceptable means of evaluating competence (Kinney and Anderson, 1997).
Other recommendations for assessing competence were derived from focus groups conducted by Tilson and Gebbie (2001). These recommendations included changes to the certification process, use of mentoring programs, development of innovative educational material and programs (including the use of distance-based learning), use of professional associations as facilitators, use of professional publications, and increased governmental/agency support. Some researchers (e.g., Karnath, Thornton, and Frye, 2002) have also advocated the use of manikin simulators. Carlson and Kalkwarf (1997) suggest a combination of simulations, continuing education with measurable outcomes, case presentation, and practice audits. However, to date no consensus has been reached on the best way to ensure continued competence.