Topic Area 2:  What are the Most Feasible and Reliable Means of Monitoring/Measuring Medication Use?


  • What are the most feasible and reliable means of identifying medication use by older drivers (including OTC medications and alcohol)?
  • What are the most feasible and reliable ways of ensuring, as best as possible, that potential test subjects are compliant with their medication regimens?

Methods to Identify Medication Use

Discussion Summary

  • When asking people to self-report medications, the manner of questioning is important to get the most complete picture of what people are taking. It may be necessary to go through a review of medical conditions/symptoms/body systems with the patient, particularly to learn about the use of non-prescription medications.  For example, a person may be asked, “What do you use when you have a headache?” or “What do you take when you have cold symptoms, and how often do you take it?”
  • Pharmacists and physicians in the group stated that the gold standard for determining what drugs people are taking is an in-home medicine review, i.e., going into a person’s home to look at and list their medications. The in-home review is a luxury, however, that researchers can seldom afford.  Next best is asking people to bring their medicines to be inventoried during an in-office review (“brown-bag” review).  These two methods provide information that could be omitted from a physician’s report of medications, because often more than one physician prescribes for a patient.  Prescribing by multiple physicians is also a drawback to the use of some pharmacy databases—for example, a Veterans Administration pharmacy database would provide an incomplete picture of medication use when patients obtain medications from physicians in the community in addition to having prescriptions filled at the VA.
  • When conducting the brown-bag method, a gerontologist indicated that it is important to do so in a private, confidential setting (as opposed to a public or social setting), where people are more likely to be forthcoming about medications they use.  It’s also important to remind people to bring in all medications, specifying both prescription and over-the-counter medications.  It may also be useful to provide a checklist of medications used for certain purposes to help them collect everything (headaches, pain, cold, etc.).
  • It may be necessary to ask people what they are taking on two separate occasions: one time as a general overview of what medications they take; and the second time to find out what they took (or didn’t take) on the particular day that driving performance is going to be measured. People may alter their medication regime when they want to perform well (as in a driving evaluation).
  • Self-reports of medication use were considered to be an adequate method by brainstorming session participants, provided that the research subject brings a “significant other” to help corroborate/confirm what is being taken, and how much or how often.  This is particularly important for people with cognitive impairment.
  • It is advisable to first ask the subject’s physician to complete a form indicating what medications have been prescribed, and then ask the patient, “Are you still taking this? Are you taking anything else?  Is there one physician who knows everything you take and helps you monitor any interactions?” 
  • If a research study is going to include alcohol, ask subjects if they have ever had problems with alcohol, or been to an AA meeting.  Also, ask them to describe their typical alcohol consumption for the week.
  • If physicians are asked to report what medicines a patient is taking, the researcher must get the names of all the prescribing physicians, and then mail a form to each physician (with the patient’s authorization to release the information) for the physician to complete.  Motor Vehicle Administration Medical Advisory Boards often follow this practice.
  • It is good to use multiple approaches to obtain data about medication use—for example, asking people and then checking an administrative claims database.
  • There are databases (pharmacy claims data—both Medicaid and Insurance claims) that may allow researchers to generate and test hypotheses related to drugs and driving impairment.
  • The Centers for Medicare and Medicaid Services (CMS) data on the national Medicaid population (a subset of all older adults who are poorer, sicker, and more likely to have disabilities) contains all drugs provided under a fee-for-service setting—both prescription and over-the-counter.  A cross-sectional examination of such data could determine the volume of polypharmacy in questionable areas, and suggest which drugs/combinations are associated with the most serious driving impairment.  Such a data-driven approach could establish a rationale for further empirical studies of the effects of drugs on driving.
  • A drawback to the use of administrative claims data is that crashes are not coded as “at fault” vs. “not at fault.”  The only thing that can be determined from the crash E-code² is whether the person was the driver.  Another caution is that in E-code analyses, crashes are underreported in both Medicaid and Medicare claims data.
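The E-code screen just described can be sketched in code.  The function below is a minimal illustration, assuming ICD-9-CM E-codes stored as strings such as "E812.1" and a hypothetical claims-record layout with an "ecodes" list; real Medicaid/Medicare extracts use their own schemas.

```python
def is_mv_traffic_ecode(code: str) -> bool:
    """Return True if an ICD-9-CM E-code falls in the motor vehicle
    traffic collision ranges cited in the footnote (E810.0-E816.9,
    E819.0-E819.9)."""
    if not code.startswith("E"):
        return False
    try:
        value = float(code[1:])
    except ValueError:
        return False
    return 810.0 <= value <= 816.9 or 819.0 <= value <= 819.9


def crash_claims(claims):
    """Keep only claims carrying at least one qualifying E-code.

    `claims` is assumed (hypothetically) to be an iterable of dicts
    with an "ecodes" key listing the E-codes attached to the claim.
    """
    return [c for c in claims
            if any(is_mv_traffic_ecode(e) for e in c.get("ecodes", []))]
```

Note that, consistent with the caution above, a screen like this identifies only that a traffic-collision injury was coded; it cannot distinguish at-fault from not-at-fault crashes.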

Rating Scale Responses

Following discussion of this topic, expert panelists and project consultants completed rating scales to rate the practicality, reliability, and cost-effectiveness, and to give an “overall” rating, for each of seven methods of identifying medication use:

  • Physician’s Reports (mailed requests for what is in the patient’s record).
  • Patient Self Reports (face-to-face).
  • Brown Bag Review.
  • Pharmacy Records (administrative claims databases).
  • In-Home Medicine Review.
  • Mailed Survey to Patient.
  • Proxy Report (concurrence of self-report by a significant other).

Participants placed a letter corresponding to a particular method on a scale that ranged from 1 (the worst rating) to 100 (the best rating), and were advised that more than one method could be assigned the same rating, if methods were considered equal.  Participants also completed an “overall” rating that was intended to incorporate all the facets (practicality, reliability, and cost-effectiveness).  Several participants provided ratings for a combination of methods.

Summary statistics were calculated across all 14 respondents, and were also calculated within 4 areas of specialty: for the 5 physicians and pharmacists (MDs and PharmDs), 3 driving evaluators (OTs/CDRSs), 3 behavioral researchers, and 3 database experts.  These statistics are presented in tables in Appendix C.  The results are briefly described below.
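As an illustration of how such summary statistics can be computed across all respondents and within specialty groups, the sketch below uses made-up ratings and group labels; the actual figures appear in Appendix C.

```python
from collections import defaultdict
from statistics import mean


def summarize(ratings):
    """Compute the overall mean rating and the mean within each
    specialty group.

    `ratings` is a list of (specialty, rating) pairs for one method,
    with ratings on the 1-100 scale described in the text.
    """
    by_group = defaultdict(list)
    for specialty, rating in ratings:
        by_group[specialty].append(rating)
    overall = mean(rating for _, rating in ratings)
    return overall, {group: mean(vals) for group, vals in by_group.items()}


# Illustrative (made-up) practicality ratings for a single method:
sample = [("MD/PharmD", 80), ("MD/PharmD", 70),
          ("OT/CDRS", 77), ("database", 78)]
```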

Without combining methods, the methods that received the three highest ratings across all experts in terms of practicality were: pharmacy records (mean rating = 78), patient face-to-face self-report (mean rating = 76), and mailed survey to patient (mean rating = 70).  The method rated as least practical across all experts was the in-home medicine review (mean rating = 36).  Practicality ratings provided by the physicians and pharmacists, the behavioral researchers, and the database experts showed the same three methods with the highest practicality ratings.  Driving evaluators rated mail-in physician reports (mean rating = 77), pharmacy records (mean rating = 77), and mailed surveys to patients (mean rating = 72) as the most practical methods.

In terms of reliability of methods for identifying medication use, the in-home medicine review was rated the highest by all panelists (mean ratings ranged from 85 to 93 across expert areas), followed by the brown-bag method (mean ratings ranged from 75 to 84 across expert areas).  The poorest method varied across expert groups, with physicians and behavioral researchers indicating proxy reports as the worst, driving evaluators indicating mailed surveys to patients as the worst, and database experts rating mailed requests for physicians’ reports as the worst.

In terms of cost-effectiveness, the administrative database method (pharmacy claims) was rated the best by the physicians/pharmacists and the database experts, and was among the three highest-rated methods for all other expert areas.  Driving evaluators rated mailed requests for physician reports as the most cost-effective method, and behavioral researchers rated mailed surveys to patients as the most cost-effective method.  All expert groups rated the in-home medicine review as the least cost-effective method.

Across all expert groups, overall ratings indicated a preference for the brown-bag review (mean rating = 75), pharmacy records (mean rating = 73), and the in-home medicine review (mean rating = 66), while the proxy report was rated the worst (mean rating = 51).  There were some interesting differences, however, in how experts from the various disciplines weighted practicality, reliability, and cost-effectiveness in producing their “overall” ratings.  For example, ratings by the driving evaluator group indicated that the best method would be to combine physician reports with pharmacy records and in-home medicine reviews; or, to combine patient self-reports with pharmacy records and proxy reports.

Measuring Compliance

Discussion Summary

  • One panelist indicated that when using administrative claims databases and comparing cases (drivers with crashes) and controls (drivers without crashes), one doesn’t really need to be concerned about compliance, because there would be both compliant and non-compliant medication takers in both groups (adjusting for severity of illness).
  • Other experts on the panel agreed that compliance is a tricky area, and also one that researchers concerned with the effects of (poly)pharmacy on driving need not regard as their primary concern.  To study which medications affect driving, and how, it may only be necessary to determine what people were taking at the time they had their driving evaluation and what they took earlier that may still be in their systems.
  • Two reasons for measuring compliance were noted.  First, people who are prescribed medications for medical conditions but are not taking them may perform worse because their medical conditions are not controlled.  Also, a person who is compliant sometimes but not at other times may not belong to a population considered to be on “stable” dosing, so medication effects may be exaggerated.
  • If compliance is going to be measured, it is useful to employ two methods: for a prospective study, patient self-report and pharmacy refill records should suffice.  A second self-report should be obtained the day of testing to determine what subjects took that morning and the day before the driving evaluation.
  • Panelists generally agreed that, if you are going to infer the role of a drug in an outcome, it is useful to have at least a general understanding of the level at which a research participant is compliant.  A rudimentary measure, such as low, medium or high compliance, which could be obtained from an administrative pharmacy database, may be all that’s required.  Compliance could also be adequately assessed with a questionnaire, separating medications into classes (“medicines for sleep,” for example).
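The rudimentary low/medium/high compliance measure suggested above might be derived from an administrative pharmacy database using a medication possession ratio (MPR), a common claims-based approach.  The sketch below is illustrative only; the cut points are assumptions, not values endorsed by the panel.

```python
def possession_ratio(days_supplied: int, observation_days: int) -> float:
    """Fraction of the observation window covered by dispensed supply,
    capped at 1.0 (a standard medication possession ratio)."""
    return min(days_supplied / observation_days, 1.0)


def compliance_level(mpr: float) -> str:
    """Bin an MPR into the rudimentary low/medium/high categories
    mentioned in the discussion.  The 0.8 and 0.5 thresholds are
    illustrative assumptions."""
    if mpr >= 0.8:
        return "high"
    if mpr >= 0.5:
        return "medium"
    return "low"
```

For example, a subject dispensed 270 days of supply over a 365-day window would fall in the "medium" band under these assumed cut points.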

Rating Scale Responses

Two sets of rating scale responses were solicited to capture the discussion group’s opinions in this topic area, one relating to the practicality/reliability/cost-effectiveness of alternative methods and the other relating to older persons’ willingness to participate in research.

In the first case, eight methods of measuring compliance to the medication regime were considered by the brainstorming experts:

  • Physicians’ Clinical Judgment.
  • Self-Report (Questionnaire).
  • Patients’ Clinical Response.
  • Biomechanical Measures.
  • Pill Counts.
  • Pharmacy records (Administrative Claims Databases).
  • Electronic Medication Monitoring.
  • Proxy Report.

The first set of ratings addressed practicality, reliability, and cost-effectiveness, as well as an overall rating.  Summary statistics are presented in the tables located in Appendix D for all experts together, and broken out by discipline.  These rating scale results are briefly described below.

Across experts, the most practical methods of measuring compliance were patient self-report questionnaires (mean rating = 74), pharmacy records (mean rating = 71), and proxy reports (mean rating = 65).  Not surprisingly, the least practical method was biomechanical measures (mean rating = 27).  The physician/pharmacist group responded with this pattern of ratings.  The driving evaluators combined self-report questionnaires with either pharmacy records (1 respondent’s rating = 80) or proxy reports (1 respondent’s rating = 90) for the two best methods, followed by combining physicians’ clinical judgment with patients’ clinical response (1 respondent’s rating = 80).  However, this group rated physicians’ clinical judgment alone as the poorest (mean rating = 8), followed by patients’ clinical response (mean rating = 18).  The behavioral researchers determined that patient self-report questionnaires (mean rating = 78) and pharmacy records (mean rating = 72) were the best methods, with biomechanical measures the worst (mean rating = 19).  The database experts indicated that the best method was pharmacy records (mean rating = 87), followed by patient self-report questionnaires (mean rating = 76) and pill counts (mean rating = 68).  They rated biomechanical measures as the least practical (mean rating = 32).

Across all experts, the most reliable methods were biomechanical measures (mean rating = 77), electronic medication monitoring (mean rating = 72), and pill counts (mean rating = 69).  The least reliable across expert types was physicians’ clinical judgment (mean rating = 35).  The same pattern of results was shown by the ratings provided by physicians/pharmacists.  Without combining methods, driving evaluators designated proxy reports (mean rating = 78) and biomechanical measures (mean rating = 73) as the most reliable, and physicians’ clinical judgment as the least reliable.  One driving evaluator combined the self-report questionnaire with pharmacy records and rated this as the best method (rating = 85), and another combined three methods (self-report questionnaire, electronic medication monitoring, and proxy report) and rated the combination as 85.  Behavioral researchers rated electronic medication monitoring the most reliable (mean rating = 87), followed by pill counts (mean rating = 78) and patients’ clinical response (mean rating = 72).  Physicians’ clinical judgment was rated the least reliable by this group.  Database experts rated clinical response as the most reliable (mean rating = 85), followed by biomechanical measures (mean rating = 80) and pharmacy records (mean rating = 75), while physicians’ clinical judgment was rated the least reliable by this group of experts.

Across all experts, the most cost-effective methods were self-report questionnaires (mean rating = 79), pharmacy records (mean rating = 76), and proxy reports (mean rating = 76).  The least cost-effective method was biomechanical measures (mean rating = 26).  This pattern of results characterized the physician/pharmacist ratings.  Without combining methods, the driving evaluators rated pharmacy records and proxy reports as the most cost-effective methods (mean rating for each = 75), and when self-report questionnaires were combined with pharmacy records, one respondent rated the combination as a 95.  The behavioral researchers assigned the highest cost-effectiveness ratings to self-report questionnaires (mean rating = 81) and electronic medication monitoring (mean rating = 80), and the lowest ratings to biomechanical measures (mean rating = 20), physicians’ clinical judgment (mean rating = 21), and pharmacy records (mean rating = 21).  Database experts assigned the highest cost-effectiveness ratings to pharmacy records (mean rating = 90) and self-report questionnaires (mean rating = 78), and the lowest rating to biomechanical measures (mean rating = 27).

Across all evaluation criteria and all experts (without combining methods), the highest overall ratings were given to self-report questionnaires (mean rating = 68), pharmacy records (mean rating = 66), and pill counts (mean rating = 64), and the lowest ratings were given to biomechanical measures (mean rating = 38).  The physicians and pharmacists demonstrated precisely this pattern of ratings.  The behavioral researchers also assigned the lowest ratings to biomechanical measures, but gave the highest overall rating to patients’ clinical response (mean rating = 76), followed by pill counts (mean rating = 72) and pharmacy records (mean rating = 60).  Database experts provided the highest ratings to pharmacy records (mean rating = 80), followed by self-report questionnaires (mean rating = 74) and electronic medication monitoring (mean rating = 72), while rating physicians’ clinical judgment as the poorest method (mean rating = 40).  Among the driving evaluators, one combined self-report with proxy report and electronic medication monitoring for a rating of 100; another combined self-report with pharmacy records for a rating of 90; and another combined clinical response and proxy report for a rating of 90.

The other rating scale asked group members for their opinions regarding older persons’ willingness to participate in research, as a function of the method used to measure medication usage.  Fourteen methods were listed, as follows, including selected sub-methods deserving attention in their own right based on information presented in the Literature Review:

  1. Physician’s Clinical Judgment.
  2. Self-Report – Questionnaire.
  3. Patient’s Clinical Response.
  4. Brown Bag – Physician led.
  5. Brown Bag – Pharmacist led.
  6. Brown Bag – Pharmacy-student led.
  7. Brown Bag – Nurse led.
  8. Biomechanical: Saliva.
  9. Biomechanical: Urine.
  10. Biomechanical: Blood.
  11. Pill Counts.
  12. Pharmacy Records (administrative claims databases).
  13. Electronic Medication Monitoring.
  14. In-Home Medicine Review.

Summary statistics for these ratings are presented in Appendix E, for all experts together and broken out by area of expertise.  Results are briefly described below.

Across all experts, the methods that were rated most likely to be acceptable to prospective research participants included the physician-led brown-bag method (mean rating = 74), the pharmacist-led brown-bag review (mean rating = 71), and pharmacy records (mean rating = 71).  Clustered only slightly below these were the pharmacy-student-led brown-bag method (mean rating = 67), the patient self-report questionnaire method (mean rating = 66), and the nurse-led brown-bag method (mean rating = 64).  The methods least likely to be acceptable to research participants, not surprisingly, were the three biomechanical measures, with mean ratings ranging from 28 to 37.  Except for the database experts, all subgroups of experts rated the physician-led brown-bag method the highest, followed by the pharmacist-led brown-bag method.  Pharmacy database experts rated the pharmacy records method the highest (mean rating = 85), followed by the patient self-report method (mean rating = 73) and electronic medication monitoring (mean rating = 70).  The patient self-report method was also rated highly by the physician/pharmacist group (mean rating = 74).

² External cause of injury code (E-code) in the range 810.0-816.9 and 819.0-819.9, motor vehicle traffic collision injury. E-codes are a coding system within the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM), and are routinely entered into the Trauma Registry for each trauma patient.