A Whitepaper by Dr. Nancy J Stark
In August of 2011 FDA released a draft guidance document titled "Oversight of Clinical Investigations—A Risk-Based Approach to Monitoring." At first I was very excited because the monitoring guidelines had not been updated since the 1988 "Guideline for the Monitoring of Clinical Investigations." But then, as I began to consider the ramifications of implementing the new guidance for medical device investigations, I started to have mixed feelings. The first part of this whitepaper is a factual review of the draft guidance with ideas about how device manufacturers can take advantage of it. The second part (next blog) is an op-ed expressing my views about what it will mean for the device industry. [1. Risk-Based Monitoring Guidance] [2. Monitoring of Clinical Investigations]
Two points stand out as noteworthy before we begin: 1) the draft guidance was written by the Office of Compliance not the Office of Evaluation, and buy-in from an IDE reviewer is uncertain, and 2) the draft guidance addresses itself to drugs with a mere mention of devices as an afterthought, so most concepts have to be 'retrofitted' to device needs.
Part One: Review of Guidance
The draft guidance is based on the premise that most sponsors conduct on-site monitoring visits every 6-8 weeks with the goal of source verifying 100% of the data for 100% of the subjects. It uses this premise to argue that sponsors will save money by moving to risk-based monitoring because such a move will result in fewer monitoring visits. Risk-based monitoring probably will result in fewer on-site visits for big pharmaceutical firms, who may be 'conducting hundreds to thousands of trials in locations around the world'. [3. Quality Management in Clinical Trials] But start-up medical device companies have never monitored as heavily as the guidance assumes and, in that sense, the principles of risk-based monitoring won't save on monitoring costs.
The guidance suggests that we use fancy statistical algorithms and analyze real-time data as it comes into electronic data capture systems to identify high-risk sites or high-risk data. The algorithm could be something like 'Risk Priority Number = severity x likelihood of occurrence x detectability'. A Risk Priority Number is calculated for each identified risk. When high-risk sites or data exceed a trigger, they merit immediate, onsite monitoring visits. [3. Quality Management in Clinical Trials]
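As a rough illustration, the Risk Priority Number arithmetic above can be sketched in a few lines of Python. The 1-to-5 scoring scales, the example risks, and the trigger threshold are my own assumptions for illustration, not values taken from the guidance:

```python
# Illustrative Risk Priority Number (RPN) calculation for identified risks.
# The 1-5 scales and the trigger threshold are assumptions for this sketch,
# not values prescribed by the FDA guidance.

def risk_priority_number(severity, likelihood, detectability):
    """RPN = severity x likelihood of occurrence x detectability.
    Each factor is scored 1 (low) to 5 (high); a higher detectability
    score means the problem is HARDER to detect."""
    for score in (severity, likelihood, detectability):
        if not 1 <= score <= 5:
            raise ValueError("scores must be between 1 and 5")
    return severity * likelihood * detectability

TRIGGER = 60  # hypothetical threshold for an unscheduled on-site visit

risks = {
    "consent signed after procedure": (5, 4, 4),
    "enrollment rate anomaly": (3, 4, 3),
    "illegible source documents": (2, 3, 4),
}

for risk, scores in risks.items():
    rpn = risk_priority_number(*scores)
    action = "ON-SITE VISIT" if rpn >= TRIGGER else "continue off-site monitoring"
    print(f"{risk}: RPN={rpn} -> {action}")
```

An RPN is computed for each identified risk, and only risks whose score exceeds the trigger earn the expense of an immediate on-site visit.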
CDG's Risk-Based Monitoring Course on CD—Available Now!
Training on risk-based monitoring is now available from CDG! The course covers the details of the FDA guidance 'A Risk-Based Approach to Monitoring', offers recommendations on using it for device studies, includes a sample monitoring plan and template, and features Q&A addressing issues unique to device manufacturers. The well-researched course is designed and presented by Dr. Nancy J Stark. Click here for more information or call 773-489-5706.
High-risk sites and high-risk data
A high-risk site is a site with a high probability of having a problem; the 'symptom' might be that they are enrolling at a slower or faster rate, experiencing a higher or lower rate of adverse events, or having a higher rate of non-compliances than other sites. For example, one site I worked with suddenly stopped enrolling subjects altogether. The sudden change in enrollment pattern should have triggered an on-site monitoring visit. We eventually learned the site had enrolled the same subjects in multiple trials and compromised everyone's data. The site stopped enrolling because they were being audited by other sponsors for fraud.
High-risk data might be a particular type of event or outcome that is a 'symptom' of a problem with the study. For example, a high rate of post-procedure embolism might indicate a poorly trained investigator or a problem with subject selection.
The guidance suggests that any site in the start-up phase of the study is a high-risk site and that on-site monitoring should be performed at every site in the study's early stages.
Device companies can adopt the same principles using low-cost methods. Have the sites fax or scan the case report forms to the monitor once a week for off-site monitoring (i.e. remote monitoring). With one trial and maybe 100 subjects, a monitor can easily examine the case report forms manually for high-risk sites or data without the need for statistical algorithms.
Critical data and processes
Critical data and critical processes are the types of data and processes most likely to be high risk. Critical data include:
- evidence that the subject really exists,
- evidence that the subject has the disease or condition being studied,
- evidence that IRB approval was obtained,
- evidence that informed consent was signed before the intervention or procedure,
- endpoint data that address the hypothesis, and
- serious adverse device effect data.
For each study you'll need to make a unique list of the endpoint data and adverse device effect data that are critical.
Critical processes include:
- the process of obtaining informed consent,
- the intervention or procedure,
- timely reporting of regulatory non-compliances or study deviations,
- timely correspondence with a data safety monitoring board (if one exists), and the like.
Because studies vary so much from each other, you'll need to make a unique list of critical processes for each one.
Critical data and processes are most likely to be high risk and should be checked closely during the off-site monitoring process.
CDG has monitors experienced in risk-based monitoring
Our outsourced monitors come in with the attitude of service, impermanence, and an enthusiasm for the new; they are accustomed to off-site monitoring and problem spotting. They also carry a breadth of experience that can help solve the issues du jour. Click here to request a proposal for monitoring services from CDG or phone 773-489-5721.
Methods of monitoring
Next the guidance considers two different methods of monitoring: on-site monitoring and 'centralized monitoring', meaning the real-time analysis of data by an application running on a server in some data management center.
I propose that for device studies there are four methods of monitoring:
- on-site monitoring,
- off-site monitoring of faxed or scanned case report forms,
- off-site monitoring of electronically captured data, and
- off-site monitoring of source data that has been captured by the device itself (usually in vivo or in vitro diagnostic devices).
Often the data card is changed out on-site and analyzed off-site. Sometimes the device may transmit data electronically via the internet—consider a digital camera used to take photographs of the procedure. In the future, of course, it may even be possible to remotely access electronic medical records.
The idea, then, is to use the most expensive method of monitoring (on-site) for sites or data or processes that are critical and high risk and to use less expensive methods of monitoring (off-site or remote) for sites and data that are less so. It is a welcome concept to device manufacturers who are cash-starved and looking for ways to economize.
The idea of a trigger is that a server at a data management center somewhere is programmed to alert you if critical data or a critical process has become high risk; that is, if the value or frequency of a critical data element or critical process step has changed in such a way as to demand attention. Indeed, if hundreds or thousands of sites are reporting data every day, you will need a program to screen the information for anomalies or outliers.
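To give a sense of what such a screening program might do, here is a minimal sketch that flags any site whose rate of a critical event (say, adverse device effects per subject) deviates markedly from its peers. The site data, the event-rate metric, and the 1.5-standard-deviation cutoff are illustrative assumptions, not prescriptions from the guidance:

```python
# Illustrative trigger: flag sites whose critical-event rate is an outlier
# relative to the other sites in the study. The cutoff (1.5 standard
# deviations) and the sample rates below are assumptions for this sketch.
from statistics import mean, stdev

def flag_outlier_sites(event_rates, n_sigma=1.5):
    """Return site IDs whose event rate deviates from the cross-site mean
    by more than n_sigma standard deviations."""
    mu = mean(event_rates.values())
    sigma = stdev(event_rates.values())
    if sigma == 0:
        return []  # all sites identical; nothing to flag
    return [site for site, rate in event_rates.items()
            if abs(rate - mu) > n_sigma * sigma]

# Hypothetical adverse-device-effect rates per enrolled subject:
rates = {"site-01": 0.05, "site-02": 0.06, "site-03": 0.04,
         "site-04": 0.30, "site-05": 0.05}
print(flag_outlier_sites(rates))  # site-04 merits an on-site visit
```

In practice the same idea extends to enrollment rates, deviation counts, or any other critical data element; the point is simply that an automated comparison across sites surfaces the anomalies a human would otherwise have to hunt for.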
But since most device firms are doing only a few studies at a time, having the case report forms faxed or scanned to the monitor for remote viewing is an inexpensive alternative. The monitor checks the forms for:
- completeness (no missing data),
- contemporaneousness,
- legibility,
- logic, and
- adverse device effect reports.
The forms can be checked for everything except source verification. The monitor issues and resolves queries for any issues observed. Most triggers are found by simple observation.
Then the case report forms are forwarded to the data center where the data are entered into a local (often non-networked) computer. Here the data may be statistically analyzed to see if a site, data element, procedure or process step is at risk and merits an unscheduled, on-site monitoring visit.
A monitoring plan should be developed for every study. This requirement is not new to sponsors conducting IDE studies or European studies under ISO 14155, but the guidance makes clear recommendations for the format. I recommend you develop a template for a separate document that can be referenced in the protocol, and don't make the monitoring plan a part of the protocol itself.
Section One—Study Description
As a separate document, the first section describes the protocol, investigational device, purpose of the study, and other obvious reference information. You need to describe:
- the monitoring approaches you will use for the study,
- criteria for determining the timing, frequency, and intensity of planned monitoring activities,
- specific activities required for each monitoring method employed during the study, including reference to required tools, logs, or templates,
- definitions of events or results that trigger changes in planned monitoring activities for a particular clinical investigator, and
- identification of possible deviations or failures that would be critical to study integrity and how these are to be recorded and reported.
A second opinion is a phone call away
Sometimes you want to discuss an issue, but implement the solution yourself. If you want a second opinion about clinical, regulatory, biological safety, or reimbursement strategies, sign us up as consultants. Email us here or call Nancy at 773-489-5721.
Section Two—Communication of Monitoring Results
Section two of the monitoring plan should discuss the communication of monitoring results to management, review boards, and regulatory bodies. It should describe the format, content, timing, and archiving requirements for reports and other documentation. For example, if the monitor receives case report forms by fax or email, it makes sense to require, say, a weekly or monthly report.
Section Three—Management of Noncompliance
Section three of the plan should discuss the management of noncompliance. How, and by whom, will investigators found to be non-compliant with the regulations, protocol, or IRB requirements be followed up? If non-compliances persist, will the site be retrained, terminated from the study, or will some other action be taken? If protocol deviations are detected you need a plan for root cause analyses. Problem-solving has always been a part of a monitor's responsibilities in device trials, and it may require an on-site visit to really assess the problem. I have found the root cause to be such simple things as the data transcriber needing spectacles, a three-hole punch, or a dedicated phone line. Eyes-on may be the only way to detect such problems so they can be solved.
Section Four—Training
Section four of the monitoring plan should describe specific training for monitors and internal data auditors, especially in the detection of triggers and in recommending on-site monitoring visits. For device studies, I recommend section four also discuss training plans for investigative site personnel. This section should also describe plans for random quality audits.
Section Five—Amendments to Monitoring Plan
Section five describes the process for amending monitoring plans: what events might require review and revision of the monitoring plan, and what processes permit timely updates when necessary.
Much of sections two through five can be incorporated into the clinical research quality management system. By developing procedures for off-site monitoring, you won't have to repeat them in every monitoring plan.
1. 'Oversight of Clinical Investigations—A Risk-Based Approach to Monitoring', FDA, August 2011. http://www.fda.gov/downloads/Drugs/.../Guidances/UCM269919.pdf
2. 'Guideline for the Monitoring of Clinical Investigations', FDA, January 1988.
3. 'Quality Management in Clinical Trials', clinical case series from Pfizer, March 2009.