An Op-Ed by Dr. Nancy J Stark
FDA's draft guidance, "Oversight of Clinical Investigations—A Risk-Based Approach to Monitoring" (August 2011), references the ICH-GCPs six times. The ICH-GCPs were written in 1996 by pharmaceutical sponsors and regulators from the US, Europe, and Japan. They are now 15 years old and showing signs of age: there is no discussion of central data management or electronic security issues, for example. Because the phrase 'good clinical practice' is in the public domain, ICH was free to adopt it as part of the title of document E6, "Good Clinical Practice: Consolidated Guidance." The Center for Drug Evaluation and Research published the guidance in the Federal Register, giving it high standing in the regulatory community.
Although used by device manufacturers, the guidance does not serve them well. Its description of an Investigator's Brochure is chemically centered, not mechanically centered; investigational devices are stored and dispensed but never installed; there is no mention of training an investigator to use the investigational product, no mention of software-controlled devices or software-collected data, and no mention of caregiver or healthcare-provider safety. In other words, for medical devices, the ICH-GCPs are inadequate and out of date.
ISO 14155, 'Clinical investigation of medical devices for human subjects—Good clinical practice' (2011), the definitive international document for medical device clinical studies, is not mentioned even once; yet the international standard was closely harmonized with the ICH-GCPs to promote international continuity. I could live with the lack of acknowledgement in FDA's guidance if the same FDA office hadn't taken such a heavy hand in writing the ISO standard. It is wrong to insist on practice-changing language (such as moment of consent versus moment of enrollment) and then turn your back on the work product, as FDA seems to have done.
The draft guidance draws heavily on the ever-increasing presence of electronic data collection. A really good EDC system can spot many site problems before the human eye ever gets to the data. But these systems are expensive (think in terms of $1M for a modest study) and unaffordable for small start-ups.
Finally, it was the Offices of Compliance from CDRH, CDER, and CBER that wrote this guidance. While they promise us the inspection manual will be promptly updated to match the guidance, they make no assurances that the reviewers in the Office of Device Evaluation will follow along. Sponsors are strongly urged to get buy-in on the monitoring plan from reviewers before initiating a clinical trial.
While device manufacturers can use the new guidance to their benefit, often to defend existing practices, they should make certain FDA's Office of Device Evaluation buys into any IDE monitoring plan.
Best Regards,
Nancy J Stark, PhD
President, Clinical Device Group Inc
Dear Nancy,
Yes, it's a pity that confusion on this issue seems assured for several years. I think it would have been a good idea for ISO 14155 to specifically cite ICH as a precedent and propose how the two documents should or might interact or complement each other. Ignoring it may have been a strategic mistake.
Posted by: Patrick | 14 November 2011 at 03:28 AM
The risk-based approach has worked well for some companies, but not for others. I know some companies that have created their own statistical algorithms, and new tools are emerging. Medidata just recently came out with a tool called TSDV (Targeted Source Data Verification). I am actually in the process of helping a client implement it, and we are excited to see how well it works and how much money it can save a company; the savings are estimated at a few million dollars for large organizations.
One comment I have, though, regarding a good EDC system that can spot site issues: in my experience the system helps, but sponsor processes, along with a good clinical PM who implements good checks within the system and uses it to its full capacity, have done a much better job.
I just thought I would share my thoughts.
As always - thank you for such a great paper.
Posted by: Wessam Sonbol | 17 November 2011 at 01:37 PM
Nancy,
I appreciate your white paper. The guidance document's emphasis on remote monitoring hinges on the ability to review source documentation remotely. When that is available, I agree that remote monitoring is a less expensive and better use of time than traveling to sites, although there is no replacement for “face time” with study staff at sites.
For example, in a recent study I’m monitoring, I have access to view all CRFs online, and everything looks great for my study site. When monitoring at the site and comparing source documentation against the CRFs, I discovered that the coordinator had, over time, forgotten the protocol’s inclusion criteria and was recruiting subjects who didn’t meet the inclusion/exclusion criteria. This resulted in many subjects being discontinued from the study. It was not the result of a lack of training; the study site was properly trained during study start-up. It’s a result of human error. Over time, site staff can misremember or misinterpret the protocol and training and forget details of study procedures.
I strongly believe the best monitoring plan is a combination of reviewing data remotely for inconsistencies and clerical errors and monitoring on site. As a monitor, I have found that study sites begin a study with excitement and enthusiasm. As a study progresses, study coordinators lose interest in the study and its details if they don’t feel a sense of teamwork with the study monitor. While phone calls and emails are good for keeping in touch and resolving day-to-day study issues, there’s no replacement for on-site visits. I have seen a direct correlation between face time and monitor interaction with study sites, on the one hand, and good-quality work and the site’s excitement and commitment to a study, on the other.
For example, a recent study site was slow to send CRFs to the sponsor and slow to provide anything the sponsor requested. It appeared that enrollment had merely slowed down. I hounded the site with emails and voice mails, with little response. When I arrived for a monitoring visit, I could see that the coordinator was eager to please and would provide anything I requested while I was on site. She was busy with several projects, and mine was a low priority until I was on site. While there, the coordinator shared that she wasn’t sure whether the CRFs were completed correctly, so she just held them and didn’t send them in. Enrollment had continued; we as the sponsor just weren’t aware of it, because the coordinator was holding the CRFs. Coordinators may not feel comfortable admitting they need some guidance or have questions about how to perform some study tasks. I’ve found that they share openly while I’m on site, where a sense of teamwork is built. As part of a team, coordinators feel safe asking the little questions. Little questions left unanswered lead to bigger issues, resulting in incorrect data or low enrollment.
As always, thank you for your insight. I look forward to your next posting!
Posted by: Lynette Chiapperino | 30 November 2011 at 04:06 PM