
13 November 2011



Dear Nancy,
Yes, it's a pity that confusion looks likely to ensue on this issue for several years. I think it would have been a good idea for ISO 14155 to cite ICH specifically as a precedent and to propose how the two should, or might, interact or complement each other. Ignoring it may have been a strategic mistake.

Wessam Sonbol

The risk-based approach has worked well for some companies, but not for others. I know some companies that have created their own statistical algorithms, and some tools are being built commercially. Medidata recently came out with a tool called TSDV (Targeted Source Data Verification). I am in the process of helping a client implement it, and we are excited to see how well it works and how much money it can save a company; it is estimated to save large organizations a few million dollars.

One comment, though, regarding a good EDC system that can spot site issues: in my experience the system helps, but sponsor processes, and a good clinical PM who implements solid checks within the system and uses it to its full capacity, have done a much better job.

I just thought I would share my thoughts.

As always - thank you for such a great paper.

Lynette Chiapperino

I appreciate your white paper. The guidance document's emphasis on remote monitoring hinges on the ability to review source documentation remotely. Where that is available, I agree that remote monitoring is a less expensive and better use of time than traveling to sites, although there is no replacement for "face time" with study staff at sites.

For example, in a recent study I'm monitoring, I have access to view all CRFs online, and everything looked great for my study site. When monitoring at the site and comparing source documentation against the CRFs, I discovered that the coordinator had, over time, forgotten the protocol's inclusion criteria and was recruiting subjects who didn't meet the inclusion/exclusion criteria. This resulted in many subjects being discontinued from the study. It was not the result of a lack of training; the study site was properly trained during study start-up. It was a result of human error: over time, site staff can misremember or misinterpret the protocol and training and forget details of study procedures.

I strongly believe the best monitoring plan is a combination of reviewing data remotely for inconsistencies and clerical errors and monitoring on site. As a monitor, I have found that study sites begin a study with excitement and enthusiasm. As a study progresses, study coordinators lose interest in the study and its details if they don't feel a sense of teamwork with the study monitor. While phone calls and emails are good for keeping in touch and resolving day-to-day study issues, there's no replacement for on-site visits. I have seen a direct correlation between face time and monitor interaction with study sites on the one hand, and good-quality work and the site's excitement and commitment to a study on the other.

For example, a recent study site was slow in sending CRFs to the sponsor and slow to provide anything the sponsor requested. It appeared that enrollment had simply slowed down. I hounded the site with emails and voicemails, with little response. When I arrived for a monitoring visit, I could see that the coordinator was eager to please and to provide anything I requested while I was on site. She was busy with several projects, and mine was a low priority until I was there in person. While I was there, the coordinator shared that she wasn't sure whether the CRFs were completed correctly, so she had simply been holding them rather than sending them in. Enrollment had in fact continued; we as the sponsor just weren't aware of it, because the coordinator was holding the CRFs. Coordinators may not feel comfortable admitting they need guidance or have questions about how to perform some study tasks. I've found that they share openly while I'm on site, where a sense of teamwork is built. As part of a team, coordinators feel safe asking the little questions. Little questions left unanswered lead to bigger issues, resulting in incorrect data or low enrollment.

As always, thank you for your insight. I look forward to your next posting!
