A Whitepaper by Dr. Nancy J Stark
An old favorite, expanded and enhanced
©2012 Clinical Device Group. All rights reserved.
I see it over and over again. Somehow people think registry studies can have crude protocols, poorly monitored implementation, and insufficient data for analysis and still be good enough to get reimbursement from Medicare. Without solid data we end up with coverage determinations that leave our devices as adjunctive therapies and secondary options, such as: 1) patients with chronic graft-versus-host disease can have extracorporeal photopheresis only after all standard drug treatment has failed (1. Extracorporeal photopheresis); 2) a complete wound therapy program...must have been...ruled out prior to application of negative pressure wound therapy (2. NPWT); or 3) a CPT code is assigned but no National Coverage Determination is issued for a capsule to study gastric emptying (3. Ingestible pH and Pressure Capsule). You can keep track of these decisions by subscribing to email updates from the Centers for Medicare and Medicaid Services (CMS). 4. CMS
As device people we have to wake up and smell the roses; the decision-makers are only comfortable with randomized, controlled clinical trials. We need to come as close to this mark as possible if we want CMS to issue National Coverage Determinations that support robust reimbursement.
Registry Studies versus Clinical Trials
There is only one difference between registry studies and clinical trials: registry studies are observational; they look backward at what was done without dictating a treatment plan. Clinical trials are investigational; they look forward to what shall be done and dictate and control the treatment plan. Putting it simply, in a clinical trial we tell the investigators how to do it; in a registry study we observe how they actually did it.
Consider this example: a sponsor wants to do an observational study on a commercial device for treating a condition. They need an additional piece of data about the outcome, one they forgot to collect in clinical trials. The additional information requires that the patient submit to an unscheduled MRI. This design is not observational. The reason is that observational studies, by definition, do not dictate interventions, yet the protocol for this study dictates that an additional, unscheduled procedure be performed on the subject. It isn't always the primary treatment that moves the study from observational to investigational; it might be a diagnostic procedure you are adding to the mix.
A Few Definitions
There is a reason why there is a hodgepodge of definitions: many regulatory documents grab onto a word and give it special meaning. For example, Medicare will reimburse for investigational devices but not experimental ones. FDA decides at the time you make an IDE submission whether your technology is investigational or experimental, based on guidelines they have developed. One FDA guidance even tried to distinguish between study and trial. Don't take definitions too strictly; think for yourself within the context of what you are reading. For this paper, the following meanings apply.
Observational study: the protocol does not require a specific treatment plan. Observational studies are used for many purposes; in this whitepaper we focus on their use in reimbursement applications.
Investigational study: the protocol does require a specific treatment plan, intervention, or exposure.
Retrospective design: patients are enrolled based on selection criteria after they have received treatment, intervention, or exposure (say, to a contaminated water supply), and then endpoints and outcomes data are collected.
Three Common Errors and Why You Need a Registry Consultant
Device manufacturers make three common errors when they design a registry study: 1) the study is too small, or more accurately, they forget to have a statistician even look at the size and power of the study; 2) the study isn't comparative, a tough one because the standard of practice may be very different from your technology; and 3) the study isn't monitored. A poorly written protocol clouds your thinking, distracts the reader, and frustrates good results.
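To illustrate the first error: even a back-of-the-envelope power calculation tells you whether a planned enrollment is in the right ballpark. The sketch below uses Python's statsmodels package with made-up response rates; it is an illustration only, not a substitute for having a statistician review the design.

```python
# Rough sample-size check for a two-arm registry comparing response rates.
# The 60% vs. 50% response rates, alpha, and power are hypothetical placeholders.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.60, 0.50)  # assumed device vs. comparator response rates
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, ratio=1.0, alternative="two-sided"
)
print(f"Required subjects in each arm: about {round(n_per_arm)}")
```

Under these assumptions the answer comes back in the hundreds per arm, which is a very different study from the fifty-patient registry many sponsors have in mind. That is exactly why the question needs to be asked early.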
CDG's experts can review your protocol and help with a clear hypothesis, a properly sized and powered design, the selection of a comparator or a thoughtful defense of why no comparator is available, and an affordable monitoring plan, or simply lend an ear so you can double-check your logic. Please phone or email us at 773-489-5721 or cdginc@clinicaldevice.com. Dr. Stark will be happy to discuss a proposal.
Prospective design: patients are enrolled based on selection criteria before they receive treatment, intervention, or exposure (say, to an unscheduled MRI), and then endpoints and outcomes data are collected.
Prospective longitudinal design: endpoints and outcomes data are collected over time (days, weeks, months, years) after treatment, intervention, or exposure.
Prospective cross-sectional design: endpoints and outcomes data are collected in the same hospital stay (i.e. at the same time) as treatment, intervention, or exposure.
Registry: a 'registry' is a list or dataset of records about something; a 'patient registry' is a dataset of records about patients.
Registry study: usually an observational study on a post-approval device, also known as a post-market study. FDA has no role or reason to set forth guidance documents, except in the rare cases when they have legislative authority under Section 522, because the studies are conducted on 510(k)-cleared or PMA-approved devices. The registry can be as small or large as your pocketbook will allow; it can be single-armed or comparative, depending on how you contrast your technology with the standard of practice; and it can be monitored or not monitored, depending on how much you care about the quality of the data you collect.
Setting Up a Registry Study
Implementing a registry study is just like implementing a clinical trial. All the basic elements of planning, design, and project management are present. There are no international standards for registry studies, and FDA cannot provide guidance documents because they don't have regulatory authority over post-market studies (except for Section 522 studies). Your primary resource for information will be "Registries for Evaluating Patient Outcomes: A User's Guide, Second Edition" from the Agency for Healthcare Research and Quality (AHRQ). Don't discount or ignore this user's guide; AHRQ plays a significant referential role in CMS National Coverage Decisions. 5. Registries
[1] Planning
In the planning phase, you decide what you need to know: identify the hypotheses (mine is better than yours, mine is safer than yours, mine is cheaper than yours), the endpoints or outcomes that will support or refute the hypotheses, and the inclusion and exclusion criteria for who will be in the study. Next you determine if safety monitoring boards, IRBs, or other committees are necessary, and finally plan an exit strategy so you'll know when the study is completed.
A hypothesis is a testable statement about the safety, efficacy, or cost of device use and it sets the design for the rest of the study. Most people falter at this first step. For example, I recently reviewed a protocol that said: "We hypothesize that our device will provide a durable improvement for women with XYZ disease." This hypothesis is a plan, but it is not a testable statement from which treatment success or failure will be clear. How will we know if an improvement was durable, or if there was an improvement at all?
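One way to repair it, sketched below with entirely hypothetical definitions and counts, is to pin down what "durable improvement" means (say, at least a 50% reduction in episodes sustained at 12 months), name the comparison, and state the hypothesis so the analysis follows directly.

```python
# Hypothetical example: "durable improvement" pre-defined as a >=50% reduction in
# symptom episodes sustained at 12 months; the counts below are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

responders = [62, 45]    # subjects meeting the definition: device arm, comparator arm
evaluable  = [100, 100]  # evaluable subjects per arm

# H0: the response rates are equal; H1: the device response rate is higher.
z_stat, p_value = proportions_ztest(responders, evaluable, alternative="larger")
print(f"z = {z_stat:.2f}, one-sided p = {p_value:.3f}")
```

With the endpoint and comparison stated up front, treatment success or failure is unambiguous: either the pre-stated difference is demonstrated at the pre-stated significance level or it is not.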
Train Your Staff for Registry Studies per ISO 13485
'Personnel performing work affecting product quality shall be competent on the basis of appropriate education, training, skills, and experience,' states ISO 13485. But training is expensive and you want to get the most bits of information for your dollar that you can. Our CDs are a 'bitful' information buy. You get comprehensive, well-researched information, presented by an internationally respected expert, in a reusable format that you can share with your colleagues and add to your library to train future employees.
CDG offers a five-hour workshop on designing and implementing registry studies for medical devices. It is filled with examples of real-life medical device registries. Designed and recorded by Dr. Nancy J Stark, the workshop is a focused presentation of the AHRQ User Guide, adapted to medical devices. You can find more information on our website. Scroll down to Registry Studies for Medical Devices.
[2] Design
In the design phase, the details of the registry study are worked out and a protocol is written. There are only a few options for study design:
a) Cohort designs follow a group of people who share a common characteristic over time to see whether they develop a particular endpoint or outcome. For example, you might follow a group of women with urinary incontinence, including in your study those women who receive mechanical implants such as collagen and those women who receive implantable electrical stimulators. The hypothesis might be that one treatment is better, safer, or less costly than the other. The outcomes data might measure the number of incontinence episodes per month, the adverse events by month, or the cost of the procedure and lost work time (quality of life) by month. Notice the similarity to a randomized clinical trial? 5. Registries, p38.
b) In case-control designs you gather 'cases' of patients who have a particular outcome or who have had a particular adverse event and 'controls' who have not, and then you look backwards to see what proportion had an exposure or characteristic of interest. For example, in the evaluation of restenosis after coronary angioplasty in patients with end-stage renal disease, investigators found both cases and controls in an existing PTCA registry. Alternatively, cases could come from the PTCA registry and controls from outside the registry (say, Medicare data). 5. Registries, p38 and p46.
In another example, in 2004 Cordis began a registry designed to assess stenting outcomes in relation to the outcomes of their SAPPHIRE trial, which was used as the historic comparison group. The research question was to see whether non-academic physicians would achieve the same level of success as the academic investigators used in the clinical study (the hypothesis was that they would). The registry was conducted because of concerns by FDA and the Centers for Medicare and Medicaid Services (CMS) that the device worked safely and effectively only in the hands of experienced clinicians. The study involved 74 sites and 1493 patients; the large number of sites and subjects is characteristic of registry studies. 5. Registries, p38.
[3] Selecting subjects and comparison groups
The target population consists of all the patients with a common disease or condition or a common exposure. For example, the target population might be all people with cataracts, all women with urinary incontinence, or all people who have been exposed to radiation for cancer treatment. Then broad inclusion/exclusion criteria are used to select a representative population of patients. You want to keep the inclusion and exclusion criteria as broad as possible so that the final data will be applicable to the general population.
Selecting comparison groups may be trickier in observational studies than in clinical trials because subjects have a choice as to which intervention they receive. (In theory the patient has a choice among all possible treatments, but not all clinicians offer all treatment options, so a patient must "doctor-shop" if they really want a particular treatment.) Treatment bias is the notion that, given a choice between your new technology and an existing technology, the sickest patients will choose your technology, while less-ill patients may choose the comparator. The result will be an unfair imbalance in adverse events for the new technology. Key demographic factors, such as age, lifestyle, and disease advancement, are collected and statistically applied to help correct for treatment bias, as sketched below.
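As a concrete illustration, one common way to apply those factors is a regression model that estimates the device effect while adjusting for the covariates that drive treatment choice; propensity-score methods are another common option. The sketch below uses an entirely synthetic dataset and invented variable names.

```python
# Synthetic illustration of correcting for treatment bias by covariate adjustment.
# All data and variable names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
age = rng.normal(70, 8, n)
stage = rng.integers(1, 5, n)                                      # disease advancement, 1-4
treat_prob = 1 / (1 + np.exp(-1.2 * (stage - 2.5)))                # sicker patients pick the new device
new_device = rng.binomial(1, treat_prob)
event_prob = 1 / (1 + np.exp(-(-8 + 0.06 * age + 0.9 * stage)))    # events driven by age and stage only
adverse_event = rng.binomial(1, event_prob)

df = pd.DataFrame({"adverse_event": adverse_event, "new_device": new_device,
                   "age": age, "disease_stage": stage})

# The crude comparison blames the device for events caused by sicker patients choosing it;
# adjusting for age and disease stage should pull the device coefficient back toward zero.
crude = smf.logit("adverse_event ~ new_device", data=df).fit(disp=False)
adjusted = smf.logit("adverse_event ~ new_device + age + disease_stage", data=df).fit(disp=False)
print(f"crude log-odds for new_device:    {crude.params['new_device']:+.2f}")
print(f"adjusted log-odds for new_device: {adjusted.params['new_device']:+.2f}")
```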
Comparison groups may be "internal" (data collected simultaneously), "external" (data collected outside of the registry, such as a previous clinical trial, Medicare data, or billing data provided by patients), or "historical" (data collected under the registry protocol but not simultaneously). Comparison groups are essential when you want to distinguish between alternative procedures, assess the magnitude of differences, or determine the strength of associations between groups.
[4] What data should be collected?
You collect the same kind of data that you do for randomized, controlled trials. For example, you need data from patient demographics, medical history, health status, and patient identifiers (the 'personal domain'); the patient's experience with the technology or device (the 'exposure domain'); and the primary endpoints, secondary endpoints, adverse events, and technology deficiencies (the 'outcomes domain'). In addition, you should collect information about potential confounders (say, a drug being taken to treat the same condition as the study device). And of course, the collected data should relate directly to the hypothesis.
'Data elements' refers to the exact data that will be collected. Sometimes there are broadly accepted sets of standard data elements for a disease or condition. Look to the specialty societies to see if they have created clinical data standards that you can use as a guide for selecting data for collection. For example, the American College of Cardiology has created clinical data standards for acute coronary syndromes, heart failure, and atrial fibrillation. 5. Registries, p53. Whenever possible, tie your data elements to established terminology, such as Current Procedural Terminology (CPT) codes, the International Classification of Diseases (ICD-10), or event type codes for device deficiencies. 6. ISO 19218-1.
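A data dictionary fragment along these lines keeps every collected field tied to an agreed definition and coding system. The element names in the sketch below are invented, and the codes are left as placeholders to be looked up in the current CPT, ICD-10, and ISO 19218-1 code sets.

```python
# Illustrative data-dictionary entries tying data elements to standard terminologies.
# Element names are invented; codes are placeholders to be verified against current code sets.
data_elements = [
    {"element": "index_procedure",
     "definition": "Primary device procedure performed at enrollment",
     "terminology": "CPT", "code": "<look up CPT code>"},
    {"element": "primary_diagnosis",
     "definition": "Condition qualifying the patient for the registry",
     "terminology": "ICD-10-CM", "code": "<look up ICD-10 code>"},
    {"element": "device_deficiency_type",
     "definition": "Type of device deficiency observed during follow-up",
     "terminology": "ISO 19218-1", "code": "<look up event type code>"},
]

for element in data_elements:
    print(f"{element['element']:<24} {element['terminology']:<12} {element['code']}")
```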
[5] Data sources for registries
Contrary to popular belief, registry data is not de-identified. Depending on the data sources, registries may use certain personal identifiers to locate specific patients and link the data to other sources. For example, Social Security numbers (SSN) can be used to identify individuals in the National Death Index (NDI). What piques my interest is that data may come from many different sources: outpatient clinic records, inpatient hospital records, laboratory records, billing records, and even payer claims data! Data may come from medical chart abstraction, electronic medical records, institutional or organizational databases, administrative databases, death and birth records, census databases, or other registry databases. For example, if you are developing a thermoembolization technology for treating liver cancer, you may want to access data from the Registry of Liver Diseases.
[6] Ethics, data ownership, and privacy
The principles of ethics, data ownership, and privacy are the same for registry studies as they are for clinical trials. You need IRB approval to conduct the study, a HIPAA waiver to access patient medical records, a financial agreement with the institution regarding payments, data ownership, and publication rights, and assurances of patient privacy.
Consider the case study of the National Oncologic PET Registry, a registry developed to collect data about PET scans in cancer management with the goal of obtaining expanded CMS coverage for PET scans. The registry was to be conducted at hundreds of hospitals and free-standing PET facilities. The sponsors believed the registry was not subject to IRB approval because it was being "conducted by or subject to the approval of Department or Agency heads" for the purpose of evaluating a "public benefits or services program." CMS agreed. One week before the registry was to begin operation, the Office for Human Research Protections (OHRP) issued a letter of disagreement. The study was put on hold while the sponsors contemplated the difficulty of obtaining approval from hundreds of IRBs. Ultimately OHRP conceded that the registry only needed to be approved by one IRB. 5. Registries, p84.
[7] Recruitment
Recruitment of sites becomes a major issue in studies as broad as registries. Sites should be paid fair-market value for their time and must see a benefit to their operations if they are to join and actively participate in a registry. This is especially true if the registry study is to include community physicians or high-volume specialty centers, as well as academic centers. Community physicians are more likely to participate if the registry is viewed as a scientific endeavor, is endorsed by leading organizations, is led by a respected opinion leader, provides useful self-assessment data to the physician, or helps meet other physician needs such as maintenance of certification, credentialing, or pay-for-performance programs.
Patient recruitment presents the same challenges as in clinical studies. The best success comes from recruitment by the patient's own physician. It also helps to communicate that registry participation may help improve care for future patients, to provide written materials in language easily understood by the lay public, to keep survey forms short and simple, and to provide incentives such as newsletters, reports, and modest monetary compensation.
[8] Data collection and quality assurance
Three sets of documents, together, form the system for data collection. The first is the case report forms, be they paper or electronic. These are the forms whereby data is gathered in the field, entered into coded database fields, and transmitted to a data management center. The second is a data dictionary, which contains a detailed description of each variable used in the registry. For example, the question may be "Do you smoke?" and smoking may be defined as having smoked tobacco within the last year. The third is the set of data validation rules. These are logical checks on entered data that look for inconsistencies, such as males taking birth control pills.
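The third document can be surprisingly short. A few rules written as explicit checks, as in the sketch below (field names and rule logic are hypothetical), are enough to catch the obvious inconsistencies before the data reach the statistician.

```python
# Sketch of data validation rules as explicit checks; field names and rules are hypothetical.
def validate(record: dict) -> list[str]:
    """Return validation messages for one case report form record."""
    problems = []
    if record.get("sex") == "M" and "oral contraceptive" in record.get("medications", []):
        problems.append("Male subject recorded as taking birth control pills")
    if not 18 <= record.get("age", 0) <= 110:
        problems.append("Age outside the plausible range")
    if record.get("smoker") and record.get("tobacco_in_last_year") is False:
        problems.append("Smoker flagged but data-dictionary definition (tobacco within the last year) not met")
    return problems

record = {"sex": "M", "age": 67, "medications": ["oral contraceptive"], "smoker": False}
print(validate(record))  # flags the male/contraceptive inconsistency
```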
A data management manual should be developed to define how missing data will be handled, how invalid entries will be handled, how data will be cleaned, and what level of error will be accepted. The manual should describe how data will be tracked and coded, how query reports will be generated and resolved, and how data will be stored and secured. Finally, the data management manual should describe a quality assurance system for data entry and registry procedures.
[9] Monitoring
Don't confuse 'observational' with 'unmonitored'. It is a common mistake to think that registry studies don't need to be monitored. You should develop a monitoring plan for a registry study the same way you would for a clinical study. At the very least you want to verify that the subject: 1) exists, 2) has the disease or condition under study, 3) met the inclusion and exclusion criteria, 4) signed an informed consent and HIPAA authorization, and 5) received or declined treatment. Beyond that you may develop an on-site monitoring scheme or a risk-based monitoring scheme based on triggers from electronic data, as sketched below. Most importantly, don't just ignore the issue; develop a monitoring plan that you can defend.
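A risk-based scheme can be as simple as a handful of triggers computed from the electronic data, with any site that trips a trigger scheduled for an on-site visit. The sketch below is illustrative only; the threshold values are invented and a real monitoring plan would have to justify its own.

```python
# Illustrative risk-based monitoring triggers computed from site-level data.
# Threshold values are invented placeholders; a real plan would justify its own.
sites = [
    {"site": "Site 01", "enrolled": 40, "missing_consents": 0, "query_rate": 0.04, "sae_rate": 0.05},
    {"site": "Site 02", "enrolled": 55, "missing_consents": 3, "query_rate": 0.15, "sae_rate": 0.02},
    {"site": "Site 03", "enrolled": 28, "missing_consents": 0, "query_rate": 0.02, "sae_rate": 0.32},
]

def triggers(site: dict) -> list[str]:
    flags = []
    if site["missing_consents"] > 0:
        flags.append("missing consent or HIPAA documentation")
    if site["query_rate"] > 0.10:
        flags.append("high data-query rate")
    if site["enrolled"] >= 10 and site["sae_rate"] > 0.20:
        flags.append("outlying serious adverse event rate")
    return flags

for s in sites:
    flagged = triggers(s)
    if flagged:
        print(f"{s['site']}: schedule an on-site visit ({'; '.join(flagged)})")
```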
[10] Adverse event reporting
For device and device-procedure registries, adverse event detection, collection, and reporting work the same as in any other post-approval setting. The process begins with the "becoming aware" principle; i.e., the clock for reporting an adverse event starts at the moment the investigator becomes aware of symptoms or events reported by the patient, or of signs such as out-of-range laboratory results reported by a lab, or at the moment the manufacturer learns of an event from an investigator or subject.
Investigators (i.e., device user facilities) must report serious injuries to the manufacturer, or to FDA if the manufacturer is unknown, within 10 work days. They must report deaths to both the manufacturer and FDA as soon as possible, but no later than 10 work days. 7. 21 CFR Part 803. Interestingly, if an adverse event occurs with a comparator device, the investigator must report the event to the comparator's manufacturer. Manufacturers have 30 days to report deaths, serious injuries, and malfunctions to FDA, and 5 days to report events that require remedial action to prevent an unreasonable risk of substantial harm to the public health. Events are logged into the Manufacturer and User Facility Device Experience (MAUDE) database.
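Because every clock starts on the "becoming aware" date, it helps to keep the deadlines in one explicit place. The sketch below simply restates the timelines described above; confirm calendar versus work days, and the current requirements, against 21 CFR Part 803 before relying on it.

```python
# Report-due-date sketch restating the timelines above; confirm calendar vs. work days
# and current requirements against 21 CFR Part 803.
from datetime import date, timedelta

REPORTING_WINDOW_DAYS = {
    ("user facility", "death"): 10,
    ("user facility", "serious injury"): 10,
    ("manufacturer", "death"): 30,
    ("manufacturer", "serious injury"): 30,
    ("manufacturer", "malfunction"): 30,
    ("manufacturer", "remedial action required"): 5,
}

def report_due(reporter: str, event_type: str, became_aware: date) -> date:
    """Latest report date, counting from the day the reporter became aware of the event."""
    return became_aware + timedelta(days=REPORTING_WINDOW_DAYS[(reporter, event_type)])

print(report_due("manufacturer", "serious injury", date(2012, 6, 1)))  # 2012-07-01
```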
[11] Analysis and interpretation
Statistical analysis of registry data is no different from statistical analysis of clinical data. There are a couple of points that deserve mentioning, though. First, you'll need to determine how closely the actual study population represents the target population. Second, there should be a statistical analysis plan for how the data are to be analyzed and interpreted with regard to accepting or refuting the hypothesis (i.e., with regard to device success or failure). And third, there should be a plan for how to handle missing data.
Conclusion
Don't be misled: registry studies are not cheap, second-rate clinical trials. They should be designed with the same respect as the exalted RCT. What they are is different. They are observational studies that assess an approved technology's ability to achieve its intended use in the real world, and they are usually conducted for the purpose of obtaining a robust Medicare National Coverage Decision.
References
1. Extracorporeal photopheresis, CMS National Coverage Determination. [http://www.cms.gov/medicare-coverage-database/details/ncd-details.aspx?NCDId=113&ncdver=2&bc=AAAAQAAAAAAA&]
2. NPWT, CMS National Coverage Determination on Negative Pressure Wound Therapy. [http://www.cms.gov/medicare-coverage-database/license/cpt-license.aspx?from=~/overview-and-quick-search.aspx&npage=/medicare-coverage-database/details/lcd-details.aspx&LCDId=27025&ContrId=138&ver=12&ContrVer=1&Date=&DocID=L27025&bc=iAAAAAgAAAAA&%3f]
3. Ingestible pH and Pressure Capsule, CPT code. [https://www.bcbsmt.com/medicalpolicies/Policies/Ingestible%20pH%20and%20Pressure%20Capsule%20to%20Evaluate%20Gastroparesis.aspx]
4. CMS Coverage Pages, email updates. [http://www.cms.gov/]
5. Registries for Evaluating Patient Outcomes: A User's Guide, Second Edition. Agency for Healthcare Research and Quality, 2010.
6. ISO/DTS 19218-1 Medical devices—Hierarchical coding for adverse events—Event type codes (2010).
7. 21 CFR 803.20(b)(1) and 803.30(a)(1), Medical Device Reporting.
Best Regards,
Nancy J Stark, PhD
President, Clinical Device Group Inc