CASE STUDIES

SITE VERSUS CENTRAL DIFFERENCES – A POSSIBLE TECHNOLOGY SOLUTION

Introduction

Blinded Independent Central Review (BICR), also known as a Central Review or an Independent Review Committee (IRC), is the process by which radiographic exams and selected clinical data obtained as part of a clinical trial protocol are submitted to a central location for review by independent physicians who are not involved in the clinical study. Regulatory authorities have recommended BICR of all subjects (or a cohort of subjects) for oncology registration studies when the primary study endpoint is based on tumor measurements, such as Progression-Free Survival (PFS), Time To Progression (TTP) or Objective Response Rate (ORR).1 Clinical trial sponsors have also used BICR in Phase I and II studies to assist in critical pathway decisions, including the in-licensing of compounds.

There are different BICR review paradigms; however, current FDA guidance recommends that multiple independent reviewers evaluate each subject. CENTRAL reviews are also generally blinded to the clinical circumstances surrounding a subject,2 and the reviews are performed retrospectively. As such, the results of the CENTRAL review are intended purely for statistical analysis and are temporally unrelated to the clinical care of the patient. In contrast, SITE reviewers are contributing to patient care and, as a consequence, are expected to assess the images in light of all the clinical information that can be ascertained at the time of their review.

Treatment decisions are based on the investigators' real-time determination of response. Therefore, one consequence of a CENTRAL review is the potential for differences on a per-subject basis between the SITE and CENTRAL review outcomes.

These SITE vs. CENTRAL differences raise concern among sponsors and regulators because the reasons for discordance are poorly understood, they are typically not explained in detail, and there are few published metrics addressing them. However, given the potential differences in the data used for the review (in part due to blinding of the CENTRAL review), as well as specific workflow differences, SITE reviews and CENTRAL reviews should be treated as independent datasets rather than simply labeled discordant.

The purpose of this white paper is to highlight the issues relating to SITE vs. CENTRAL differences and propose a potential technology solution for further evaluation.

Site Reviews

Investigators are contracted by pharmaceutical and biotechnology companies (sponsors) to enroll subjects in clinical trials that evaluate the treatment under consideration. As part of their responsibility for overseeing the conduct of clinical research, sponsors provide training to the investigators and their staff on all aspects of the clinical protocol. Investigators are required to review the protocol, as well as legally acknowledge that they both understand the requirements of the study and have the capability to successfully complete all aspects of the clinical protocol.

In oncology trials, investigators send their patients to radiology facilities for protocol-required imaging exams as a mechanism for assessing treatment efficacy and making clinical decisions about the status of their patients. However, imaging facilities are generally not aware that these patients are also subjects enrolled in a clinical study. As a result, imaging exams on clinical trial subjects may be performed and interpreted at the SITES according to the American College of Radiology (ACR) guidelines and/or local standards of care, but not necessarily according to the protocol-specific imaging guidelines.

Unlike the investigators, clinical SITE radiologists generally do not receive training on the protocol or on the specific response criteria, such as the Response Evaluation Criteria in Solid Tumors (RECIST), the International Working Group Criteria for Lymphoma (IWG) and the Response Assessment in Neuro-Oncology (RANO) criteria, that are used for response evaluation by that clinical protocol.

In addition, SITE radiology workflows may not be optimized for a clinical trial review. For example, radiologists typically rotate through various functions within a clinical department and, depending on the level of specialization, different radiologists may interpret different portions of the exams within a time point (e.g., the chest radiologist reads the chest CT exams, whereas another radiologist reads the abdomen and pelvis CT, and still another may read the neuro studies).

Furthermore, different radiologists may read the same type of exam from the same subject at subsequent time points. SITE radiologists are often unaware of the subject's enrollment date and therefore do not know which exam serves as the subject's baseline (from which to judge response) or which exam constitutes the nadir (from which to judge progression). Reports generated by SITE reviewers, while typically accurate, are often clinically oriented and do not necessarily contain the information required for the investigators to complete the protocol-specific Case Report Forms (CRFs). Image analysis platforms and software also differ between radiology practices. Additionally, the image data from a SITE review are not accumulated into a study-specific repository or dedicated image database for long-term storage, data mining and ease of regulatory review.

Central Reviews

The workflow process is more tightly regulated and standardized with CENTRAL review. All CENTRAL reviewers are trained on the protocol, as well as on the Image Review Charter, which identifies the protocol-specific response criteria.

CENTRAL reviewers are even tested on the specifics of the protocol to demonstrate competency. Additionally, mock reads using test cases are often performed to be certain the CENTRAL reviewers understand the disease process and the functionality of the image analysis software.

CENTRAL reviewers are familiar with the review process, as well as the data handling procedures. Radiology SITES are pre-qualified prior to subject enrollment to demonstrate technical competency in following the protocol-specific imaging guidelines.

Images are usually sent from the SITES to the CENTRAL facility by a specific transfer methodology, typically secure internet transfer. Images are quality-reviewed by on-site technicians to check that the study-specific imaging protocol guidelines were followed. Image quality issues are prospectively addressed through interaction with the sites. This Image Quality Assurance (IQA) limits missing or incorrect data and therefore helps maintain the statistical power of a study by capturing all required data points.

CENTRAL reviewer pools are limited to a small number of readers (usually 6-10, depending on the size of the trial), who are qualified on a number of factors, including specialty, prior experience, and training or familiarity with the specific disease process. All images are read on the same image analysis platform according to the specific response criteria.

Response outcomes are derived electronically (as opposed to manually) from the raw data of the radiologist reviews, with built-in edit checks. The computer-generated outcome derivations help to decrease variability of assessments between readers by eliminating manual errors in response determination.
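
To illustrate how such a derivation can be automated, the following is a minimal Python sketch of RECIST 1.1-style target-lesion logic. It is an illustrative assumption, not the actual derivation engine of any CENTRAL review platform; the function name and its simplifications (non-target lesions, new lesions, nodal rules and response confirmation are ignored) are our own.

    def derive_target_response(baseline_sum, nadir_sum, current_sum):
        """Derive a RECIST 1.1-style target-lesion response from sums of
        lesion diameters in millimeters (simplified sketch only)."""
        # Edit checks: reject impossible raw data before deriving an outcome.
        if baseline_sum <= 0 or min(nadir_sum, current_sum) < 0:
            raise ValueError("sums must be non-negative and baseline positive")
        if current_sum == 0:
            return "CR"  # complete disappearance of all target lesions
        growth = current_sum - nadir_sum
        # PD: >=20% increase over the nadir and >=5 mm absolute growth
        if growth >= 5 and (nadir_sum == 0 or growth / nadir_sum >= 0.20):
            return "PD"
        # PR: >=30% decrease from the baseline sum
        if (baseline_sum - current_sum) / baseline_sum >= 0.30:
            return "PR"
        return "SD"  # neither PR nor PD criteria met

    # Example: a drop from 100 mm at baseline to 55 mm derives a Partial Response.
    print(derive_target_response(baseline_sum=100, nadir_sum=60, current_sum=55))  # PR

Encoding the thresholds once, with edit checks on the raw inputs, is what removes the reader-to-reader arithmetic variability described above.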

In contrast to SITE reviews, all of a subject's images are read by the same reviewer for the entire time the subject is on study, and each subject is read completely by two different radiologists who independently determine the outcome. Specific outcome variables are compared across the readers to be certain there is agreement, and any differences are adjudicated by a third reader. Reader performance is monitored, and the entire process is governed by an overarching Quality Assurance program.
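
The double-read comparison can be pictured as a per-subject check over the monitored outcome variables. The sketch below is hypothetical (the record layout and variable names are assumptions, not any vendor's adjudication logic):

    def subjects_needing_adjudication(reader1, reader2,
                                      keys=("best_response", "date_of_progression")):
        """Flag subjects where two independent readers disagree on any
        monitored outcome variable; flagged subjects go to a third reader."""
        flagged = []
        for subject_id, outcome1 in reader1.items():
            outcome2 = reader2.get(subject_id)
            if outcome2 is None:
                continue  # second read not yet completed for this subject
            if any(outcome1.get(k) != outcome2.get(k) for k in keys):
                flagged.append(subject_id)
        return flagged

    # Example: the readers disagree on subject 002's best response.
    r1 = {"001": {"best_response": "PR"}, "002": {"best_response": "PR"}}
    r2 = {"001": {"best_response": "PR"}, "002": {"best_response": "SD"}}
    print(subjects_needing_adjudication(r1, r2))  # ['002']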

Case Study: Herceptin

One of the first uses of BICR in oncology was in support of the FDA approval of Herceptin based on TTP. This was discussed by Steven Shak, M.D., then Director of Medical Affairs at Genentech, at a DIA meeting on April 29, 1999 in Philadelphia. The essential features of the CENTRAL review, as described by Dr. Shak, were that the reviewers were completely independent and not subject to input from Genentech. The review teams were composed of an oncologist and a radiologist and utilized only objective tumor data, including images and photos. Reviewers were blinded to treatment assignment, study endpoint and entry criteria for the open-label extension. Readers were excluded from reading cases from their own institutions, and there was no contact between readers and the investigators. The investigative sites were trained and monitored. Scheduled and unscheduled radiologic exams were tracked and received, and real-time Image Quality Assurance (IQA) was performed. Ten percent of cases were randomly selected for re-reading by a second reading team. There were numerous differences in the timing of progression, the timing of response and the type of response. Further details with respect to differences between the SITE and CENTRAL assessments were not provided as part of this presentation and were not disclosed in the FDA approval documents.3

Case Study: Cancer Renal Cytokine

A report by Thiesse was published in 1997 based on the Cancer Renal Cytokine Study, which enrolled 489 subjects. A CENTRAL review was performed on all subjects considered responders by the investigators (n=86), as well as on litigious (disputed) cases (n=47). The Central Review Committee consisted of two oncologists and three radiologists. All members of the committee were present concurrently and analyzed the imaging files in a "group approach"; therefore, the review was not fully blinded. Evaluations were reached by consensus, and any disagreement among members was resolved by further discussion. All image files were duplicated and masked prior to the review. As part of the review process, the Central Review Committee validated the target lesions chosen by the investigator, in addition to searching for additional target lesions not initially identified by the SITE reviewers.

All radiology exams were reviewed, and the responses of the Central Review Committee (CENTRAL) were compared with the investigator (SITE) assessments. A consensus decision was made as to whether the CENTRAL results were concordant or discordant with the SITE results. There were major disagreements in 43 percent of the cases, where a major disagreement was defined as CR (Complete Response) or PR (Partial Response) vs. SD (Stable Disease) or PD (Progressive Disease), or PR or SD vs. PD. There were minor disagreements in 8 percent (n=10). Of the 86 subjects classified as responders by the investigators (SITE), only 66 were confirmed by the Central Review Committee (CENTRAL). As a result, the response rate dropped by 23 percent (20 of the 86 claimed responses were not confirmed). The reasons for disagreement between the SITE and CENTRAL reviews were classified into three groups: errors in measurement (n=33), errors in the selection of measurable target lesions (n=31) and miscellaneous errors (such as mistaking intercurrent disease for tumor, as well as technical radiology issues rendering exams unreadable by the CENTRAL review). The authors concluded that response rates were dramatically modified by the work of the Central Review Committee.4

Case Study: Topotecan Trial in Ovarian Cancer

A report published by Gwyther in the Annals of Oncology in 1997 described the experience with independent radiologic review (CENTRAL) during a topotecan trial in ovarian cancer. There were 111 subjects with advanced epithelial ovarian cancer who had relapsed with measurable disease and were subsequently enrolled in the clinical trial; ninety-three subjects were considered eligible per the study protocol. Extent of disease was evaluated by CT, ultrasound and chest X-ray (CXR). Scans from all claimed responders (n=24) were reviewed retrospectively by a CENTRAL peer group, which included all of the investigators who had participated in the trial, the site radiologists who had interpreted the images and a radiologist from outside the study. The investigator presented each case, and the lesions were re-measured by the radiologists. In this instance the radiologists knew which lesions the sites had selected and had additional clinical history from the investigator's presentation. Therefore, while this review was performed centrally, it was not fully blinded.

The peer group then discussed each case, which led to a final response classification. Of the 24 subjects reviewed, 14 (58 percent) were confirmed as responders by the review committee. Six subjects were changed from PR to SD, three subjects were changed from PR to PD and one subject was deemed not eligible because that subject did not have a measurable lesion at baseline. The change in response category in nine of the ten (9/10) subjects was based on the fact that, upon re-measurement, the lesions had not decreased in size by the 50 percent specified in the protocol-defined response criteria. One of the ten (1/10) rejected responses was based on the fact that the subject was not eligible for inclusion.

The conclusion of the publication was that the Central Review Committee (CENTRAL) enabled rigorous and consistent application of the response criteria. Based on the CENTRAL review, the response rate decreased from 25.8 percent to 15.2 percent; however, this was thought to represent a more objective assessment. The authors suggested that, ideally, all subjects, not just claimed responders, should be reviewed to ensure that responders are not overlooked.5

Minimizing Differences

There have been additional publications and presentations reporting on SITE vs. CENTRAL differences,6-9 and all indicate that subject-level differences are to be expected. Nonetheless, these differences cause concern to regulators and sponsors, and therefore processes should be implemented to minimize them. Where subject-level differences do occur, the reasons should be understood and explained, particularly if and when regulatory agencies express interest.

Mechanisms to minimize SITE vs. CENTRAL differences include having investigators and sponsors contract directly with a single radiologist at a radiology facility who can provide oversight and represent a single point of contact. This radiologist should review and approve the study-specific imaging guidelines, institute those guidelines within the radiology facility's defined imaging protocols, and be provided with the same training the investigators receive at study start-up. There should be a mechanism to notify this radiologist when a trial patient is scheduled for an exam, to assure the imaging study is performed according to the protocol-defined criteria. Additionally, this radiologist should review all of the images performed on a subject while enrolled in the trial. Having a contracted radiologist as a point of contact who also provides oversight at the radiology facility will help to provide the continuity demanded by clinical trial imaging.

Additional mechanisms to minimize SITE vs. CENTRAL differences include real-time CENTRAL eligibility reviews (to ensure subjects meet the eligibility criteria at the time of enrollment) and real-time CENTRAL confirmation-of-progression reviews (to minimize subjects being taken off study prior to protocol-defined progression). These will likely become more important with immunotherapy trials, in an effort to exclude pseudoprogression and thereby minimize the effects of informative censoring.

A Technology Solution

The pharmaceutical and biotechnology industries should consider leveraging technology in an attempt to minimize, monitor and explain these SITE vs. CENTRAL differences. Radiant Sage has created Corelab-in-a-Box (CLIB), which is a technology platform that may provide sponsors with a solution. CLIB can be deployed as a web-based client to each investigator site and/or radiology facility for SITE review. Additionally, the same platform can be deployed in the setting of a CENTRAL review facility. This allows SITE reads and CENTRAL reads to be performed concurrently on the same subject using the same image analysis platform with the SITE and CENTRAL results available in a single unified database for direct comparison and future storage.

CLIB also has training modules that are developed for investigators, radiologists, study personnel and clinical research associates. These training modules are configurable and cover the clinical trial protocol, the response criteria, imaging guidelines, site qualification process, image transfer, and any other aspects of the trial for which training would be useful. The program requires completion of the training modules before the site is qualified to enroll subjects.

CLIB utilizes a protocol-specific portal with secure image transfer, followed by real-time IQA performed CENTRALLY to document and report image acceptance or failure, so sites are able to resolve any imaging issues in near real time. The portal allows review and resolution of IQA issues, as well as complete tracking of scans, thus minimizing differences between the SITE and CENTRAL image databases. This process, as well as both the SITE and CENTRAL reading processes, is managed through the Radiant Sage RadVista Operations Manager Tool.

CLIB deploys a web-based client image analysis platform directly to the site and maintains a CENTRAL review instance, so that all scans are read both locally at the SITE and centrally (CENTRAL). Because the same image analysis tools are used, the same edit checks and derivation procedures eliminate manual errors, improving consistency in outcomes. The ability to deploy as a client also allows regionally based experts to act as SITE reviewers, thereby extending physician expertise to sites that may not have staff with the required experience. The technology also allows CENTRAL reviewers to be geographically and temporally distributed. While both reviews are done independently, and the CENTRAL review remains blinded, the outcomes determined by the SITE can be directly compared to the CENTRAL outcomes within the same system.

If needed, adjudication of differences by a third reader can occur, as the data produced CENTRALLY and by the SITE (including the annotation files created by the SITE radiologist) are available for review by the adjudicator. In this adjudication paradigm, images are still reviewed by two readers (a SITE and a CENTRAL reviewer); however, the adjudicated outcome also resolves and/or explains the SITE vs. CENTRAL differences. Monitoring the rate of those SITE vs. CENTRAL differences can serve as a mechanism for risk-based monitoring of the primary outcome variables. For example, if a site's adjudication results are accepted less frequently than those of its peers, this may indicate a lack of understanding of the protocol or the response criteria, providing a mechanism for identifying the need for focused training and subsequent continuous monitoring.
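
As a sketch of that risk-based monitoring idea, per-site adjudication acceptance rates could be tabulated and compared across the pool. The data layout and the flagging threshold here are illustrative assumptions, not CLIB's actual reporting:

    from collections import defaultdict

    def site_acceptance_rates(adjudications):
        """adjudications: iterable of (site_id, adjudicator_agreed_with_site).
        Returns each site's acceptance rate for comparison against its peers."""
        accepted, total = defaultdict(int), defaultdict(int)
        for site_id, agreed in adjudications:
            total[site_id] += 1
            accepted[site_id] += int(agreed)
        return {site: accepted[site] / total[site] for site in total}

    # Example: site B is accepted less often than its peers and is flagged
    # as a candidate for focused retraining on the response criteria.
    records = [("A", True), ("A", True), ("A", False),
               ("B", False), ("B", False), ("B", True)]
    rates = site_acceptance_rates(records)
    print([site for site, rate in rates.items() if rate < 0.5])  # ['B']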

Images that are rapidly uploaded to the portal can also be used for real-time SITE and CENTRAL eligibility review, to ensure that studies are powered accordingly. Similarly, real-time confirmation of PD can be used to minimize the number of subjects taken off protocol prior to a true progression event, which minimizes informative censoring. This is particularly important in the era of immunotherapies, where subjects may have pseudoprogression and a delayed response to therapy. Additionally, all images, as well as their SITE and CENTRAL annotations, will be maintained in a single image database for regulatory review.

CLIB incorporates virtually all aspects of the SITE and CENTRAL review process and can accommodate a "dashboard" review. This high-level review includes operational components such as, but not limited to, scans received, unresolved IQA issues and the number of scans read. It could also be configured to compare SITE vs. CENTRAL outcome differences, including the presence of measurable disease at baseline, the number of target lesions chosen at baseline, the presence or absence of brain metastases at baseline and the rate of change in the target lesions. This, too, could be used as a mechanism for monitoring.
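
A dashboard comparison of that kind reduces to tabulating subject-level SITE/CENTRAL agreement on each monitored variable. The record layout below is hypothetical and used only for illustration:

    def discordance_by_variable(records, variables):
        """For each variable, return the fraction of subjects on which
        the SITE and CENTRAL reads disagree."""
        counts = {v: 0 for v in variables}
        for rec in records:
            for v in variables:
                if rec["site"].get(v) != rec["central"].get(v):
                    counts[v] += 1
        n = max(len(records), 1)  # avoid division by zero on an empty list
        return {v: counts[v] / n for v in variables}

    # Example with variables drawn from the dashboard description above.
    records = [{"site": {"n_target_lesions": 3, "measurable_at_baseline": True},
                "central": {"n_target_lesions": 2, "measurable_at_baseline": True}}]
    print(discordance_by_variable(records, ("n_target_lesions", "measurable_at_baseline")))
    # {'n_target_lesions': 1.0, 'measurable_at_baseline': 0.0}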

SITE vs. CENTRAL differences will exist based on workflow and data-content differences, and these differences concern sponsors and regulators. Procedural mechanisms and technology should be leveraged to minimize them, and where differences persist, they should be understood and explained. CLIB may represent a technology solution that companies should fully evaluate to address the differences between SITE and CENTRAL reviews.

Bibliography

1. United States Food and Drug Administration. Guidance for Industry: Clinical Trial Endpoints for the Approval of Cancer Drugs and Biologics. Rockville, MD: US Department of Health and Human Services; 2007.

2. United States Food and Drug Administration. Guidance for Industry: Developing Medical Imaging Drug and Biologic Products, Part Three: Design, Analysis and Interpretation of Clinical Studies. Rockville, MD: US Department of Health and Human Services; 2004.

3. FDA Review of BLA 98-0369, available at accessdata.fda.gov. Accessed 11 March 2016.

4. Thiesse, JCO, Vol. 15, No. 12, Dec 1997, pp. 3507-3514.

5. Gwyther, Annals of Oncology, Vol. 8, 1997, pp. 463-468.

6. Miller, Presentation on Phase III Trial of Capecitabine plus Bevacizumab vs. Capecitabine Alone in Women with Metastatic Breast Cancer (MBC) Previously Treated with an Anthracycline and a Taxane, San Antonio Breast Cancer Symposium, Dec 2002, San Antonio, TX.

7. Rothenberg, JCO, Vol. 21, No. 11, June 2003, pp. 2059-2069.

8. Baumann, Data Discrepancy in Chemotherapy Trials for Pancreatic Cancer, ASCO 2007.

9. Borradaile, Analysis of the Rate of Missing Data, the Rate of Discordance Between Readers, and the Rate of Site versus Central Discordance in Clinical Studies of Recently Approved Breast Cancer Agents That Have Used Blinded Independent Central Review, San Antonio Breast Cancer Symposium, 2008.
