
Sunday, May 7, 2017

A Failed Study Fails Everyone

Source: LinkedIn



Question asked by:

Norman Goldfarb, Editor at Journal of Clinical Research Best Practices:
A Failed Study Fails Everyone || When a study fails because not enough people enroll, the people who did enroll just wasted their time, probably endured discomfort, and possibly risked their health. While we can't expect every study to achieve its enrollment goals, we should make a serious effort to minimize the number of failed studies.



Answers/Opinions:

Anthony D. Salerno 
Great, thought-provoking question! Your suggestions are awesome. I'd like to put the power in the hands of the sites! Devana Solutions, LLC has a way for Sponsors and CROs to identify the best sites by any clinical research performance metric in real time using business intelligence software. With your suggestions, Norman Goldfarb, and our technology, we could eliminate underperforming sites completely.


Jamie Kinsler

I think it tends to be molecule related, and a matter of PI interest. Agreed, a failed study that puts patients through the wringer, biopsies and such, only to "fail" is very discomforting.

Norman Goldfarb
If I were planning a study, I'd want to know what study coordinators think about recruitment. As far as the statistics go, power calculations are an art, and not an easy one, and some sponsors are probably better at it than others.
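
For readers outside the statistics, here is a minimal sketch of the arithmetic a power calculation rests on, written as plain Python; the effect size, standard deviation, and dropout rate are hypothetical placeholders, not figures from any study in this thread:

from math import ceil

# Two-arm trial with a continuous endpoint: evaluable subjects per arm
# needed to detect a difference in means `delta` with standard deviation
# `sigma`, at two-sided alpha = 0.05 and 80% power.
z_alpha = 1.96   # normal quantile for two-sided 5% significance
z_beta = 0.84    # normal quantile for 80% power
delta = 5.0      # hypothetical minimum clinically important difference
sigma = 12.0     # hypothetical standard deviation of the endpoint

n_per_arm = 2 * ((z_alpha + z_beta) ** 2) * (sigma ** 2) / (delta ** 2)

# Inflate for an assumed 20% dropout/screen-fail attrition, which is
# where the statistics meet the enrollment problem discussed here.
dropout = 0.20
n_to_enroll = ceil(ceil(n_per_arm) / (1 - dropout))

print(f"Evaluable per arm: {ceil(n_per_arm)}, to enroll per arm: {n_to_enroll}")

The fragility Norman alludes to is visible in the formula: halve delta and the required enrollment roughly quadruples, which is how an optimistic assumption quietly turns into an unreachable enrollment target.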

Wessam Sonbol
I wonder how much of the existing data on the indication and the selected sites is used. Combining data across publications and clinical trials to filter for sites that have the experience, then running this information against claims data, can provide so much power. I know that some of this is being utilized, but how deep does it go?

Norman Goldfarb
My understanding is that most sponsors rely on KOLs, if anyone, while designing a study; only a few sponsors get the sites involved at that stage. A lot of sponsors use big databases to determine things like prevalence and to inform site selection.

Helen Russo, M.S.
I also think that's part of the problem. I've seen KOL sites enroll subjects that never should have been enrolled. Honestly, after 25 years, KOL doesn't mean a lot to me without "back-up." Another issue I've found over the years is that monitor evaluations aren't listened to as much as they should be. Many times I've /not/ recommended sites that had the numbers but weren't adequate on quality. I wasn't listened to, they were used again, and again they produced garbage. I take evaluating sites very seriously and I don't make negative recommendations lightly, so it's really hard not to say, "told you so..."

Failed studies are expensive, both in the time and costs to rescue them and those required to redo them. A huge issue, and one that does go back to study coordinators, is screening interview questions. Too often I've seen leading questions such as, "Do you have x times/month?" instead of, "How many do you have per month?" I had one site I picked up midway through a study, a high enroller, that had a 70% treatment fail rate. I had noted that numerous subjects didn't meet study criteria. The site complained, of course, and they got their way, sort of: the subjects were allowed to stay, but so was I 😉. 70% failed to treat the indication in the specified time period, likely because the screening questions led to a result not conducive to enrolling proper subjects. Monitors also need to be appropriately trained in reviewing screening documents to ensure questions are structured to elicit an accurate response, not a desired one.

Larry Ajuwon
Norman, what a splendid question. Sounds like a project idea for CTTI, ISPOR, and health economists. They could help quantify the costs of failure (or negative ROI in finance speak) and spur industry action.

Jessica Herrera Rodriguez
We never take a study we can't enroll for! You can't make a living off screen fails!

John Wilson
Good question, Norm. Let's not forget the ethical implications of a subject going through a clinical trial, with multiple exposures to unproven therapeutic intervention, only to have it count for naught.

Author Yao
I think it would be interesting to investigate any potential relationship between recruitment and 1) recruitment strategy (incentives, ad platforms, etc.), 2) study topic/discipline, 3) extent of participatory actions, and 4) preparedness (how many wasted studies were caused by skipping measures such as power calculations--believe me, it happens). Sadly, I wouldn't know where to collect data on failed studies, given they usually never see the light of day...

Kim Walpole
What a thought-provoking post - really appreciated your points around the risks to patients as well! This is something we're working on at Trials.ai – evaluating study protocols and their likelihood of reaching enrollment goals (among other things) using artificial intelligence.

Anna Shagako (Chagako)
Careful site selection is one of the topics to consider - there are two different groups of investigators, KOLs and "trialists", and to achieve your goals in the study, including recruitment goals, you have to select the right mixture of both types of investigators. The proportions in this mixture differ from study to study, depending on many factors, from TA to complexity. And, if we are speaking about international clinical trials, do not forget to check the recruitment history for this indication in the different countries.

Jennifer Clauson
I would love to see tighter collaboration between sites and sponsors at the study design planning stage.
It's a difficult and complex balance to design eligibility criteria such that the participants are the best group to test the drug from a scientific and clinical point of view, AND are wide enough to enroll human beings in the real world. One PI and I were reviewing a synopsis for a drug she was very interested in, a novel mechanism of action for an indication with few treatment options. As we dug through the I/E, it was looking like we had a good population to recruit from...and then we hit the exclusion criterion of OSA. I asked her how many of her patient pool had a Dx of OSA and she said "most of them". Then she remarked, "They are looking for a needle in a haystack - this study will never enroll." If we could get a dialogue between sites and sponsors to home in on practical issues like this, I believe we could cut way down on wasted effort, time, money and frustration.

Norman Goldfarb
Amen! and Amen!

Helen Russo, M.S.
I've looked at many protocols that were written in such a way that I knew they'd be impossible to enroll (and they were). When I learned to write protocols, I was taught that your inclusion criteria should outnumber your exclusion criteria; however, in practice I've rarely found this to be the case. Also, sometimes I/E criteria are written so vaguely that you end up with more questions as to whether a potential subject qualifies. It's a fine balance, surely, to find the appropriate subjects without sacrificing the data or introducing confounding variables.

Jennifer Clauson
Helen Russo, M.S. I also have seen vague criteria - so it seems to me that getting operational people who are out in the weeds enrolling to weigh in during the protocol development phase would avoid so much lost time and aggravation - and most importantly, result in fewer failed studies.

Juhi Chawda
Failed studies are expensive, both in terms of time and money. Failure and success depend on study design and ethical implications. Maybe these criteria lead to further sub-points, like: a suitable site with a large enough patient pool, recruitment strategy, collaboration between site and sponsor, etc.

Lynda Cedar, Visionary, Early-stage Clinical trials
A site that is qualified for this kind of "special" study, one with know-how acquired through experience on the ground, should be able to gauge the execution of a protocol on the ground, allowing the sponsor to plan the budget and the milestones. If this site (or a few sites) anticipates difficulty in recruiting participants for the study, the alternative would be to plan a pilot project with several sites participating in recruitment, and then (1) select the most successful site, or (2) accept that your only choice is a multi-center or even international study managed by the site of your choice... at this stage, you would certainly have your real KOL in mind. This strategy helps the sponsor plan based on real data and the true challenges. It is not popular for reasons of cost and time, but it's better than ending up in the situation described by Norman Goldfarb above.

Charles H Pierce, MD, PhD, FCP, CPI
I agree that it is more than just important and valuable for the entire Clinical Research community to know about and be aware of the issues and problems that face our "profession". Thank you for your tireless work in this area. Best regards, Charles

Lynda Cedar, Visionary, Early-stage Clinical trials
How can we still end up saying "...we should make a serious effort to minimize the number of failed studies"? We must make a serious effort to do so. Clinical research is not white or black; we know that not all studies are expected to pass the clinical test. Otherwise, there would probably be no medication left to find and discover.