Unpacking Averages: FDA Review Time for 510(k)s
Monday, October 4, 2021

In this column, over the coming months, we are going to dig into the data on FDA regulation of medical products, going deeper than the averages that FDA publishes in connection with its user fee obligations.  Many of those averages mask a high degree of variability, and it’s important for industry to have a deeper understanding.  In each case, we will offer a few preliminary observations on the data, but we would encourage a conversation around what others see in the data.

Chart

This is an interactive chart that you can explore by clicking on the colors in the legend to see how specific therapeutic areas stack up against the average.

Methodology

We want to understand FDA’s performance generally with regard to review times for 510(k)s across all medical devices.  Using data available from openFDA, we selected almost 12 years of data, from January 1, 2010, through September 1, 2021, based on the date FDA received the premarket notification.  Data older than that are probably not terribly relevant.  We further filtered for traditional 510(k)s, because special and abbreviated submissions have different review processes, and we likewise removed any that had received expedited review.  We then removed any product codes that had three or fewer submissions during that time, to get rid of anything that was simply too anecdotal, too noisy.  That filtering left us with just over 25,000 submissions spanning 852 product codes (“pro codes”).
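
To make the methodology concrete, here is a minimal sketch in Python of how such a dataset could be pulled and filtered.  The file name and field names (date_received, decision_date, clearance_type, expedited_review_flag, product_code) are assumptions based on openFDA’s published device/510k schema, not a description of our actual pipeline, so verify them against the openFDA documentation before relying on them.

import json

import pandas as pd

# Load a local export of the openFDA 510(k) dataset.  The file name and
# field names follow openFDA's published device/510k schema and should be
# verified against https://open.fda.gov/apis/device/510k/.
with open("device-510k-0001-of-0001.json") as f:
    records = json.load(f)["results"]
df = pd.DataFrame(records)

df["date_received"] = pd.to_datetime(df["date_received"])
df["decision_date"] = pd.to_datetime(df["decision_date"])

# Window the data on the date FDA received the submission.
df = df[(df["date_received"] >= "2010-01-01")
        & (df["date_received"] <= "2021-09-01")]

# Keep traditional 510(k)s and drop anything flagged for expedited review.
df = df[df["clearance_type"] == "Traditional"]
df = df[df["expedited_review_flag"] != "Y"]

# Drop product codes with three or fewer submissions in the window.
counts = df["product_code"].value_counts()
df = df[df["product_code"].isin(counts[counts > 3].index)]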

To calculate the review time, we used the difference between the date received and the date decided, although we realize that FDA has additional data it uses to calculate its actual review time in a more nuanced way, differentiating between time on FDA’s clock and time when the clock is stopped because the manufacturer is supposed to be doing something.  We calculated an average for each individual pro code.  The x-axis in the graph is simply all of the product codes sorted by average review time, from the quickest to the longest.
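
Continuing the sketch above, the calculation itself is only a couple of lines of pandas, and the sorted result supplies the x-axis ordering for the chart.

# Review time as total calendar days between receipt and decision; this
# deliberately ignores FDA's clock-stop accounting described above.
df["review_days"] = (df["decision_date"] - df["date_received"]).dt.days

# Average review time per product code, sorted quickest to longest.
code_means = df.groupby("product_code")["review_days"].mean().sort_values()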

We wanted to add in an average, and the most natural choice probably would’ve been the average of the pro code averages included in the graphic.  But that ignores the fact that some pro codes have many more submissions than others.  The average of the pro code averages was 176.5 days, while the average across all 25,000-plus submissions was 163.5 days.  The latter is lower because, apparently, the quicker pro codes tend to contain more devices.  In the chart, we went with the simple average of submissions, as that is most akin to the data that FDA typically publishes.
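
In code, the difference between the two averages is just a difference in weighting: the mean of the per-code means treats every pro code equally, while the pooled mean weights each pro code by its submission volume.

# Unweighted: mean of the per-code means (reported above as 176.5 days).
mean_of_means = code_means.mean()

# Pooled: mean over every submission (reported above as 163.5 days),
# which implicitly weights each product code by its submission count.
pooled_mean = df["review_days"].mean()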

Observations

We would note that we aren’t entirely sure of the range of factors that drive review times.  Certainly it would seem that higher risk and greater complexity would be likely to lead to longer review times.  But in the years that we’ve been doing this work, those have not by themselves been reliable predictors of how long a review will take.  Novelty also matters, although novelty is less of a factor in the 510(k) process because the process is based on a claim of substantial equivalence to an existing device.  And it’s pretty obvious that a lot of administrative circumstances affect review times as well, such as high reviewer turnover in a branch.  At any rate, these data do not tell us why certain product codes have longer review times.  We will leave that to future inquiry.  Here we just want to tease apart the variance.

Big Picture

At each end of the graph, we see sharp nonlinear growth, presumably reflecting what are in a sense outliers.  On the left-hand side, there is a rapid acceleration from the quickest reviews of about 50 days up to about 100.  On the right-hand side, there is a quick increase from 300 days to the very top at over 500.  But in between those two extremes, from a review time of about 100 days to about 300, the climb is fairly steady and linear.  That’s a bit surprising, and it suggests that there really is no such thing as a typical review time: there is no plateau in review times around the mathematical average, and indeed we don’t see any plateaus at all.  Apparently, it really does matter which product we are talking about when trying to predict a review time.

Therapeutic Areas

Remember that in FDA’s organizational chart, reviews within the Office of Device Evaluation (“ODE”) are organized by therapeutic area.  That makes sense, as you generally want reviewers with the relevant therapeutic expertise reviewing the devices in that area.  In this graph, each product code is assigned to its applicable therapeutic area.
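
For readers following the code sketch, one plausible way to approximate that assignment from the openFDA data is the advisory committee attached to each record; treating the advisory_committee_description field as a proxy for therapeutic area is our assumption, not FDA’s official mapping.

# Approximate each product code's therapeutic area by the most common
# advisory committee label among its submissions (an assumed proxy).
area_by_code = (
    df.groupby("product_code")["advisory_committee_description"]
      .agg(lambda s: s.mode().iat[0])
)
code_table = code_means.to_frame("avg_review_days").join(area_by_code)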

Notice that none of the product codes in a given therapeutic area are extremely clustered, either low or high.  That suggests that no particular therapeutic review branch is substantially quicker than the rest.  Within that general observation, though, there are definitely some small clusters of review times for product codes within the different therapeutic areas.

It would actually be pretty remarkable if an organization the size of ODE could achieve uniform review times across all of its review branches, but the distribution here is unexpectedly evenhanded.
