Pharmaceutical Economics and Policy: Perspectives, Promises, and Problems

Stuart O. Schweitzer and Z. John Lu

Print publication date: 2018

Print ISBN-13: 9780190623784

Published to Oxford Scholarship Online: May 2018

DOI: 10.1093/oso/9780190623784.001.0001


Drug Approval Process in the United States

Chapter:
12 Drug Approval Process in the United States
Source:
Pharmaceutical Economics and Policy
Author(s):

Stuart O. Schweitzer

Z. John Lu

Publisher:
Oxford University Press
DOI:10.1093/oso/9780190623784.003.0013

Abstract and Keywords

The drug approval process in any country involves a balancing of conflicting social objectives: safety and access. Faster approval leads to quicker access to potentially life-saving medicine, yet could also lead to false positives or, worse, unsafe products on the market. The United States has a widely respected but stringent and rigorous review process overseen by the Food and Drug Administration. This chapter performs an in-depth analysis of the pharmaceutical regulatory approval process in the United States. Standards, guidelines, and critical milestones for basic research, animal testing, and clinical trials in the drug R&D process are explained. It highlights major drug legislation since the beginning of the twentieth century and how this legislation has helped the FDA become the gold standard in pharmaceutical regulation worldwide. The registration pathways for generics and biosimilars are also discussed.

Keywords:   FDA, clinical trials, New Drug Application, post-approval research, cost

The drug approval process in any country involves a balancing of conflicting social objectives: safety and access. Ideally, a medicine should never do harm, but this goal is largely unattainable; the best we can hope for is that a drug’s benefits outweigh its side effects. The process of evaluating safety and efficacy takes time and requires a sufficiently large number of animal and human subjects in testing. But for many patients in urgent need of a potentially life-saving medicine, a delay in approval often means a lost opportunity for a cure, or prolonged suffering. To the manufacturer, a long review and approval process also represents lost earnings from the drug. Thus the drug approval process in any country will inevitably be criticized by those who would like to shift priorities one way or the other (New York Times 2004; Wall Street Journal 2016).

The United States has a widely respected but stringent and rigorous review process overseen by the Food and Drug Administration, which is part of the Department of Health and Human Services. The FDA sets guidelines for basic research and animal testing that must be done before human tests can begin. Once human tests are allowed, they take place in progressively larger phases so that unsafe products can be recognized before larger numbers of subjects are exposed. The process has generally worked well, but criticism of delays has led to numerous reforms that have largely been effective in reducing the approval time for new drugs. Nevertheless, the success rate of obtaining FDA approval once a potential compound enters human testing is low: the most recent estimate, based on over 800 firms (including many very small start-up companies) and almost 4,500 compounds, puts the likelihood at merely 10 percent for all indications and 15 percent for first indications (Hay et al. 2014).

The drug approval processes in the EU, Japan, Canada, and other advanced economies are quite similar, thanks to the standardization efforts by the International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use (ICH). Less developed economies may find it more challenging to implement the sophisticated ICH standards and criteria, due to a lack of both infrastructure and financial resources.

In this chapter, we will perform an in-depth analysis of the pharmaceutical regulatory approval process in the United States. We will highlight major drug legislation since the beginning of the twentieth century, and how that legislation has helped the FDA become the gold standard in pharmaceutical regulation worldwide. The registration pathways for generics and biosimilars will also be discussed. We will turn our attention to the European regulatory system in the next chapter.

The Drug Approval Process in the United States

Upon completion of toxicology and safety testing in animals, drug companies are required to file an Investigational New Drug application with the FDA prior to any human testing. The purpose of an IND application is to demonstrate to the regulatory agency that the product under investigation will not present any unreasonable risks to humans, based on all existing information. An IND application contains the drug sponsor’s research plans, details of manufacturing processes, and the results of laboratory and animal tests to date (21 CFR 312.23). The clinical section contains a detailed description, or protocol, of the initially planned clinical trials and a general overview of the studies that will follow. The manufacturing section describes the facilities, equipment, and techniques the sponsor will use to produce the drug.

Unless the FDA objects to the IND application within thirty days, it goes into effect automatically and clinical tests can begin. The FDA will place a hold on the clinical program if it finds any problem or issue with the IND document that needs further clarification before the drug can be administered to humans.

For administrative purposes and as a means of prioritizing its work, the FDA rates each drug for which an IND is received according to the drug’s novelty and the agency’s subjective judgment of the drug’s therapeutic potential. Until 1992, the FDA used a five-category rating system of therapeutic importance: “A” for important therapeutic gain; “B” for modest therapeutic gain; “C” for little or no therapeutic gain; “AA” for AIDS drugs; and “V” for a designated orphan drug (Reekie 1978, table 1). Beginning in 1992 the rating scheme was changed to include only two categories, “P” (priority) for the most important drugs and “S” (standard) for all other drugs.

Phase I Clinical Trials

Phase I studies are small trials (generally fewer than 100 participants), usually involving only healthy volunteers, to assess safety by determining how the body absorbs and eliminates the drug and to document the responses it produces.1 At this point the drug company must once again decide whether to continue with the project. Based on the information accumulated to this point, the company tries to determine whether the product is of sufficient promise to warrant the investment of the necessary resources to market it. In addition, the company will try to estimate the manufacturing costs of the drug (Faust 1971). This is often the first attempt to estimate these costs, because it is the first stage at which an accurate description of the actual drug product is available.

During Phase I trials, the FDA can impose a clinical hold, under which it does not allow the study to begin or stops a trial that has started, for reasons of safety or because of a sponsor’s failure to accurately disclose the risks of the study to investigators. Sponsors can request the release of a clinical hold by submitting an official response addressing all pertinent issues identified in the hold (HHS 2003). The FDA then has thirty days to respond to such requests. Although the FDA routinely provides advice in such cases, investigators may choose to ignore any advice regarding the design of Phase I studies in areas other than patient safety.

The clinician, at this point, is instrumental in reevaluating the number of patients necessary for the clinical tests, the length of time that each testing stage will require, the specific information that must be developed to prove efficacy for the compound, the likelihood that such information can be developed, and the overall and yearly expenditure estimates for the project (Wiggins 1981).

On average, a Phase I research program from start to finish takes about twenty months to complete (DiMasi, Grabowski, and Hansen 2016). Once Phase I trials are completed, the likelihood of an investigational product moving on to the next phase of clinical trials is high. According to a recent study, two out of three investigational products (65 percent) completing Phase I advanced to Phase II (Hay et al. 2014).

Phase II Clinical Trials

Phase II trials are designed to obtain preliminary data on the efficacy of the drug for a particular indication or indications in patients with the disease or condition. This phase of testing also helps determine the common short-term side effects and risks associated with the drug. Other variables often examined in Phase II include the drug’s dose-response relationship and how various patient groups (e.g., men versus women) may respond to the drug differently. Phase II studies are typically well controlled, closely monitored, and conducted in a relatively small number of patients, usually several hundred people. Various doses of the drug are compared so that the future large Phase III trial(s) can test the most successful dosage.

Under the 1962 Drug Amendments, substantial evidence of efficacy in the intended use of the drug is required before marketing approval can be granted. The FDA does not specify the trial design, but the design most likely to show a significant effect of the drug is a comparison against an inert substance (a placebo). Studies comparing the experimental drug to other drugs on the market are possible, but they entail more risk for the sponsor (for example, if the experimental drug proves less effective than the already marketed drug), and without a placebo control group efficacy still cannot be established. Therefore, most trials submitted to the FDA compare the experimental drug to a placebo. The other key aspect of study design is blinding, which reduces the likelihood of bias. A study is single-blind if the patient does not know whether he or she is receiving the active drug or the placebo; it is double-blind if even the investigator does not know which patients are receiving the experimental drug. Therefore, most high-quality clinical trials are randomized, double-blind, placebo-controlled studies (Meinert 1986).
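
The allocation step behind such randomized, blinded designs is easy to sketch in code. The following permuted-block scheme is one common way to generate balanced, unpredictable assignments; the function name and parameters here are illustrative, not from the chapter:

```python
import random

def permuted_block_assignments(n_patients, block_size=4, seed=2024):
    """Assign patients to 'active' or 'placebo' in randomly permuted
    blocks so the two arms stay balanced throughout enrollment."""
    assert block_size % 2 == 0, "block must split evenly between arms"
    rng = random.Random(seed)  # fixed seed so the allocation list is reproducible
    assignments = []
    while len(assignments) < n_patients:
        block = ["active"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        rng.shuffle(block)  # order within each block is unpredictable
        assignments.extend(block)
    return assignments[:n_patients]

arms = permuted_block_assignments(12)
print(arms.count("active"), arms.count("placebo"))  # balanced: 6 6
```

In a double-blind trial, such a list would be held by an unblinded statistician or pharmacy, with patients and investigators seeing only coded kit numbers.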

Phase II trials usually enroll only patients for whom the drug is intended to be of benefit. When successful, Phase II trials usually provide the first significant evidence of efficacy. Additional safety data are also obtained during this phase.

During Phase II clinical trials, corporate management will take a great deal of interest in drugs under development and request detailed information, including the drug’s expected manufacturing cost, price, expected sales at different prices, estimated length of time for the drug to reach market, and the development costs that are likely to be incurred through the marketing stage (Wiggins 1981). This updated information is used to make better decisions about which drugs should be candidates for further testing and which drugs might be dropped.

It is important for management to assess the continued feasibility of the drug in order to prevent further financial loss to the company, as on average only one out of every three (32 percent) compounds entering Phase II trials will advance to the next phase (Hay et al. 2014). There is also evidence that this transition probability from Phase II to Phase III depends on the therapeutic area of the intended indication, with drugs for infectious diseases having the highest likelihood of advancing (46 percent) and those for cancers and cardiovascular diseases the lowest (26–28 percent) (Hay et al. 2014, table 5). The average duration of Phase II is 30.5 months (DiMasi, Grabowski, and Hansen 2016).

Phase III Clinical Trials

The third and final premarketing clinical development phase involves large-scale controlled clinical trials of a drug’s safety and efficacy at the dose and duration that will subsequently be marketed. Phase III studies are considered the definitive evaluation of an investigational drug’s effect in humans, and the large sample size, typically ranging from several hundred to several thousand patients, increases the statistical power of the test, that is, the likelihood that actual benefits will be found statistically significant (Meinert 1986, ch. 9). Phase III studies gather precise information on the drug’s efficacy for specific indications, determine whether the drug produces a broader range and severity of adverse effects than those exhibited in the smaller study populations of Phase I and Phase II, and identify the best way of administering and using the drug for the purpose intended. This information forms the basis for deciding on the content of the product label and package insert if the drug is approved (OTA 1993). Thus Phase III trials are often called “registration trials.”
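
The link between sample size and power can be made concrete with a stylized calculation. The sketch below uses the standard normal approximation for a two-arm comparison of means; the effect size, standard deviation, and significance level are illustrative assumptions, not figures from the chapter:

```python
from statistics import NormalDist

def two_arm_power(n_per_arm, effect, sd=1.0, alpha=0.05):
    """Approximate power of a two-sided, two-arm test comparing means.
    A larger n shrinks the standard error of the treatment difference,
    so a fixed true effect is more likely to reach significance."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)       # critical value, about 1.96 at alpha = 0.05
    se = sd * (2.0 / n_per_arm) ** 0.5      # standard error of the difference in means
    return z.cdf(effect / se - z_crit)

# Power rises steeply with enrollment for a modest true effect (0.25 SD):
for n in (50, 200, 500):
    print(n, round(two_arm_power(n, effect=0.25), 3))
```

With these assumed inputs, 50 patients per arm give power well below one-half, while 500 per arm push power above 95 percent, which is why confirmatory Phase III programs enroll hundreds to thousands of patients.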

Most Phase III studies are randomized, placebo-controlled, and often double-blind. However, even in Phase III trials, the FDA can impose a clinical hold if a study is deemed unsafe (as in Phase I and Phase II studies), or if the protocol design is clearly deficient in meeting its stated objectives. In recent years, there has been increased attention, often required by regulatory agencies in the United States and Europe, to assess the need for modifying treatment across different demographics (e.g., by age, gender, race, etc.) and concomitant diseases (e.g., presence of renal or hepatic function abnormalities), and in evaluating drug-drug interactions during clinical trials (Temple 2002). The increased complexity of the Phase III program certainly has contributed to the rising price tag of pharmaceutical R&D (DiMasi, Grabowski, and Hansen 2016).

Debate has arisen over whether separate clinical trials should be conducted in children, since enlisting children in clinical trials raises issues of increased risk and informed consent (Sharav 2003). Underrepresentation of minority groups in clinical trials has also become a recent focus. This issue potentially affects the generalizability and external validity of trial findings, which may have important repercussions for the safety and efficacy of the new drug, in addition to reducing the opportunities for subgroup analysis (Hussain-Gambles 2003).

Hay et al. (2014, table 5) estimated that about 60 percent of the Phase III programs are successful in that the new therapy will advance to the next stage: applying for marketing approval by the FDA. As in Phase II, the success rate depends on the disease area the drug is intended for, with drugs aimed at infectious diseases enjoying the highest likelihood (two out of three advancing), while cancer and cardiovascular disease have the lowest likelihood (45 percent and 53 percent, respectively). The average duration for the Phase III program is thirty-one months (DiMasi, Grabowski, and Hansen 2016).

New Drug Application

Unless the new drug fails one or several key endpoints in the Phase III trials, or is found to have a sizable incidence of very serious or life-threatening side effects, most manufacturers will compile a New Drug Application (NDA) for submission to the FDA after the Phase III program in order to request permission to market the new drug. The NDA is a very complex document that summarizes the results of all studies of the drug in animals and humans. The drug’s pharmacokinetic and pharmacodynamic properties and manufacturing information (including quality specifications) are also included in the NDA, as well as the proposed labeling should the product be approved. Most NDAs use a format and language recommended by the FDA (Marroum and Gobburu 2002).

The FDA has two sub-agencies that regulate and monitor the pharmaceutical research process: the Center for Drug Evaluation and Research (CDER) and the Center for Biologics Evaluation and Research (CBER). Drug sponsors seeking marketing approval for a new chemical, antibiotic, hormone, or enzyme drug product must file an NDA with CDER. Sponsors of biotechnological drug products and vaccines must file two applications with CBER: a product license application (PLA) covering the drug, and an establishment license application (ELA) covering the facilities manufacturing the product.

CDER has sixty days from the date a company submits an NDA to decide whether it contains sufficient information for the agency to conduct a substantive review. The review is carried out by the Review Division, and the results are given to the division director. If there is any disagreement concerning the strength of the scientific evidence for the drug, the NDA moves up one level to the office director for further consideration. If disagreement continues, the director of CDER will review the application and the proposed FDA decision (HHS 1990).

Once the agency reaches agreement, the Review Division director sends a letter to the company explaining its decision. The FDA can either approve the product for market (“Approval Letter”; 21 CFR 314.105) or declare that the agency would approve the drug once the company addresses its minor concerns (“Approvable Letter”; 21 CFR 314.110). Alternatively, the letter can state that the drug is not approvable (“Not Approvable Letter”; 21 CFR 314.120). The company has ten days to respond to either an approvable or not approvable letter, providing information regarding its attempt to correct the problem(s) stated in the FDA letter. If the sponsor does not respond within ten days, the FDA automatically withdraws the NDA. Most NDAs require at least one such amendment by the company.

The CBER review process for new products is similar to that of the CDER, although more emphasis is placed on the safety and quality of the processes and facilities used to produce a biological drug. Also, in contrast to CDER’s NDA review process, there are no statutory limits on the amount of time CBER reviewers may take to complete their review of PLAs and ELAs (OTA 1993).

By law, the FDA must complete its review of an NDA within 180 days once the application is accepted for filing. With each amendment to the NDA, however, an additional 180 days are granted (21 CFR 314.60). Even with the extensions, actual review time often exceeds the statutory allowances. DiMasi, Grabowski, and Hansen (2016) estimate that on average it takes about sixteen months from NDA submission to approval by the FDA. The likelihood of NDA success, however, is over 83 percent and appears invariant across disease areas (Hay et al. 2014, table 5).
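
The statutory clock just described (180 days from acceptance, extended with each amendment) can be expressed as a small calculation. The sketch below adopts one plausible reading of 21 CFR 314.60, restarting a 180-day window from each amendment date; the function name and dates are illustrative:

```python
from datetime import date, timedelta

def statutory_review_deadline(acceptance_date, amendment_dates=()):
    """Latest statutory deadline: 180 days from NDA acceptance, with each
    amendment granting a fresh 180-day window (one reading of the rule)."""
    deadline = acceptance_date + timedelta(days=180)
    for amended_on in amendment_dates:
        deadline = max(deadline, amended_on + timedelta(days=180))
    return deadline

# An NDA accepted in January and amended once in June:
print(statutory_review_deadline(date(2017, 1, 10), [date(2017, 6, 1)]))  # 2017-11-28
```

As the sixteen-month average review time shows, the clock in practice stretches well past this nominal deadline.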

Post-Approval Research

Often, additional studies are carried out after a drug has been approved. These are called Phase IV studies, and they serve several objectives. Some Phase IV studies focus on pediatric patients. Sometimes the manufacturer intends to collect information on the effectiveness of the new drug in an actual clinical practice setting, including data on resource utilization, costs, and patient quality of life, because data from the rather artificial, well-controlled environment of clinical trials may not be sufficient for this purpose. Another reason for a Phase IV study is to collect long-term safety information about a new drug. Federal regulation requires manufacturers selling approved drugs in the United States to notify the FDA periodically about the performance of their products, so that a more robust estimate of adverse events can be obtained and updated over time. Serious problems with a drug may show up only when the number of patients taking it is very large, when the exposure period is long enough, or when variables such as comorbidities, compliance, diet, and other medications are not controlled as tightly as in clinical trials; these additional studies can therefore yield valuable information on adverse events, and sometimes on efficacy. Phase IV studies conducted for this purpose are often called post-marketing surveillance studies (FDA 2016a).

In some cases, Phase IV trials may be required as a condition for FDA approval of a new drug. In the case of an investigational drug intended for a grave condition with no effective therapy, the FDA may allow fewer and smaller high-quality clinical trials in the Phase II and Phase III programs in order to expedite the testing and review process (Accelerated Approval Program; FDA 2016b). In return for the more rapid (and sometimes less rigorous) approval, the FDA requires that the new drug be more thoroughly studied post-approval, particularly with respect to safety, though efficacy endpoints will also be analyzed. The rationale for this regulatory approach rests on an educated guess that, even with an increased risk of undetected side effects, earlier access to the drug will result in an improved benefit-cost ratio in favor of treatment. This process was utilized frequently at the height of the HIV/AIDS crisis in the 1990s, and it is not uncommon when evaluating novel cancer drugs.

In addition, the FDA also encourages physicians to report adverse events with marketed products through the FDA MedWatch program (FDA 2016c). However, it is a voluntary reporting program, not an actual study, and only a small fraction of adverse events are actually reported (New York Times 2004). The bulk of the post-approval safety and efficacy data are expected to come from well-designed Phase IV studies.

In reality, many Phase IV trials have been observational in nature and less rigorous than Phase II or III studies. Even more troublesome has been the agency’s lack of enforcement power when a drug manufacturer fails to complete the post-marketing trial(s) agreed to as part of conditional approval (New York Times 2004; Hanauer 2007; Moore and Furberg 2014). The failure to complete a Phase IV commitment may be due to a deliberate lack of dedicated resources and urgency on the manufacturer’s part, a poorly designed study with difficult-to-follow collection procedures and an insufficient follow-up algorithm, or slow enrollment caused by a lack of patient interest. The bottom line is that the FDA has no statutory authority to regulate Phase IV studies, and the manufacturer faces few upside, and many downside, risks in continuing to collect adverse-event data (Institute of Medicine 2007, ch. 8).

There have been several high-profile FDA recalls of previously popular drugs since 2004, such as rofecoxib (Vioxx; Merck) in 2004, valdecoxib (Bextra; Searle) in 2005, isotretinoin (Accutane; Roche) in 2009, and drotrecogin alfa (Xigris; Eli Lilly) in 2011 (FDA 2016f). The first three recalls were due to serious, life-threatening side effects discovered after the drugs had been used widely for a number of years. The recall of Xigris, ten years after its initial approval, was due to a lack of benefit and therefore a questionable benefit-risk profile, based on a Phase IV trial required by the European Medicines Agency (EMA) as a condition for continued market authorization.

The case of bevacizumab (Avastin; Genentech) provides another example of the importance of post-approval study. The drug was initially approved in 2008 for metastatic breast cancer under the FDA’s Accelerated Approval Program while two large confirmatory trials in this patient population were still under way. Once the confirmatory trials were completed, Avastin had not demonstrated any benefit in survival, tumor progression, or quality of life, but had serious, potentially life-threatening side effects, compelling the FDA to revoke its conditional approval in 2011 (FDA 2011a). These developments have no doubt provided support for critics who argue that the agency has swung the benefit-risk pendulum too far toward efficacy and speed of approval, and should either reduce its reliance on Phase IV programs or obtain much broader oversight over them (McClellan 2007).

Since the thalidomide tragedy of the early 1960s (Randall 1990), we have come a long way in improving drug safety while speeding access to new products. Pre-marketing drug testing is far more rigorous (and expensive) than before, and the ability of the FDA to review NDAs efficiently has improved as well. The FDA has also taken steps to afford access to new drugs before all the evidence is in concerning long-term and rare side effects, through conditional approvals, voluntary surveillance, and post-marketing Phase IV studies. This is a conscious effort to increase access when the benefits of a drug appear to outweigh the potential harm, though it does expose patients to possible risks. This trade-off is difficult to accept but is, unfortunately, inevitable. In the international context, the FDA now allows manufacturers to use some European and Japanese drug trial data to support US drug applications, thereby reducing the amount of duplicative testing.

The appropriate weighting of safety and access is complicated because the two goals conflict. Safety is, of course, an important goal of a nation’s drug regulatory system, but the only way to increase assurance of safety is to raise the cost of drug approval by lengthening the approval process and increasing the sample size of Phase II and III investigations. An example illustrates how difficult this strategy can be. In the 1960s diethylstilbestrol (DES) was prescribed for pregnant women to ease morning sickness. The drug appeared to be innocuous, having no visible effects on either the mother or the newborn. But when the offspring reached puberty, some of them began to exhibit serious abnormalities of the reproductive system. The detective work of finding the cause of these abnormalities among teenagers was daunting, and only after lengthy investigation was the problem traced back to the drug that the mothers had taken during pregnancy. From a policy standpoint, the DES story was very difficult: it took twenty years to identify the adverse effects of the drug, and multiple generations of families were affected as a result. Only in hindsight can we imagine designing clinical trials that would have detected this side effect originally.

The Rising Cost of Pharmaceutical R&D

Table 12.1 summarizes much of the information discussed so far in this chapter. Pharmaceutical R&D is a very lengthy, risky, and costly undertaking. Only about one out of every ten compounds (10.3 percent) that emerge from the laboratories of pharmaceutical firms, both large and small, into the human testing stage eventually receives regulatory approval (Hay et al. 2014). The top firms, by virtue of size and experience, enjoy a much higher ultimate success rate of 30.2 percent (DiMasi, Grabowski, and Hansen 2016). On average, it takes more than ten and a half years to go from bench research to FDA approval, and requires nearly $2.6 billion (in 2013 dollars) in cash outlays and capitalized cost (DiMasi, Grabowski, and Hansen 2016). The preclinical phase and Phase III take the longest, followed by Phase II. On average, it takes sixteen months from NDA filing to FDA approval, significantly longer than the statutory 180-day window (although not all, or even most, of the delay is necessarily attributable to the FDA).

There are four sets of cost figures in Table 12.1. It is worth noting that all cost figures are taken from a recent study that includes only the largest (and unidentified) pharmaceutical firms, so the estimates may conceivably be on the higher end. The expected direct cost per investigational drug is the actual cash outlay by the manufacturer on R&D activities for a candidate drug, adjusted by the likelihood of its successfully completing each phase and advancing into the next (as not all will be successful). Because the R&D process is so lengthy and the cost of capital is expensive, the already large direct expenditure expands to a huge capitalized cost per investigational drug when the cost of borrowing (based on the prevailing rate of return on capital over the duration of R&D) is included in the total tally. The resulting total capitalized cost, $374.7 million (in 2005 dollars), according to DiMasi and Grabowski (2007), represents the fully loaded cost of ushering an investigational drug through the entire R&D process, from basic research to NDA filing. Yet this amount of expenditure only provides the manufacturer with an overall clinical success rate of 30.2 percent for the fifty premier firms, and barely over 10 percent when smaller, less experienced firms are included (Table 12.1).
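
The mechanics of capitalization can be illustrated with a stylized compound-interest calculation. The 10.5 percent real cost of capital and the 10.7-year horizon below are assumptions for illustration (the published studies use a more detailed phase-by-phase model), so the result is of the same order as, but not identical to, the table’s figures:

```python
def capitalized_cost(direct_cost, years_until_approval, cost_of_capital=0.105):
    """Illustrative only: an R&D outlay made `years_until_approval` years
    before approval, compounded forward at the firm's cost of capital."""
    return direct_cost * (1 + cost_of_capital) ** years_until_approval

# A $59.9M preclinical outlay tied up for ~10.7 years roughly triples,
# in the neighborhood of the table's $185.6M capitalized figure:
print(round(capitalized_cost(59.9, 10.7), 1))
```

The long preclinical lead time is why early outlays carry the largest capitalization multiplier.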

The total capitalized cost per approved drug, in contrast, is the total cost projected for a successfully approved new drug, and there are two estimates for it: $1,240.7 million (in 2005 dollars), based on an earlier cohort of drugs starting first-in-human (FIH) testing between 1983 and 1994 (DiMasi, Hansen, and Grabowski 2003), and $2,558.0 million (in 2013 dollars), based on a later cohort of drugs first entering the FIH stage between 1995 and 2007 (DiMasi, Grabowski, and Hansen 2016). Using the GDP price deflator (FRB 2016), we convert the first estimate from 2005 dollars to 2013 dollars: $1,241 million × 106.36 / 90.88 = $1,452 million. Comparing the real, inflation-adjusted numbers, we note that the total capitalized cost per approved drug is 76.2 percent higher in the later cohort.
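
This deflator arithmetic can be reproduced directly (the deflator values of 90.88 for 2005 and 106.36 for 2013 are those used in the text):

```python
def to_2013_dollars(amount_2005_millions, deflator_2005=90.88, deflator_2013=106.36):
    """Rescale a 2005-dollar figure to 2013 dollars using the GDP price deflator."""
    return amount_2005_millions * deflator_2013 / deflator_2005

earlier_cohort = to_2013_dollars(1240.7)  # 1983-94 cohort, restated in 2013 dollars
later_cohort = 2558.0                     # 1995-2007 cohort, already in 2013 dollars

print(round(earlier_cohort))                                # 1452
print(round(100 * (later_cohort / earlier_cohort - 1), 1))  # 76.2 (percent higher)
```

The same rescaling applies to any of the 2005-dollar rows in Table 12.1 when comparing them against the 2013-dollar estimates.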

The most recent study (DiMasi, Grabowski, and Hansen 2016) estimates cost over the Phase IV/post-marketing surveillance period, and it too is substantial: $312 million (2013 dollars). However, this figure is not included in the comparative analysis between different cohorts.

It can be readily seen that while the direct outlay incurred during the pre-clinical phase is only 35 percent of the total ($59.9 million out of $169.0 million), the capitalized cost in this period is almost 50 percent of the total ($185.6 million out of $374.7 million). This is because the longer duration from pre-clinical phase to FIH, compared to that from FIH to approval, translates into higher projected capital cost.

Table 12.1 Pharmaceutical Research and Development Process in the United States: Recent Estimates

| | Basic Research–Preclinical Phase | Phase I (first-in-human; FIH) | Phase II (safety, dose, efficacy signal) | Phase III (confirmatory safety/efficacy) | NDA filing | Phase IV/post-marketing surveillance | Total (Basic Research–Approval Only) |
|---|---|---|---|---|---|---|---|
| Number of study subjects (clinical trial only) | NA | < 100 | Hundreds | > 100 and up to thousands | > 100 and up to thousands | Variable | > 100 and up to thousands |
| Phase duration (months) | 31 | 20 | 30 | 31 | 16 | Variable | 128 (10.7 years) |
| Likelihood of phase success (n = 50 top firms) | Variable | 84% | 56% | 64% | Included in Phase III | NA | 30.2% (Phase I to approval)* |
| Likelihood of phase success (n = 800 firms, including many start-ups) | Variable | 65% | 32% | 60% | 83% | NA | 10.3% (Phase I to approval)* |
| Expected direct cost per investigational drug (2005 $, millions) | 59.9 | 32.3 | 31.6 | 45.3 | Included in Phase III | NA | 169.0 |
| Total capitalized cost per investigational drug (2005 $, millions) | 185.6 | 189.1 (inclusive of all stages from Phase I to approval) | — | — | — | NA | 374.7 |
| Total capitalized cost per approved drug, 1st estimate (2005 $, millions) | 614.6 | 626.1 (inclusive of all stages from Phase I to approval) | — | — | — | NA | 1,240.7 |
| Total capitalized cost per approved drug, 2nd estimate (2013 $, millions) | 1,098.0 | 1,460.0 (inclusive of all stages from Phase I to approval) | — | — | — | 312.0 | 2,558.0 |

Sources: PhRMA 2015, fig. 14; DiMasi, Grabowski, and Hansen 2016; Hay et al. 2014, table 5; DiMasi and Grabowski 2007; DiMasi, Hansen, and Grabowski 2003.

(*) The ultimate clinical success (FDA approval) rate is the product of the success rate in each phase of human testing and NDA review.
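The footnote's rule, that the overall clinical success rate is the product of the phase-by-phase success rates, can be checked against the figures in Table 12.1 (the small deviations from the table's 30.2 and 10.3 percent reflect rounding in the displayed phase rates):

```python
from math import prod

# Phase success rates from Table 12.1
top_50_firms = [0.84, 0.56, 0.64]         # Phase I, II, III (NDA review included in Phase III)
all_800_firms = [0.65, 0.32, 0.60, 0.83]  # Phase I, II, III, NDA review

print(f"{prod(top_50_firms):.1%}")   # 30.1% (table reports 30.2%)
print(f"{prod(all_800_firms):.1%}")  # 10.4% (table reports 10.3%)
```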

Major Drug Legislation in the United States

Pure Food and Drug Act of 1906. Enacted by Congress in 1906, this was the first federal law regulating food and drug products in the United States. The law was designed to raise the quality and purity standards of food and drugs moving in interstate commerce by requiring that the active ingredients in these products be clearly and accurately labeled. The purity levels for drugs were established according to the United States Pharmacopeia. A manufacturer was responsible for the expense of seizure and destruction if its goods were found to be “misbranded” or “adulterated.” Further, the violating firm’s reputation would suffer, as each conviction was legally required to be published in the Notices of Judgment (Young 1989). This law probably did more to improve the safety of food in America than of pharmaceutical products, and by the 1930s it was becoming obsolete.

Federal Food, Drug, and Cosmetic Act of 1938 (FD&C Act). This law was passed by Congress in 1938 to replace the earlier Pure Food and Drug Act, after (p.336) the deaths of more than one hundred patients, many of them children, who took a poisonous medicinal product for minor infections. The drug, sulfanilamide, had been very effective in tablet form in treating strep throat. The manufacturer developed a new liquid form by using diethylene glycol (the same chemical used in antifreeze) to dissolve the drug. As there was no legal requirement that safety studies be conducted on new drugs or new formulations, the eager manufacturer marketed the new product, called Elixir Sulfanilamide, without knowledge of the deadly property of the liquid solution (Ballentine 1981). The FD&C Act significantly increased the FDA’s power and authority to regulate pharmaceutical and food products, requiring that manufacturers demonstrate the safety of new drugs to the FDA before marketing them to the public (premarket proof of efficacy was not required until the 1962 amendments). The act has been amended many times since.

Kefauver-Harris Drug Amendments of 1962. Under the FD&C Act, drug manufacturers could begin to market a drug if the FDA did not act within sixty days to prevent its release, and the quality of drug manufacturing was not standardized or enforceable by the agency. The devastating thalidomide disaster in Europe provided a powerful impetus to further strengthen the FDA’s authority over the safety, effectiveness, and quality of drug products, leading to the Kefauver-Harris Drug Amendments. Thalidomide was introduced by a German firm as a sedative for pregnant women in the 1950s and was touted as exceptionally safe, since a minimum lethal dose could not even be established in animal testing. Unfortunately, animal testing was not sufficient to establish safety in pregnant women, but that was discovered too late. By the time the drug was recalled in Europe, Japan, Canada, and other countries, an estimated 5,000 to 10,000 babies worldwide had been born with severe deformities and debilities (Randall 1990). Only a dozen or so cases of thalidomide-related birth defects occurred in the United States, thanks in large part to the FDA’s refusal to approve the drug: the medical officer assigned to review it, Canadian-born Dr. Frances Kelsey, did not consider the submitted safety data sufficient. The thalidomide tragedy reaffirmed the vital importance of comprehensive and enforceable safety regulation of pharmaceutical products. Co-sponsored by Sen. Estes Kefauver of Tennessee and Rep. Oren Harris of Arkansas, the Kefauver-Harris Drug Amendments were passed unanimously by both houses of Congress and signed into law by President Kennedy in 1962 (FDA 2012). The amendments required that a drug’s safety and effectiveness be demonstrated in adequate and well-controlled clinical studies, and that no drug be marketed in the United States before FDA approval. Further, manufacturers were required to report any serious side effects after an approved drug entered the market. The Kefauver-Harris Drug Amendments of 1962 are regarded as the foundation of modern drug approval, and have made (p.337) the pharmaceutical development process in the United States the gold standard in the world.

The Orphan Drug Act of 1983. To further stimulate private industry R&D in areas with modest market potential, Congress enacted the Orphan Drug Act of 1983 to encourage the development of treatments for rare diseases and conditions. The legislation contains four incentives: FDA assistance in protocol design for NDA or Product License Application (PLA) submissions, research grants for pre-clinical and clinical studies of orphan products, seven years of exclusive US marketing rights for the first firm receiving NDA approval for an orphan drug within a class, and a tax credit for clinical research expenditures. The drug company requesting orphan designation for an IND must show that the disease or condition the drug is intended to treat affects fewer than 200,000 persons in the United States or, if it affects more than that number, that there is no reasonable expectation that the cost of developing and manufacturing the drug will be recovered from sales in the United States (FDA 2013a).

Drug Price Competition and Patent Term Restoration Act of 1984. Commonly known as the Hatch-Waxman Act in honor of its co-sponsors, Republican senator Orrin Hatch of Utah and Democratic congressman Henry Waxman of California, this law was designed with dual objectives. The first was to simplify the regulatory review and approval process for generic products after patent expiration of an original brand-name drug, thereby creating more price competition in the pharmaceutical marketplace. The second was to incentivize drug innovation by restoring some of the patent life on an original brand-name product that was lost during the long development and approval process. Prior to this law, generic drugs in the United States had been subject to the same rigorous testing protocols as the innovator drug, and the formidable cost and long duration of trials deterred many firms from entering the generics business. Under the law, a generics manufacturer merely has to verify that its product is both chemically equivalent and bioequivalent to the branded drug in an Abbreviated New Drug Application. Thanks to this legislation, the regulatory process for generics has become considerably shorter and less costly. The law also provides relief to brand-name drugs by extending the patent term by one-half of the time elapsed from IND filing to NDA filing, up to a maximum of five years. With the passage of the Drug Price Competition and Patent Term Restoration Act in 1984, many firms began developing generic drugs, and the generics industry in the United States started its rapid ascent. According to the Generic Pharmaceutical Association (GPhA 2015), 88 percent of prescriptions dispensed in the United States in 2014 were for generics, yet they accounted for only 28 percent of total drug spending.
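The restoration rule described above (half of the IND-to-NDA interval, capped at five years) can be sketched as follows; the drug and its filing dates are hypothetical, and the statute's further limits (for example, on total remaining patent life after approval) are ignored in this simplified illustration:

```python
from datetime import date

CAP_DAYS = 5 * 365  # statutory five-year cap (leap days ignored for simplicity)

def patent_term_restoration_years(ind_filing: date, nda_filing: date) -> float:
    """Sketch of the Hatch-Waxman restoration rule described in the text:
    one-half of the IND-to-NDA testing period, capped at five years."""
    testing_days = (nda_filing - ind_filing).days
    return min(testing_days / 2, CAP_DAYS) / 365

# Hypothetical drug: IND filed 1990-01-01, NDA filed 1998-01-01 (8-year interval)
print(round(patent_term_restoration_years(date(1990, 1, 1), date(1998, 1, 1)), 1))  # 4.0
```

A longer testing period simply hits the five-year cap: a hypothetical fifteen-year IND-to-NDA interval would restore exactly five years.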

(p.338) Prescription Drug User Fee Act (1992). The Prescription Drug User Fee Act was approved by Congress in 1992 (FDA 2009). This initiative was designed to let the agency increase the resources devoted to approval review in order to shorten regulatory review times. The two main provisions of PDUFA are a system of per-application user fees that fund increases in the reviewer staff at CDER and an incentive structure whereby the legislation is renewed only if the FDA meets specified performance goals (Carpenter 2004; Mullin 2015). When a company seeks FDA approval for a new drug or biologic prior to marketing, it must submit the NDA along with a fee to support the review process. In addition, companies pay annual fees for each manufacturing establishment and for each prescription drug product marketed. Under PDUFA, industry provides the funding in exchange for an FDA agreement to meet drug-review performance goals that emphasize timeliness. Since 1992, PDUFA has been reauthorized four times, most recently in 2012, and the next reauthorization is scheduled for 2017 (Kronquist 2011; FDA 2015a). Fees began at $100,000 for each NDA or PLA/ELA. By 2016, the fee schedule had risen to $2,374,200 for an application requiring clinical data, $1,187,100 for an application not requiring clinical data or a supplement requiring clinical data, and $585,200 for each manufacturing establishment (HHS 2015). From a drug company’s perspective, the fee may very well be worth paying (Loftus 2015), as the review and approval time for NMEs has fallen continually and significantly since the enactment of PDUFA (Figure 12.1).


Figure 12.1 Mean NME Approval Time by the FDA (Months)

Sources: Kronquist 2011 (mean review times for PDUFA I to PDUFA IV); Mullin 2015 (PDUFA V, median review time for standard NMEs for the period from Q1/2013 to Q4/2014; the corresponding figure for priority NMEs was 8 months).

The FDA Modernization Act (1997). This is another major piece of legislation reforming the regulation of food and drug products, including medical devices (p.339) (FDA 2011b). Among its most notable provisions: it allows a drug or device manufacturer to disseminate peer-reviewed journal articles on an off-label use of its product, provided the company has committed to filing an NDA for that indication, and it allows drug firms to provide economic information about their products, such as cost offsets and cost-effectiveness and cost-benefit analyses, to formulary committees and managed care organizations.

Biologics Price Competition and Innovation Act (2009). The Biologics Price Competition and Innovation Act of 2009 was part of the Patient Protection and Affordable Care Act signed into law by President Obama in 2010 (FDA 2016d). Modeled after the Hatch-Waxman Act of 1984, the BPCI Act establishes an abbreviated approval process for biosimilars, which are biological products shown to be highly similar to an FDA-approved innovator biologic drug. The law also awards an originator biologic drug up to twelve years of market exclusivity from the approval date, regardless of the patent status. Since the law’s enactment, the FDA has outlined requirements for biosimilar manufacturers to demonstrate similarity and interchangeability with the original reference product. Unlike generics, which are essentially exact replicas of the original branded drug, biosimilars are made in living organisms, and despite being “highly similar,” they are not expected to be an exact copy of the original product. Consequently, to demonstrate interchangeability, the sponsor for a biosimilar drug will likely need to conduct clinical studies to compare the biosimilar’s therapeutic benefits and risks against the original drug, in addition to providing data from pharmaceutical equivalence and bioequivalence testing, which do not require clinical trials. In contrast, interchangeability between generics and the original reference drug rarely requires data from clinical studies (Amgen Biosimilars 2016).

Food and Drug Administration Safety and Innovation Act of 2012. In addition to reauthorizing PDUFA for the next five years, this act establishes user fees for medical device products, generics, and biosimilar products. It provides further incentives to manufacturers (granting a market exclusivity period for a pediatric indication) if they conduct clinical studies in pediatric patients, an area of drug R&D that has often experienced underinvestment. It also provides additional authority to the FDA to manage drug shortages and ensure safety of the drug supply chain worldwide (FDA 2013b).

Annual NME Approvals by the FDA Since 2000

Critics, particularly in the pharmaceutical industry but sometimes also patients facing terminal conditions and the physicians who treat them, have long charged that the FDA regulatory process is unduly stringent and excessively (p.340) lengthy, contributing to the rapidly escalating costs of drug R&D, declining approval rates, and slower diffusion of important breakthrough medicines (DiMasi, Seibring, and Lasagna 1994; Lesney 2000; Wall Street Journal 2016). These critics frequently argue that for rapidly progressing terminal conditions, the risk of inadvertently approving a “bad” drug matters less, and that the weighing of risks should favor access far more than it generally does. They often present data showing a “drug lag” in the United States: new drugs are introduced into the US market later than in the rest of the industrialized world (Burstall 1991; Andersson 1992a, 1992b; Reichert and Healy 2001).


Figure 12.2 Annual NME Approvals by the FDA, 2000–2015

Source: FDA 2016e.

However, the concern over declining drug approvals in the United States may be overstated, especially if we focus on NMEs, which are novel compounds never before introduced in the US market. Figure 12.2 shows that although the number of NMEs approved dropped between 2004 and 2010, particularly for those with a priority rating (P), the trend reversed beginning in 2011, and in 2015 the agency approved the second-highest number of NMEs on record (forty-five), including the highest number of P-rated NMEs (thirty). Perhaps more significant, as shown in Figure 12.3, the rate of first-action approval (approval based on the first complete submission of an NDA) for NMEs hit an all-time high of 95 percent in 2015, after never exceeding 60 percent before 2011 (Jenkins 2015; FDA 2016e). Furthermore, between 2000 and 2008, the median approval time for P-rated NMEs was only six months, compared to more than twelve months for standard (novel but non-breakthrough; S-rated) compounds. Both figures were significantly lower than in the 1990s, (p.341) when approval times were about 8 months for priority drugs and 14.6 months for standard drugs (FDA 2014).


Figure 12.3 Annual NME First Action Approval Rates, 2005–2015

Source: Jenkins 2015.

In addition to assigning a standard or priority review status to all NMEs, the FDA has several other avenues to accelerate approval of truly innovative products. The “Accelerated Approval” designation is used for drugs that target serious or life-threatening diseases for which no therapy currently exists. It is often invoked for investigational cancer drugs, which can frequently be approved on the basis of surrogate endpoints. A surrogate endpoint is not itself a measure of clinical benefit but is highly correlated with a definitive clinical endpoint. Clinical trials that use surrogate endpoints as proxies for clinical benefit are usually much shorter and require fewer subjects for statistical power than trials that rely on conclusive clinical endpoints (such as survival). The FDA can also grant “Fast Track” designation to products that show potential to address unmet need in serious illnesses. This designation allows the product’s manufacturer to seek regular input and agreement from the agency early in the clinical development program in order to conduct efficient, focused studies on the way to the eventual NDA submission. Lastly, the “Breakthrough Therapy” designation is given to investigational compounds that demonstrate the potential for substantial improvement over current therapy. Like “Fast Track,” this status entitles the manufacturer to intensive guidance from the FDA during the clinical development stage. It is important to note that an innovative drug may receive more than one designation during the development and approval process (FDA 2015b).

(p.342) The issue of drug approval lag between the United States and other industrialized economies will be examined in more detail in Chapter 13.

Regulatory Approval of Generic Drugs in the United States

The Hatch-Waxman Act greatly simplifies the approval process for generic drugs, allowing firms to seek approval through an ANDA, in which the generic manufacturer merely has to verify that the generic product is pharmaceutically equivalent and bioequivalent to the brand-name drug. To understand these terms, it is important to distinguish the three standards of equivalence between drugs: pharmaceutical equivalence, bioequivalence, and therapeutic equivalence. According to the FDA “Orange Book,” Approved Drug Products with Therapeutic Equivalence Evaluations, pharmaceutical equivalence means that the generic drug meets four criteria:

  • The generic product contains the same active ingredients as the innovator drug (inactive ingredients may vary).

  • It must be identical in strength, dosage form, and route of administration.

  • It must be labeled the same, listing the same use indications.

  • It must meet the same batch requirements for identity, strength, purity, and quality and must be manufactured under the same strict standards of FDA’s Good Manufacturing Practice (GMP) regulations required for innovator products.

Bioequivalence imposes an additional criterion: the drug must be tested on a number of healthy subjects to verify that the active ingredient is absorbed into the bloodstream at the same rate as the brand-name product. Under normal conditions, bioequivalence is the appropriate criterion of equivalence for patients. Therapeutic equivalence is more stringent still, as patients may be tested, along the lines of a clinical trial, to demonstrate that the clinical effect of the generic drug is identical to that of the innovator drug. The reason therapeutic equivalence is more stringent than bioequivalence is that the presence of inert substances in many medicines (the so-called binders and fillers in capsules and tablets) means that, for some patients, the clinical reaction to a generic drug may differ somewhat from that to the original drug. These differences are normally small and clinically insignificant; nevertheless, the FDA maintains a list of drugs that are “Therapeutically Equivalent,” which is particularly important for drugs with a “Narrow Therapeutic Index,” meaning that the body is highly sensitive to slight differences in the concentration of active ingredient (FDA 2016g).
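The first two Orange Book criteria lend themselves to a simple illustrative check. The sketch below uses hypothetical field names, compares only active ingredients, strength, dosage form, and route, and leaves the labeling and GMP criteria (and bioequivalence testing itself) outside the comparison; the real determination is a regulatory review, not a field-by-field match:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DrugProduct:
    # Hypothetical, simplified representation of a drug product
    active_ingredients: frozenset  # inactive ingredients may vary, so they are omitted
    strength: str
    dosage_form: str
    route: str

def pharmaceutically_equivalent(generic: DrugProduct, brand: DrugProduct) -> bool:
    """Same active ingredients, strength, dosage form, and route of administration."""
    return (generic.active_ingredients == brand.active_ingredients
            and generic.strength == brand.strength
            and generic.dosage_form == brand.dosage_form
            and generic.route == brand.route)

brand = DrugProduct(frozenset({"atorvastatin"}), "20 mg", "tablet", "oral")
generic = DrugProduct(frozenset({"atorvastatin"}), "20 mg", "tablet", "oral")
print(pharmaceutically_equivalent(generic, brand))  # True
```

Bioequivalence and therapeutic equivalence would then be additional, progressively stricter tests layered on top of this check, as the text describes.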

(p.343) The FDA has gone to great lengths to assure physicians that there is no need to conduct further tests of equivalence beyond the aforementioned if they are considering switching patients from a brand-name product to the generic product, from a generic product to the brand-name product, or from one generic product to another. In the FDA’s letter to physicians dated January 28, 1998, the agency went even further to assure practitioners of the equivalence of generic drugs:

In addition to tests performed prior to market entry, FDA regularly assesses the quality of products in the marketplace and thoroughly researches and evaluates reports of alleged drug product inequivalence. To date, there are no documented examples of a generic product manufactured to meet its approved specifications that could not be used interchangeably with the corresponding brand-name drug. Questions have been raised in the past, as well, regarding brand name and generic products about which there could be concern that quality failures might represent a public safety hazard. FDA has performed post-marketing testing on many of these drugs to assess their quality. In one instance, more than 400 samples of 24 marketed brand-name and generic drug products were tested and found to meet the established standards of purity and quality. Because patients may pay closer attention to their symptoms when the substitution of one drug product for another occurs, an increase in symptoms may be reported at that time, and anecdotal reports of decreased efficacy or increased toxicity may result. Upon investigation by FDA, no problems attributed to substitution of one approved drug product for another has occurred. (FDA 1998)

The substitution of an FDA-approved generic product for a brand-name drug is allowed in all states in the United States, and pharmacists can make the substitution without getting permission from the physician, unless the prescription order specifically states “dispense as written.”

Regulatory Approval of Biosimilars in the United States

Biosimilars are therapeutic biologics that are highly similar to the original brand-name biological drug (often called the “reference product”) in structure, biological profile, purity, potency, safety, and efficacy. Many people regard biosimilars as standing in the same relation to the reference biological product as generic drugs do to the reference small-molecule chemical product, but there are some important differences between a biosimilar drug and a generic drug. Unlike generics, (p.344) which are essentially exact replicas of the original drug, biosimilars are made in living organisms through very complex biotechnology processes. As pointed out by Amgen, one of the world’s biotechnology pioneers and the largest biotechnology company today:

The complex manufacturing process requires careful design of controls, precise measurements and strict adherence to protocol, as any changes can potentially influence the quality of the final product, including the structure, function and purity of the active ingredient. . . . Since the biological production processes are proprietary to each manufacturer, it is impossible for another manufacturer to precisely replicate the manufacturing process of the original biologic or the active ingredient of the protein product. (Amgen Biosimilars 2016)

Therefore, despite being “highly similar” to the original product, biosimilars are not the same as the original. Thus, the comparison between generic and brand-name products in the case of traditional pharmaceutical (small-molecule) products is quite different from the case of biosimilar (large-molecule) products and their original brand-name product.

The BPCI Act, modeled after the Hatch-Waxman Act, established an abbreviated approval process for biological products that are shown to be highly similar (biosimilar) to an FDA-approved innovator biologic drug, thereby saving some of the time and resources it would take for biosimilar products to reach the market once the originator’s patent expires (FDA 2016d). According to the FDA “Purple Book” (FDA 2016h), “A biological product may be demonstrated to be ‘biosimilar’ if data show that the product is ‘highly similar’ to the reference product notwithstanding minor differences in clinically inactive components and there are no clinically meaningful differences between the biological product and the reference product in terms of safety, purity and potency.” Data from analytical studies, animal studies, and one or more clinical studies, including assessment of immunogenicity and pharmacokinetics or pharmacodynamics, must be obtained to show biosimilarity. Additionally, the following conditions must be met (Biosimilars Council 2015):

  • The biosimilar product and reference product have the same mechanism of action.

  • The indications for use for the biosimilar product must be among the approved indications of the reference product.

  • (p.345) They must have the same route of administration, dosage form, and strength.

  • The biosimilar product is manufactured according to GMP standards.

Once a generic product has been shown to be pharmaceutically equivalent and bioequivalent to the reference product, it will be considered as substitutable for and interchangeable with the reference product, or any other generic drug for the same reference product (FDA 2016g). However, as biosimilars are not identical to the reference product, the threshold for substitutability or interchangeability between a biosimilar and the reference product, or among biosimilars, is significantly more difficult to satisfy than for generics. According to the FDA “Purple Book,” “an ‘interchangeable’ biological product is a product that has been shown to be biosimilar to the reference product, and can be expected to produce the same clinical result as the reference product in any given patient. In addition, to be determined as an interchangeable biological product, it must be shown that for a biological product that is administered more than once to an individual the risk in terms of safety or diminished efficacy of alternating or switching between use of the biological product and the reference product is not greater than the risk of using the reference product without such alternation or switch.” In other words, a biosimilar drug is not automatically substitutable for and interchangeable with the original drug until sufficient clinical study data can prove it.

Both of the first two biosimilar drugs approved in the United States, Zarxio (by Sandoz) and Inflectra (by Celltrion), are classified by the FDA as “biosimilar,” but neither is considered “interchangeable” with its reference product, Neupogen (filgrastim; Amgen) and Remicade (infliximab; J&J), respectively (FDA 2016h). To date, the FDA has issued five guidance documents on regulatory pathways for biosimilar and interchangeable biosimilar products, but none is considered final (Amgen 2015), so this remains very much an evolving field. Interestingly, the European Union draws no distinction between biosimilarity and interchangeability for approved biosimilars, even though its experience with biosimilar drugs is substantially longer. We will return to this topic in more detail in Chapter 13.

(p.346) References


Amgen. 2015. 2015 Trends in Biosimilars Report. Thousand Oaks, CA: Amgen.

Amgen Biosimilars. 2016. “Biosimilars Versus Generics.” http://www.amgenbiosimilars.com/the-basics/biosimilars-versus-generics. Accessed July 2016.

Andersson, F. 1992a. “The Drug Lag Issue: The Debate Seen from an International Perspective.” International Journal of Health Services 22: 53–72.

Andersson, F. 1992b. “The International Diffusion of New Drugs: A Comparative Study of Seven Industrialized Countries.” Journal of Research in Pharmaceutical Economics 4, no. 2: 43–62.

Ballentine, C. 1981. “Sulfanilamide Disaster: Taste of Raspberries, Taste of Death, the 1937 Elixir Sulfanilamide Incident.” FDA Consumer Magazine, June.

Biosimilars Council. 2015. The Next Frontier for Improved Access to Medicines: Biosimilars and Interchangeable Biologic Products. Washington, DC: Biosimilars Council, Generic Pharmaceutical Association.

Burstall, M. L. 1991. “Europe After 1992: Implications for Pharmaceuticals.” Health Affairs, Fall, 157–171.

Carpenter, D. P. 2004. “The Political Economy of FDA Drug Review: Processing, Politics, and Lessons for Policy.” Health Affairs 23: 52–63.

DiMasi, J. A., and H. G. Grabowski. 2007. “The Cost of Biopharmaceutical R&D: Is Biotech Different?” Managerial and Decision Economics 28: 469–479.

DiMasi, J. A., H. G. Grabowski, and R. W. Hansen. 2016. “Innovation in the Pharmaceutical Industry: New Estimates of Development Costs.” Journal of Health Economics 47: 20–33.

DiMasi, J. A., R. W. Hansen, and H. G. Grabowski. 2003. “The Price of Innovation: New Estimates of Drug Development Costs.” Journal of Health Economics 22: 151–185.

DiMasi, J. A., F. M. Seibring, and L. Lasagna. 1994. “New Drug Development in the United States from 1963 to 1992.” Clinical Pharmacology and Therapeutics 55: 15.

Faust, R. 1971. “Project Selection in the Pharmaceutical Industry.” Research Management 14: 46–55.

FDA (Food and Drug Administration). 1995. Benefit Vs. Risk: How CDER Approves New Drugs. FDA Consumer Special Report on New Drug Development in the United States. Washington, DC: Government Printing Office, January.

FDA (Food and Drug Administration). 1998. “Therapeutic Equivalence of Generic Drugs: Letter to Health Practitioners.” January 28. http://www.fda.gov/Drugs/DevelopmentApprovalProcess/HowDrugsareDevelopedandApproved/ApprovalApplications/AbbreviatedNewDrugApplicationANDAGenerics/ucm073182.htm. Accessed July 2016.

FDA. 2009. “Prescription Drug User Fees—Overview.” http://www.fda.gov/ForIndustry/UserFees/PrescriptionDrugUserFee/ucm118833.htm. Accessed July 2016.

FDA. 2011a. “FDA Commissioner announces Avastin decision.” FDA News Release. Nov. 18.

FDA. 2011b. “The FDA Modernization Act of 1997.” http://www.fda.gov/RegulatoryInformation/Legislation/SignificantAmendmentstotheFDCAct/FDAMA/ucm089179.htm. Accessed July 2016.

FDA. 2012. “Kefauver-Harris Amendments Revolutionized Drug Development.” October.

FDA. 2013a. “Regulatory Information: Orphan Drug Acts—Excerpts.” http://www.fda.gov/regulatoryinformation/legislation/significantamendmentstothefdcact/orphandrugact/default.htm. Accessed July 2016.

(p.347) FDA. 2013b. “Background on FDA Safety and Innovation Act.” http://www.fda.gov/RegulatoryInformation/Legislation/SignificantAmendmentstotheFDCAct/FDASIA/ucm358951.htm. Accessed July 2016.

FDA. 2014. “CDER Approval Times for Priority and Standard NMEs and New BLAs, CY 1993–2008.” http://www.fda.gov/downloads/Drugs/DevelopmentApprovalProcess/HowDrugsareDevelopedandApproved/DrugandBiologicApprovalReports/UCM123957.pdf. Accessed July 2016.

FDA. 2015a. “PDUFA V: Fiscal Years 2013–2017.” http://www.fda.gov/ForIndustry/UserFees/PrescriptionDrugUserFee/ucm272170.htm. Accessed July 2016.

FDA. 2015b. “Fast Track, Breakthrough Therapy, Accelerated Approval, Priority Review.” http://www.fda.gov/forpatients/approvals/fast/ucm20041766.htm. Accessed July 2016.

FDA. 2016a. “Post-Marketing Surveillance Programs.” http://www.fda.gov/Safety/MedWatch. Accessed July 2016.

FDA. 2016b. “MedWatch: The FDA Safety Information and Adverse Event Reporting Program.” http://www.fda.gov/Safety/MedWatch. Accessed July 2016.

FDA. 2016c. “Accelerated Approval Program.” http://www.fda.gov/Drugs/ResourcesForYou/HealthProfessionals/ucm313768.htm. Accessed July 2016.

FDA. 2016d. “Implementation of the Biologics Price Competition and Innovation Act of 2009.” http://www.fda.gov/Drugs/GuidanceComplianceRegulatoryInformation/ucm215089.htm. Accessed July 2016.

FDA. 2016e. “New Molecular Entity (NME) Drug and New Biologic Approvals.” https://www.fda.gov/Drugs/DevelopmentApprovalProcess/HowDrugsareDevelopedandApproved/DrugandBiologicApprovalReports/NDAandBLAApprovalReports/ucm373413.htm. Accessed July 2016.

FDA. 2016f. “Drug Recalls.” http://www.fda.gov/drugs/drugsafety/DrugRecalls. Accessed July 2016.

FDA. 2016g. “Orange Book Preface, 36th Edition.” http://www.fda.gov/Drugs/DevelopmentApprovalProcess/ucm079068.htm. Accessed July 2016.

FDA. 2016h. “Purple Book: Lists of Licensed Biological Products with Reference Product Exclusivity and Biosimilarity or Interchangeability Evaluations.” http://www.fda.gov/drugs/developmentapprovalprocess/howdrugsaredevelopedandapproved/approvalapplications/therapeuticbiologicapplications/biosimilars/ucm411418.htm. Accessed July 2016.

FRB (Federal Reserve Bank of St. Louis). 2016. “Economic Data: Gross Domestic Product/Implicit Price Deflator.” https://fred.stlouisfed.org/series/GDPDEF. Accessed July 2016.

GPhA (Generic Pharmaceutical Association). 2015. Generic Drug Savings in the US, 7th Annual Edition: 2015. Washington, DC: GPhA.

Hanauer, S. B. 2007. “Where Do Our Priorities Lie?” Nature Clinical Practice Nephrology 3, no. 9: 463.

Hay, M., D. W. Thomas, J. L. Craighead, et al. 2014. “Clinical Development Success Rates for Investigational Drugs.” Nature Biotechnology 32, no. 1: 40–51.

HHS (Department of Health and Human Services). 2003. “Agency Information Collection Activities: Proposed Collection; Comment Request; Guidance for Industry on Submitting and Reviewing Complete Responses to Clinical Holds.” Federal Register 68, no. 76: 19545–19546, April 21.

HHS. 2015. “Food and Drug Administration: Prescription Drug User Fee Rates for Fiscal Year 2016.” Federal Register 80, no. 148: 46028–46032, August 3.

Hussain-Gambles, M. 2003. “Ethnic Minority Under-Representation in Clinical Trials: Whose Responsibility Is It Anyway?” Journal of Health Organization and Management 17, no. 2: 138–143.

Institute of Medicine. 2007. Challenges for the FDA: The Future of Drug Safety, Workshop Summary. Washington, DC: National Academies Press.

Jenkins, J. K. 2015. “CDER New Drug Review: 2015 Update.” FDA Presentation, December 14. http://www.fda.gov/downloads/AboutFDA/CentersOffices/OfficeofMedicalProductsandTobacco/CDER/UCM477020.pdf. Accessed July 2016.

Kronquist, A. R. 2011. “The Prescription Drug User Fee Act: History and Reauthorization Issues for 2012.” Backgrounder, December 21. Heritage Foundation, Washington, DC.

Lesney, M. 2000. “What About the FDA?” Modern Drug Discovery 3, no. 8: 29–33.

Loftus, P. 2015. “Drug Makers Buy Pricey Vouchers to Speed Products to Market.” Wall Street Journal, November 1.

Marroum, P. J., and J. Gobburu. 2002. “The Product Label: How Pharmacokinetics and Pharmacodynamics Reach the Prescriber.” Clinical Pharmacokinetics 41, no. 3: 161–169.

McClellan, M. 2007. “Drug Safety Reform at the FDA—Pendulum Swing or Systematic Improvement?” New England Journal of Medicine 356: 1700–1702.

Meinert, C. L. 1986. Clinical Trials: Design, Conduct, and Analysis. New York: Oxford University Press.

Moore, T. J., and C. D. Furberg. 2014. “Development Timelines, Clinical Testing, Postmarket Follow-up, and Safety Risks for the New Drugs Approved by the US FDA: The Class of 2008.” JAMA Internal Medicine 174, no. 1: 90–95.

Mullin, T. 2015. “PDUFA Background and Reauthorization Process.” FDA Presentation, July 15. http://www.fda.gov/downloads/ForIndustry/UserFees/PrescriptionDrugUserFee/UCM455134.pdf. Accessed July 2016.

New York Times. 2004. “Looking for Adverse Drug Effects.” [Editorial.] November 27.

OTA (Office of Technology Assessment, US Congress). 1993. Pharmaceutical R&D: Costs, Risks, and Rewards. Washington, DC: Government Printing Office.

PhRMA (Pharmaceutical Research and Manufacturers of America). 2015. 2015 Profile: Biopharmaceutical Research Industry. Washington, DC: PhRMA.

Randall, T. 1990. “Thalidomide Has a 37 Year History.” Journal of the American Medical Association 263, no. 11: 1474.

Reekie, W. D. 1978. “Price and Quality Competition in the United States Drug Industry.” Journal of Industrial Economics 26: 223–237.

Reichert, J., and E. Healy. 2001. “Biopharmaceuticals Approved in the EU 1995–1999: A European Union–United States Comparison.” European Journal of Pharmaceutics and Biopharmaceutics 51: 1–7.

Sharav, V. H. 2003. “Children in Clinical Research: A Conflict of Moral Values.” American Journal of Bioethics 3, no. 1: InFocus.

Temple, R. 2002. “Policy Developments in Regulatory Approval.” Statistics in Medicine 21: 2939–2948.

Wall Street Journal. 2016. “Where’s the Drug, FDA?” [Editorial.] June 30.

Wiggins, S. N. 1981. “Product Quality Regulation and New Drug Introductions: Some New Evidence from the 1970s.” Review of Economics and Statistics 63, no. 4: 615–619.

Young, J. H. 1989. Pure Food: Securing the Federal Food and Drugs Act of 1906. Princeton, NJ: Princeton University Press.

Notes:

(1.) For cancer drugs, Phase I trials involve cancer patients, not healthy individuals.

(2.) By specifying marketing exclusivity within a therapeutic class, the FDA extends a company’s marketing rights beyond the molecule itself (which is normally protected by patents in any case) to other, similar drugs with different molecules not covered by the firm’s patents, the so-called me-too drugs.