The Effect of Government Policy on Pharmaceutical Drug Innovation

Abstract and Keywords

Drug companies are profit-maximizing entities, and profit is, by definition, revenue less cost. Here we review the impact of government policies that affect sales revenues earned on newly developed drugs and the impact of policies that affect the cost of drug development. The former policies include intellectual property rights, drug price controls, and the extension of public drug coverage to previously underinsured groups. The latter policies include regulations governing drug safety and efficacy, R&D tax credits, publicly funded basic research, and public funding for open drug discovery consortia.

The last of these policies, public funding of research consortia that seek to better understand the cellular pathways through which new drugs can ameliorate disease, appears very promising. In particular, a better understanding of human pathophysiology may help address the high failure rate of drugs undergoing clinical testing. Policies that expand market size by extending drug insurance to previously underinsured groups also appear to be effective at increasing drug R&D. Expansions of pharmaceutical intellectual property rights seem to be less effective, given the countervailing monopsony power of large public drug plans.

Keywords: pharmaceutical R&D, innovation, cost, patents, drug price controls, health technology assessment, open science biomedical R&D, public drug plans, intellectual property, regulation of drug safety and efficacy, health economics

Introduction

In this article we review the literature on the impact of government policy on pharmaceutical and biopharmaceutical (hereafter “drug”) research and development (R&D) undertaken by drug companies. New drugs developed by industry over the last 60 years have been an important source of population health gains (Cutler, 2005; Lichtenberg, 2005a). But what determines private sector drug R&D? Drug companies are profit-maximizing entities, and profit is, by definition, revenue minus cost. Thus, the greater the expected revenues earned from newly developed drugs and the lower the costs of developing and producing new drugs and getting these drugs prescribed by physicians, the more attractive drug R&D is to drug companies.

R&D costs include outlays on basic research and design of drugs that are developed in-house, the costs of obtaining rights to promising drug candidates developed by other firms, the costs of preclinical testing and clinical trials, and the costs of obtaining regulatory approval. R&D costs per approved drug include the cost of “dry wells”—the drugs that are abandoned during the testing process. For instance, in 2006 Pfizer halted development of its lipid-modulating agent torcetrapib after a Phase III clinical trial revealed that the drug had fatal side effects. Pfizer had already spent $800 million when it abandoned this drug candidate (NewsFeature, 2011). Unfortunately, failures are not uncommon. DiMasi, Grabowski, and Hansen (2016) examined the failure rate of 106 randomly selected new drugs developed in-house by 10 large US drug companies. They found that only 12% of drugs that were tested in humans over the period 1995 to 2007 were eventually approved by the regulator, the US Food and Drug Administration (FDA). Wong and colleagues, analyzing data on trials for over 21,000 compounds tested between January 2000 and October 2015, found that only 13.8% of all drug development programs eventually led to approval (Wong, Siah, & Lo, 2019).

Cash outlays on drug R&D constitute only about one half of total R&D costs; financing costs constitute the other half (DiMasi et al., 2016). Established drug companies have historically financed R&D using retained earnings. Under the capital asset pricing model, the cost of equity capital equals the risk-free rate plus the firm’s beta multiplied by the equity risk premium for the stock market; because the beta for the pharmaceutical industry is close to one (Damodaran, 2017), the industry’s cost of equity is roughly the risk-free rate plus the market equity risk premium.
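
As a worked illustration of this financing cost (the CAPM relationship itself is standard, but the numerical values below are assumptions for illustration, not figures from the article):

```latex
% Cost of equity under the capital asset pricing model (CAPM):
%   r_E = cost of equity, r_f = risk-free rate,
%   E[r_m] - r_f = equity risk premium, \beta = industry beta
\[
  r_E \;=\; r_f + \beta\,\bigl(E[r_m] - r_f\bigr)
\]
% With \beta close to 1 for the pharmaceutical industry (Damodaran, 2017),
% and assumed values r_f = 3% and an equity risk premium of 5%:
\[
  r_E \;\approx\; 0.03 + 1 \times 0.05 \;=\; 0.08 \quad \text{(about 8\% per year)}
\]
```

Discounting a decade or more of pre-approval outlays at a rate of this magnitude is what makes capitalized R&D costs roughly double cash outlays, consistent with the DiMasi et al. (2016) split noted above.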

Once a drug is approved, firms face additional R&D costs. These may include post-approval research requested by the FDA or additional trials and analyses required to secure reimbursement. DiMasi and colleagues (2016) estimated these costs at $312 million after discounting them to the time of approval. Also, some approved drugs are withdrawn from the market due to safety concerns; in that event, the drug company faces the costs of defending and settling lawsuits. The drug company Merck paid $4.85 billion to settle US claims for damages incurred from the use of its drug Vioxx; it paid another $1.2 billion in legal fees (Berenson, 2007; Wizemann, Robinson, & Giffin, 2009). Finally, the drug company incurs the costs of manufacturing and marketing the drug.

Drug companies will naturally focus their R&D in areas where expected profits are highest. Realized profits, in turn, affect future drug R&D. More specifically, retained earnings—gross profits less dividend payments to shareholders—are the primary source of funds used to finance R&D projects. (More recently, companies have issued more debt given its low cost. Debt, however, remains a small share of capital.) Thus, profits from R&D affect future outlays on R&D, a finding that has been confirmed in various empirical studies. Using a panel of 14 international firms over four years, Vernon (2005) posited a regression model of firm–year–specific pharmaceutical R&D intensity. In this model, R&D intensity depends on firm–year–specific expected returns and lagged retained earnings. Expected returns were measured using current drug profit margins. The estimates indicate that both lagged retained earnings and expected returns are important determinants of firm R&D intensity. Lagged retained earnings, however, exerted the larger effect. These estimates were robust to a variety of different model specifications and were consistent with earlier work on the issue by Grabowski and Vernon (2000). The importance of retained earnings in the drug industry was also highlighted by Scherer (2001) in his analysis of the impact of gross profits on R&D in the US pharmaceutical industry. Scherer found that year-over-year changes in the pharmaceutical industry’s gross profits are highly predictive of changes in drug R&D expenditure. Similar findings have been observed for drug companies located outside the United States (Lee & Choi, 2015; Malmberg, 2008).
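
A stylized version of the functional form Vernon (2005) describes might be written as follows (our notation and a simplification, not his exact specification):

```latex
% R&D intensity of firm i in year t as a function of expected returns
% (proxied by current drug profit margins) and lagged retained earnings
\[
  \left(\frac{RD}{Sales}\right)_{it}
  \;=\; \beta_0 + \beta_1\,\mathit{Margin}_{it}
        + \beta_2\,\mathit{RetEarn}_{i,t-1} + \varepsilon_{it},
  \qquad \beta_1,\ \beta_2 > 0
\]
% Vernon's estimates imply that \beta_2 (lagged retained earnings)
% exerts the larger effect.
```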

Government policy has important effects on the profitability of drug R&D; it affects both the sales revenues accruing from R&D and the costs of undertaking it. For instance, governments extend intellectual property rights (IPR) to drug companies, allowing them to be the exclusive seller of a new drug for a period of time. This gives the drug company the opportunity to sell more units, and at a higher price than would be possible without the IPR. Governments also subsidize the cost of prescription drugs for seniors and the indigent; this increases their market demand and thus drug company sales revenues. At the same time, these public drug plans often require a substantial price discount as a condition for listing the drug. Governments provide tax subsidies for private R&D; this reduces drug companies’ costs. At the same time, governments impose minimum standards on the quality of the evidence required for a new drug to be approved; this increases costs (assuming that the standards constraint is binding). Governments also support basic research conducted in the academic sector; this research is an important complement to the applied research undertaken by drug companies (Cockburn & Henderson, 2001).

In the remainder of this article we survey the literature on how government policy affects drug R&D, either by affecting the revenues earned on new drugs or by affecting private R&D costs. One aspect of government policy that we do not cover in depth is government regulation of the quality of the evidence required for a new drug to be approved. Nor do we consider the effect of tort law and thus drug liability litigation on drug innovation. Both of these topics have been addressed in a review by Malani and Philipson (2012). Finally, we do not consider drug R&D undertaken outside of the for-profit pharmaceutical industry, including drug development and testing conducted by non-profit and academic ventures. Although the drug R&D undertaken by these organizations is growing (Moos & Mirsalis, 2009), they nonetheless account for a tiny share of new drug approvals.

Government Policy That Affects Revenues Earned on the Sale of New Drugs

Policies That Affect Drug Prices

Sood, de Vries, Gutierrez, Lakdawalla, and Goldman (2009) find that international reference pricing and other drug price controls are ubiquitous among the public drug plans operating in developed countries, and these price controls have substantial effects on drug company revenues. While drug plans and consumers benefit from lower prices in the short term, price controls would be expected to reduce R&D. Several empirical studies, reviewed here, confirm this.

Giaccotto, Santerre, and Vernon (2005) estimated a regression of global drug industry R&D intensity (the ratio of global drug R&D to sales) over the 50-year period from 1952 to 2001 as a function of an index of real US drug prices and a variety of control variables. (Although the focus was global R&D, the US market is the most important market globally.) The authors estimate that a 10% increase in the growth of real US drug prices is associated with nearly a 6% increase in the growth of global R&D intensity. Golec, Hegde, and Vernon (2010) found that the mere threat of drug price regulation can affect subsequent drug R&D spending. Their study used the Clinton administration’s Health Security Act (HSA) of 1993 as a natural experiment. The proposed legislation included drug price controls, and while the legislation was being debated, the US drug industry curtailed R&D projects in anticipation of its passage. The authors found that the HSA reduced R&D spending by about $1 billion even though it never became law.
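
In growth-rate (log-difference) form, the Giaccotto et al. (2005) estimate corresponds to a specification along these lines (a sketch in our notation, not the authors' exact model):

```latex
% Growth of global R&D intensity regressed on growth of real US drug prices
% P_t = real US drug price index, X_t = control variables
\[
  \Delta \ln\!\left(\frac{RD}{Sales}\right)_{t}
  \;=\; \alpha + \eta\,\Delta \ln P_t + \gamma' X_t + \varepsilon_t,
  \qquad \hat{\eta} \approx 0.6
\]
% An elasticity of about 0.6: a 10% rise in price growth maps to
% roughly a 6% rise in the growth of R&D intensity.
```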

Policies That Affect Unit Sales Volumes

Several studies have examined the effect of increases in unit sales on drug R&D. To do so, some have used the introduction of government drug subsidy programs as a type of natural experiment. Blume-Kohout and Sood (2013) focused on the 2006 extension of drug subsidies to US seniors via the Medicare Part D program. The authors note that as of 2010, approximately 28 million Medicare beneficiaries were enrolled in Part D plans. In a separate study, Duggan and Scott Morton (2010) found that Medicare Part D increased prescription drug use and drug company revenues, despite the price decreases negotiated by private insurers. For a therapeutic class with the average Medicare market share in their sample (42%), they found that Part D increased sales revenues by 14%.

Blume-Kohout and Sood compared the change over time in pre-clinical testing and early-stage clinical trials for therapeutic classes (TCs) that tend to be used by seniors (such as drugs for Alzheimer’s disease) with TCs that are used by younger individuals (such as contraceptives). In particular, the authors regressed TC-specific R&D on the fraction of consumers of a particular TC who are eligible for Medicare Part D (their definition of market size) and several control variables (including demographic changes and changes in public biomedical research funding). The authors estimate that a 1% increase in TC market size yields a 2.8% increase in clinical trials.

Finkelstein (2004) examined drug company R&D activity following the introduction of US government policies that increased vaccine unit sales. Finkelstein focused on the Centers for Disease Control and Prevention recommendation (in 1991) that all infants be vaccinated against hepatitis B, the extension (in 1993) of Medicare coverage to influenza vaccination, and the introduction (in 1986) of an insurance fund that reduced the liability exposure of manufacturers of selected childhood vaccines. (These childhood vaccines targeted polio, diphtheria, tetanus, measles, mumps, rubella, and pertussis.) The first two of these three policy changes expanded the size of the hepatitis B and influenza vaccine markets, and the third policy reduced the expected costs of selling the selected childhood vaccines. Finkelstein’s model focused on changes over time in both pre-clinical and clinical trials for vaccines targeted by these policies; she compared these against trials for other vaccines that were unaffected. The policies had an immediate and sustained effect on clinical trials for the targeted vaccines but no apparent effect on pre-clinical R&D. This suggests that the manufacturers had already conducted pre-clinical R&D on these vaccine candidates, and the policies expedited their clinical testing.

Acemoglu and Linn (2004) (hereafter AL) also studied the relationship between market size and drug launches in the United States. Data were collected from 1970 to 2000 for 33 different therapeutic categories. Their study relied on demographic changes to identify the relationship; different drugs have distinct age-use profiles, so changes in the sizes of age groups over time will induce variation in drug market size. Using actual market size may be problematic, however, even when its variation is driven by demographic change. First, drug companies may develop new drugs in anticipation of demographic change, and the availability of new drugs may cause additional individuals to consult with physicians and become diagnosed. Second, manufacturers of these new drugs may also compete on price, resulting in additional sales. In essence, market size is itself affected by R&D.

The studies by Blume-Kohout and Sood and Finkelstein were unaffected by this potential reverse causality because they exploited the exogenous variation in market size caused by an expansion of public coverage. AL elected to deal with this by using a measure of “potential” market size in lieu of actual market size. Their market size variable reflects what total spending on a TC would have been in different years had age–group–specific TC budget shares remained constant (at their 1997 values) over time. These budget shares reflect the total spent on the TC (by both patients in a given age group and their insurers) divided by age–group–specific income. This was then multiplied by total income of all US citizens in the age group in a given year (which reflects the number of consumers and their income) to get total spending for that age group in the year. This approach addresses the first problem insofar as the drug budget shares are time-invariant, so that the market size measure cannot depend on new drug introductions. (Moreover, AL present theoretical and empirical evidence that drug companies do not time market launches to coincide with the demographic changes.) The use of the potential market size variable also deals with the second problem identified above insofar as this variable is unaffected by price dynamics.
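
A minimal sketch of AL’s potential market size construction appears below (all numbers are invented purely for illustration; AL use finer age groups, many therapeutic categories, and 1997 base-year budget shares):

```python
# Potential market size for one therapeutic class, in the spirit of
# Acemoglu and Linn (2004): freeze each age group's drug budget share at its
# base-year (1997) value, then scale by the group's total income in each year.
# Because the shares are time-invariant, the measure cannot respond to later
# drug launches or price competition.

# 1997 budget share: class spending (patients + insurers) / group income (assumed)
base_share = {"30-44": 0.002, "45-64": 0.006, "65+": 0.015}

# Total income of all US residents in each age group, by year (assumed, in $)
income = {
    1997: {"30-44": 2.9e12, "45-64": 3.1e12, "65+": 1.0e12},
    2000: {"30-44": 3.0e12, "45-64": 3.6e12, "65+": 1.2e12},
}

def potential_market_size(year: int) -> float:
    """Sum over age groups of (fixed 1997 share) x (group income in `year`)."""
    return sum(base_share[g] * income[year][g] for g in base_share)

for yr in sorted(income):
    print(yr, f"${potential_market_size(yr):,.0f}")
# Growth in this measure is driven purely by demographics and income,
# by construction.
```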

AL estimate that a 1% increase in potential market size is associated with a 3.5% increase in the number of new molecular entities (NMEs; drugs with new active ingredients) introduced into the US market. This estimate is surprisingly large. By way of contrast, Blume-Kohout and Sood estimated a market size–clinical trial elasticity of 2.8; the market size–NME elasticity must be much smaller given the high failure rate in clinical trials. Lichtenberg (2005a) finds, among other results, that a 1% increase in the number of people with cancer leads to a 0.58% increase in chemotherapy regimens. However, AL note that their large estimate is an artifact of using the constructed rather than the actual market size. Actual market size increases by 4% for every 1% increase in potential market size. Dividing their estimate by 4 yields an elasticity of about 0.88, more in line with other estimates.

Dubois, de Mouzon, Scott-Morton, and Seabright (2015) also relied on demographic changes to estimate the impact of market size on NMEs. However, unlike AL, they use country-level data on actual TC-specific market size and an instrumental variables estimator, given the possibility of reverse causality in regressions of R&D on market size. They instrument for country market size using the worldwide number of deaths from diseases in the relevant therapeutic class, as well as country gross domestic product. The authors’ preferred elasticity estimate is 0.25. One reason this estimate is markedly smaller than the adjusted AL estimate is that Dubois et al. consider the impact of market size in countries other than the United States. Many of these countries use drug price controls (Sood et al., 2009), so the revenue increases from market size expansions are smaller than in the United States.
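
Schematically, their instrumental variables strategy amounts to a two-stage least squares setup of roughly this form (our notation, not the authors' exact specification):

```latex
% First stage: instrument market size M_ct (class c, country, year t) with
% worldwide deaths from class-c diseases and country GDP
\[
  \ln M_{ct} \;=\; \pi_1 \ln \mathit{Deaths}^{\mathit{world}}_{ct}
             + \pi_2 \ln \mathit{GDP}_{t} + \gamma' X_{ct} + v_{ct}
\]
% Second stage: innovation (NMEs) on instrumented market size
\[
  \ln \mathit{NME}_{ct} \;=\; \eta\,\widehat{\ln M_{ct}}
                        + \delta' X_{ct} + \varepsilon_{ct},
  \qquad \hat{\eta} \approx 0.25
\]
```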

The antibiotic market provides further evidence on the impact of market size on drug R&D. Antibiotic stewardship initiatives, intended to control resistance, have reduced unit sales and hence sales revenues. Although we found no academic studies that examine the impact of these stewardship initiatives on antibiotics R&D, pharmaceutical industry executives indicate that developing new antibiotics is simply not profitable if their use, and hence sales revenues, continues to be restricted (Power, 2006). Yet there is an urgent need to develop new antibiotics in light of the looming crisis of antibiotic-resistant diseases. Outterson, Powers, Daniel, and McClellan (2015) propose some non-standard revenue models that might encourage additional R&D into new antibiotics. One option they consider is to pay drug developers a fixed sum for the right to use an antibiotic in a particular population; by delinking revenues from unit sales, such payments reward development while preserving stewardship, which may eventually reduce morbidity and mortality from antibiotic-resistant infections.

Policies That Affect Both Prices and Unit Sales Volumes

Government policies that reward a drug company with a period of market exclusivity for a new drug simultaneously increase the drug’s unit sales (in that generic competitors accrue no sales) and allow higher prices to be charged than otherwise. One such policy was the Orphan Drug Act (ODA) enacted in the United States in 1983. The ODA provides 7-year exclusive marketing rights to companies that develop drugs to treat rare diseases, defined as diseases that affect fewer than 200,000 people in the United States. Notably, the market exclusivity right applies to both the drug and the therapeutic indication. In addition, the ODA provides a 50% tax credit for expenditures incurred in the R&D of a rare-disease drug, regulatory fee waivers, priority regulatory review, and other incentives (U.S. Congress, Office of Technology Assessment, 1993; Yin, 2008). Yin (2008) observed a marked increase in new clinical trials for rare-disease drugs in the three years immediately after the ODA passed, consistent with the policy expediting the clinical testing of existing drug candidates, and a smaller but sustained increase thereafter, consistent with a lasting stimulus to both basic research and clinical testing. In a subsequent paper, Yin (2009) found that in response to the ODA, manufacturers also conducted clinical trials to determine whether existing drugs could help individuals suffering from rare disorders. This repurposed drug R&D accounted for about half of the total R&D induced by the ODA (Lichtenberg & Waldfogel, 2003; Yin, 2009). Lichtenberg and Waldfogel (2003) produced evidence on the clinical benefit of the policy; they estimate that the ODA resulted in a decline in mortality among individuals with rare diseases.

Olson and Yin (2018) investigated the effects on R&D of the US Food and Drug Administration’s Pediatric Exclusivity Provision. Firms that conducted clinical studies of drugs intended for use by children were rewarded with an additional six months of market exclusivity. The authors found that drug companies responded to the policy by testing more of their drugs on children. However, this testing was conducted mainly on high-revenue drugs and on drugs with fewer remaining years of patent life, and not necessarily on those that were more medically important for children. This result confirms that, ceteris paribus, drug companies will tend to gravitate toward R&D projects where anticipated returns are highest.

A corollary is that the value of IPR to a drug company depends on market demand. Drug companies will likely be reluctant to invest in R&D if they anticipate that the prices earned on new drugs will not cover their risk-adjusted costs. Consider, for instance, Kyle and McGahan’s (2012) study of the impact of an international treaty, the 1994 Trade-Related Aspects of Intellectual Property Rights (TRIPS) agreement, on clinical trial activity. Signatories to TRIPS were required to implement a minimum level of IPR, including 20-year patent terms on both products and processes, within a specified time period. These deadlines, which varied by country, were used to separate the effects of stronger IPR from confounding time-varying effects. In their econometric model, the effects of IPR on R&D activity were allowed to depend on the country’s disease burden by disease type (neglected or other) and income. (Neglected diseases include those that require new treatments and that disproportionately affect developing countries.) The authors find that patent protection is associated with greater R&D investment in diseases that affect predominantly high-income countries. TRIPS did not stimulate much additional R&D into drugs that target neglected diseases.

Qian (2007) focused on the impact of the introduction of patent protection on drug R&D in 26 countries that established pharmaceutical patent laws during 1978–2002. (Many of these countries did so to conform to the TRIPS requirements.) Similar to the results of the Kyle and McGahan study, Qian (2007) found that patent protection increased drug R&D expenditures only among countries “with higher levels of economic development, educational attainment, and economic freedom.” Additionally, Qian found that there appears to be an “optimal level of intellectual property rights regulation above which further enhancement reduces innovative activities.”

Limits to the Effectiveness of IPR in Spurring Drug R&D

Qian (2007) found that after some point, extensions of pharmaceutical IPR in high-income countries inhibit drug R&D. This finding is consistent with the literature on the optimal length and breadth of patents, reviewed by Encaoua, Guellec, and Martinez (2006). The finding is also consistent with the nature of demand for prescription drugs. Most high-income countries operate large-scale public drug plans that use a variety of cost controls (Grootendorst, Hollis, & Edwards, 2014). In some jurisdictions, such as Australia, New Zealand, and the United Kingdom, the public drug plan covers all residents. In other countries, such as Canada, the public drug plans do not extend to the entire population but nonetheless account for a significant share of total drug spending.

Public drug plans can be distinguished by whether they impose hard drug plan budget caps (such as New Zealand; Cumming, Mays, & Daube, 2010) or not. Extensions of IPR in jurisdictions with hard caps will simply reduce the number of patented drugs that can be funded (Hollis, 2016). Countries with “soft” public drug plan budget caps, or no caps at all, tend to use health technology assessments (HTAs). These HTAs provide public plans with information on the value for money offered by new drugs; new drugs offering sufficient value over existing drugs are reimbursed. Information on value for money is summarized in an “incremental cost effectiveness ratio,” or ICER; a drug’s ICER is the ratio of the additional cost of the new drug, relative to the standard of care, to its additional health benefits. Health benefits are often measured using the quality-adjusted life year, or QALY. Public plans will often set a limit on the additional amount that they are willing to pay per QALY. This limit, which constitutes a price ceiling, will in some cases be informed by the opportunity cost of public funds spent on prescription drugs, namely the cost per QALY gained from the use of other publicly funded health services. Thus, the maximum price per QALY for new patented drugs will be determined by the price per QALY generated from public health interventions, primary care, and other publicly funded health services. In tax-financed systems, these price-per-QALY thresholds are typically significantly less than what individual consumers themselves would be willing to pay (Jena & Philipson, 2008).
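
In symbols, with C denoting expected costs, E expected QALYs, and “soc” the standard of care:

```latex
\[
  \mathrm{ICER} \;=\; \frac{C_{\mathrm{new}} - C_{\mathrm{soc}}}
                           {E_{\mathrm{new}} - E_{\mathrm{soc}}}
\]
% Reimbursement rule: list the drug only if ICER <= lambda, the plan's
% willingness to pay per QALY. Given the comparator's cost and the drug's
% incremental QALYs, the highest price that satisfies ICER = lambda is
% effectively the price ceiling discussed above.
```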

HTAs decrease the value of IPR for two other reasons. First, drug companies face a reduction in effective patent life while the HTA evidence assembled by the drug sponsor is being considered. For example, in Canada, the average time (across the provinces) from national marketing approval to public drug plan reimbursement was 449 days (Millson, Thiele, Zhang, Dobson-Belaire, & Skinner, 2016). Second, drug companies need to invest in the capacity to generate and present evidence on ICERs to decision-makers in each country in which they wish to sell their drug.

Government Policy That Affects Drug Company R&D Costs

Regulations Governing Drug Safety and Efficacy

The cost of clinical trials mandated by the FDA represents over one half of the capitalized cost of bringing a new drug to market (DiMasi et al., 2016; DiMasi, Hansen, & Grabowski, 2003; DiMasi, Hansen, Grabowski, & Lasagna, 1991). This cost includes clinical trial expenses and the considerable cost of capital tied up during the clinical testing and FDA review phases. Clearly, then, the quality of the evidence needed to establish new drug safety and efficacy, and the time the FDA takes to review this evidence, affect drug innovation. This has been confirmed in the literature review conducted by Malani and Philipson (2012).

Many of the studies highlighted in their review examined new drug approvals in the United States before and after the US government substantially increased the evidentiary requirements for the safety and efficacy of new drugs in 1962. These new requirements, referred to as the 1962 Kefauver Amendments, increased the size and complexity of clinical trials, and lengthened FDA review times as well. Grabowski, Vernon, and Thomas (1978) examined the average (non-capitalized) cost per approved drug over the period 1960–1974 in both the United States and the United Kingdom (which served as a control). This average cost was calculated as the ratio of a weighted average of lagged values of industry-wide drug R&D expenditures to new drug approvals. They found that the 1962 Amendments doubled the average cost per approved drug. Presumably the Amendments would have increased capitalized costs even more, since they tripled FDA review times: review times averaged only 7 months over the period 1954–1961 (Grabowski et al., 1978) but varied between 20 and 30 months during the 1970s (Olson, 2003).

The net effect of the Amendments was to reduce the flow of new drug approvals, although the estimated magnitude of this decline varies markedly across the different studies. The most rigorous study on the issue, by Thomas (1990), found that the reduction in drug approvals was entirely due to reductions in new drug development at smaller drug companies. Indeed, Thomas finds that smaller firms simply ceased to innovate after 1962. Due to the reduced competition from small firms, sales rose at large drug companies, ushering in the era of “big pharma.”

There is less evidence on the impact of current FDA regulations on R&D costs and new drug development. DiMasi and colleagues (2016) present evidence that the size and complexity of clinical trials have grown over time, as have R&D costs. These authors estimate that the total capitalized cost of bringing a new drug to market during the 1970s, ’80s, ’90s, and 2000s was $179 million, $413 million, $1.044 billion, and $2.558 billion, respectively. However, it is less clear to what extent FDA regulation is responsible for this cost growth. The authors do provide suggestive evidence that FDA regulations have played a role. They note that in 2008, the FDA required that new diabetes drugs be tested on more patients after cardiovascular safety concerns emerged regarding a previously approved drug (Avandia). These more stringent regulatory standards appear to be responsible for the particularly high R&D cost of this class of drugs. They cite a study by Viereck and Boudes (2011) that found that the number of patient-years in trials for new diabetes drugs increased by 400% after the guidelines were issued. In a separate study, DiMasi (2015) found that new diabetes drug development times increased by 2 years after the directive.

Malani and Philipson (2012) provide a framework to evaluate the impact on social welfare of more stringent regulation of new drugs. Regulation imposes various costs. First, as noted, more stringent regulation increases clinical trial costs. Second, as a result, fewer drugs are launched, and those that are launched are delayed, both of which reduce potential health gains. Also, some drugs submitted to the regulator are not approved, typically because of safety concerns. Yet some individuals might be willing to assume the risk to realize the health gains. Thus, more stringent regulation decreases social surplus, both from the increases in the resource cost of R&D and from the value of the health gains forgone from drugs that are delayed, not approved, or never developed.

Against these costs one counts the various benefits of more stringent regulation. First, because more stringent regulation reduces drug R&D, the resources that would otherwise have gone to R&D can be redeployed elsewhere. Second, the screening of new drugs weeds out low-quality ones, thereby lowering the risk of potentially harmful drug side effects. This in turn generates additional consumer surplus. Finally, more stringent regulation increases consumer surplus by providing patients and doctors with more information on drug quality. Malani and Philipson (2012) note that much of the information needed to quantify how drug regulation affects social welfare is dated or otherwise imperfect.

R&D Tax Credits

Since 1981, US firms have been allowed to claim a tax credit for R&D spending. McCutchen (1993) analyzed the impact of this federal tax credit on the R&D expenditures of 20 large US-based drug companies. Using firm-year specific data over the period 1975–1985, he estimated a model of R&D intensity as a function of an indicator variable that equaled one during the post-R&D tax credit period and a variety of control variables. Separate models were estimated for different types of firms, distinguished by their cash flow and R&D intensity in the years prior to 1975. The estimated models implied that, industry-wide, each $1 of tax credit spurred 29.3 cents of additional R&D expenditures over the period 1982–1985. This R&D response appears to be well below the response observed in other industrial sectors. Hall and Van Reenen (2000), in their survey of the literature, conclude that “in the current (imperfect) state of knowledge we conclude that a dollar in tax credit for R&D stimulates a dollar of additional R&D.”

Publicly Funded Basic Research

There is overwhelming evidence that publicly funded basic research has been an important input into the private discovery and development of new drugs (Cockburn & Henderson, 2001). This evidence includes bibliometric analyses, such as Sampat and Lichtenberg’s (2011) analysis of drug patent citations. They found that almost half of NME patents cited either a journal publication that acknowledged research funding from a government agency or a patent acknowledging government interest. The evidence also includes detailed case histories of the development origins of various drugs. For instance, Stevens and colleagues (2011) assessed the lineages of the NMEs approved by the FDA between 1990 and 2007. They estimate that 13.3% of NMEs and 21.1% of priority-review NMEs (i.e., drugs given priority FDA review because of anticipated therapeutic importance) stemmed from publicly funded research that was licensed to industry. These results are consistent with work by Abramovsky, Harrison, and Simpson (2007), who found that drug R&D labs tend to cluster around highly ranked university chemistry departments in the United Kingdom, and Cohen, Nelson, and Walsh (2002), who surveyed US R&D managers on the importance of public research as a source for drug discovery programs.

Another strand of literature has attempted to quantify the impact of publicly funded biomedical research on the R&D productivity of the drug industry. An early paper by Ward and Dranove (1995) examined the effect of US National Institutes of Health (NIH) biomedical research funding on subsequent US pharmaceutical industry R&D spending across five broad therapeutic classes (allergy and infectious diseases; arthritis, musculoskeletal, skin, diabetes, digestive and kidney; cancer; cardiovascular; and neurology) and 19 years (1970–1988). The NIH is the largest public agency in the world that supports basic biomedical research and clinical trials. In 2010, the NIH invested over $20 billion in extramural biomedical research; this research is performed mainly at universities and non-profit institutions (Toole, 2012).

Their regression model of industrial R&D outlays by therapeutic class and year included as covariates one- to seven-year lagged values of both direct and indirect NIH spending. Direct spending refers to therapeutic class–specific NIH spending, while indirect spending refers to NIH spending in all other therapeutic categories. This indirect spending variable was intended to capture intercategory knowledge spillover effects. The model also included controls for the number of deaths related to the therapeutic class (intended to reflect disease severity), real personal healthcare expenditure (intended to reflect disease prevalence), the number of physicians, and predicted FDA review times (intended to capture the expected costs of complying with FDA testing requirements).
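
The structure described corresponds to a distributed-lag regression of roughly this form (our notation, not the authors' exact specification):

```latex
% Industry R&D in class c, year t on 1- to 7-year lags of direct and
% indirect NIH spending, plus controls X (deaths, health spending,
% physicians, predicted FDA review times)
\[
  \ln RD_{ct} \;=\; \sum_{k=1}^{7}\beta_k \ln NIH^{\mathit{dir}}_{c,t-k}
            + \sum_{k=1}^{7}\delta_k \ln NIH^{\mathit{ind}}_{c,t-k}
            + \gamma' X_{ct} + \varepsilon_{ct}
\]
% The cumulative effects reported below are the sums of the lag
% coefficients: sum(beta_k) ~ 0.57-0.76 and sum(delta_k) ~ 1.26-1.71.
```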

The authors report that the cumulative effect of a 1% increase in direct NIH spending is to increase industry R&D expenditures by 0.57–0.76%. The cumulative effect from indirect NIH spending is to increase industry R&D expenditures by 1.26–1.71%. The size difference in estimates was taken as evidence of the importance of intercategory spillover effects. The authors also present evidence that the effect of NIH spending on industry R&D spending operates at least partly through the production of knowledge that is disseminated in academic journal articles.

Cockburn and Henderson further explored the transmission of knowledge derived from publicly funded biomedical research to drug companies. In particular, they examined how co-authorship on academic publications between scientists working in public sector labs and scientists working in a sample of drug company labs affected the number of “important” patents received by the drug companies over the period 1980–1988 (Cockburn & Henderson, 1998). They regressed the number of patents received on the fraction of academic papers published by drug company scientists that were coauthored by academic scientists. The co-authorship variable exerted a strong effect on patenting in a variety of different model specifications. (These specifications varied in their use of company fixed effects and the inclusion of a battery of control variables.) They find that the effect of co-authorship on patenting is quite large; the estimates “imply differences in research productivity of about 30 percent between the most connected and least connected firms in the sample” (Cockburn & Henderson, 1998). Thus, Cockburn and Henderson find evidence that the productivity of drug company scientists is markedly enhanced by their collaborations with academic experts.

More recent studies have provided corroborating evidence on the impact of publicly funded biomedical R&D on drug company R&D. In these studies, drug company R&D is defined using either input-based measures (R&D expenditures or clinical trial activity) or output-based measures (i.e., new drugs). Toole (2007) analyzed how NIH research spending affects subsequent pharmaceutical industry R&D spending. Similar to Ward and Dranove’s (1995) study, data were collected by therapeutic class and year (1981–1997). Toole, however, used seven, not five, classes: endocrine/neoplasm, central nervous system, cardiovascular, anti-infective, gastrointestinal, dermatologic, and respiratory. He then regressed yearly private sector R&D spending on one- to eight-year lagged public basic and clinical research outlays, lagged sales revenues, lagged FDA regulatory delay, time dummies, and controls for drug demand. Unlike in Ward and Dranove’s study, the parameters on the NIH spending lag covariates were restricted to follow a second-degree polynomial. All model variables were expressed as year-over-year changes in logarithms; this eliminated any TC-specific fixed effects.
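
The polynomial restriction is a standard distributed-lag (Almon) device: each lag coefficient is a quadratic function of the lag index, so only three parameters are estimated instead of eight free coefficients (our notation, a sketch of the restriction rather than Toole's full model):

```latex
% Distributed lag of public research with coefficients restricted to a
% second-degree polynomial in the lag index k
\[
  \Delta \ln RD_{ct} \;=\; \sum_{k=1}^{8} \beta_k\,
      \Delta \ln \mathit{Public}_{c,t-k} + \cdots + \varepsilon_{ct},
  \qquad \beta_k \;=\; a_0 + a_1 k + a_2 k^2
\]
% Estimating (a_0, a_1, a_2) rather than eight unrestricted beta_k
% smooths the lag profile and conserves degrees of freedom.
```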

The parameter estimates indicate that a $1.00 increase in public basic research stimulates an additional $8.38 of industry R&D investment after 8 years. The pattern of additional industry R&D investment initially increases, then plateaus, then increases again after year 6. Toole attributes this pattern to companies absorbing the information and determining how it might be useful, then holding spending constant while “scientific and market uncertainties” are resolved, and then acting on this information. This explanation seems incomplete, for the simple reason that most basic research sponsored by the NIH would likely not markedly increase applied industrial R&D by year 6. On the other hand, it could be that a small number of useful discoveries induce a very large increase in industry R&D. Had data on industry outlays on specific types of R&D (such as drug development, formulation, and clinical trials) been available, Toole could have conducted various falsification tests on his model. For instance, one would not expect to see any impact of NIH-funded research on phase III clinical trials during the 8 years.

Toole (2007) finds that, compared to basic research, the industry R&D response to public clinical research is shorter in duration and smaller in magnitude. The parameter estimates indicate that a $1.00 increase in public clinical research stimulates an additional $2.35 of industry R&D investment within 3 years and no response thereafter. These results make intuitive sense; companies are likely able to quickly assess the applicability of clinical trial results to their R&D programs.

In a subsequent paper, Toole re-examined the relationship between publicly funded basic biomedical research and subsequent industry drug R&D (Toole, 2012). The primary innovation in this paper was to adopt an output-based measure of R&D, namely the number of NMEs approved in the United States. Basic research was measured as the lagged stock of NIH research spending. The model included controls for the lagged stock of drug company R&D investment and market size, which was constructed as the potential market size measure used by AL. All observations were made at the level of therapeutic category c (6 broad categories) and year t (1980–1997). The model included therapeutic category and year dummies. The stock variables measured for category c and year t were simply the sum totals of real research expenditures for category c from the 1960s (from 1963 for the public stock and 1968 for the private) to t. Thus, Toole assumed no obsolescence of knowledge capital. His preferred model implied that a 1% increase in the stock of public basic research funding was associated with a 1.8% increase in NME approvals after a lag of 17 years. Using sales data for the average NME, he estimates the total private return to public basic research outlays to be 43%.
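
The stock construction is simply a running sum with no depreciation of knowledge capital (our notation):

```latex
% Public and private research stocks for therapeutic category c in year t
\[
  G_{ct} \;=\; \sum_{\tau=1963}^{t} \mathit{NIH}_{c\tau},
  \qquad
  K_{ct} \;=\; \sum_{\tau=1968}^{t} \mathit{RD}_{c\tau}
\]
% A perpetual-inventory alternative would be K_ct = (1 - d) K_{c,t-1} + RD_ct
% with depreciation rate d; Toole (2012) in effect sets d = 0.
```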

More recently, Blume-Kohout examined how publicly funded biomedical research affects subsequent clinical trials of drugs used to treat particular diseases (Blume-Kohout, 2012). Earlier studies examined R&D spending for a handful of broad therapeutic categories, such as cardiovascular disease or respiratory disease. Her data consisted of NIH extramural research grant funding for 67 diseases from 1975 through 2006. She found that a sustained 10% increase in disease-specific NIH funding yields approximately a 4.5% increase in the number of related drugs entering clinical testing (phase I trials) after a lag of up to 12 years. She did not find any effect of NIH funding, however, on the more expensive, late-stage (phase III) trials. This could be due to the high rate of attrition of drug candidates; according to DiMasi and colleagues (2016), only 21% of drugs tested in humans between 1995 and 2007 advanced to phase III trials. An even smaller percentage, 12%, received FDA approval.

Open Public–Private Drug Development Partnerships

The work by Cockburn and Henderson, reviewed earlier, highlighted how collaborations between industrial and academic scientists on basic research enhanced the productivity of drug company labs. Several initiatives are expanding on this idea, creating larger-scale collaborations between groups of academic and industrial scientists. Unlike the collaborations that Cockburn and Henderson focused on, however, the findings produced by these newer collaborations are placed in the public domain, unencumbered by IP restrictions. The findings on human pathophysiology and pharmacology derived from these collaborations are, in turn, inputs that all drug companies can freely use in their own drug development programs. Funding comes from both government and industry.

The open-source aspect of these collaborations is intended to improve overall R&D productivity. Keeping results secret until they can be protected by IP is, in the short run, the dominant strategy, and industry and universities generally follow it. However, if all actors play this strategy, drug discovery is impeded, for several reasons. First, this approach generates much duplicative research, which is inefficient if the research idea ends up a dead end. Second, research secrecy impedes the independent verification of findings, and unverified results are sometimes incorrect (Owens, 2016). Third, asserting IP rights over basic biomedical discoveries via litigation is expensive and can slow research while competing ownership claims are adjudicated by the courts. For example, Bubela, Vishnubhakat, and Cook-Deegan (2015) provide evidence that litigation over a gene mutation patent held up research into drugs for Alzheimer’s disease. Fourth, even when there is no disagreement over the ownership of the IP protecting these inputs, the IP can inhibit subsequent R&D because of the high prices demanded by rights-holders or the transaction costs of negotiating licensing agreements.

Williams (2013) examined this issue in the context of IP protections awarded to the human genes sequenced by the private firm Celera. This IP enabled Celera to sell its data to commercial and academic users and required firms to negotiate licensing agreements with Celera for any resulting commercial discoveries (such as genetic tests). However, Celera’s IP rights lasted only two years; the genomic data moved into the public domain once the genes were resequenced by a public sector initiative. Williams tracked the R&D (scientific research and product development) during 2001–2009 associated with genes held by Celera and genes in the public domain. She derived estimates in two ways: a (covariate-adjusted) cross-sectional comparison of subsequent R&D levels for Celera genes and public domain genes, and a difference-in-differences comparison of R&D levels for Celera genes before and after the removal of IPR against the corresponding change for public domain genes. She found that Celera’s gene-level IPR led to reductions in subsequent scientific research and product development on the order of 20 to 30%.
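
In regression form, the difference-in-differences comparison can be sketched as follows (our notation; Williams's actual specifications adjust for covariates and use several R&D outcome measures):

```latex
% Y_gt = R&D activity on gene g in year t; Celera_g = 1 if gene g was
% initially sequenced and held by Celera; Post_t = 1 after the gene's IP
% lapses upon public resequencing
\[
  Y_{gt} \;=\; \alpha_g + \lambda_t
         + \theta\,(\mathit{Celera}_g \times \mathit{Post}_t)
         + \varepsilon_{gt}
\]
% A positive theta indicates that R&D on formerly Celera-held genes catches
% up once the IP is removed; Williams's estimates imply the IP had
% depressed R&D by roughly 20-30%.
```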

What activities would these open consortia engage in? Among other things, the research partnerships would identify which cellular proteins are suitable targets for new drugs. As Edwards and colleagues (2011) note, existing drug development programs tend to focus on a small number of targets. Many more, including proteins that have been directly associated with disease, are largely ignored, even by academic scientists. As evidence, they note that 11 protein kinases are key nodes in the signaling pathways associated with breast cancer. Yet a survey of academic papers published in 2009 found that one of these kinases, CDC2, received more attention than seven of the others combined.

Assessing which proteins are suitable targets requires various research tools. Edwards and colleagues (2011) describe some of these:

For instance, antibodies can help them identify where in the body the protein is being expressed; chemical inhibitors can be used to block a protein’s activity in human cells and in animal models. These antibodies and small molecules also provide a launch pad for the development of new medicines by the biotechnology and pharmaceutical industries.

But, as the authors note, these research tools are not available for many interesting proteins because of the high cost of developing them.

Making such tools readily available for all proteins could increase the productivity of drug company labs. According to Edwards and colleagues (2011), this basic research is best done in the context of an open collaboration between industrial and academic scientists, with funding from both government and industry. As evidence, they point to a collaboration between their academic group, the Structural Genomics Consortium, and medicinal chemistry experts from the drug company GSK to better characterize the bromodomain protein family. In 12 months, the collaboration produced a chemical inhibitor, JQ1, that shows therapeutic potential in the treatment of NUT midline carcinoma, an incurable rare cancer. Additional inhibitors developed with GSK and Pfizer have linked members of the bromodomain protein family to leukemia, multiple myeloma, HIV infection, and several other diseases (Delmore et al., 2011).

Discussion

This article highlights the significant impact that some public policies have on the R&D undertaken by the pharmaceutical industry. Identifying which policies can better promote this R&D is important given the declining profitability of industrial drug R&D. In particular, Berndt and colleagues (Berndt, Nass, Kleinrock, & Aitken, 2015) examined the distribution of global sales revenues of drugs launched in the periods 1991–1994, 1995–1999, 2000–2004, and 2005–2009 and compared these revenues to R&D costs. They find that net returns have been declining; the average new drug now earns just enough to cover costs. They also found that a declining fraction of drugs produce sufficient sales revenues to cover development costs. Companies are therefore reliant on a shrinking number of drugs whose sales revenues cover the costs of their entire drug R&D program. At the same time, DiMasi and colleagues have found that a declining fraction of drug candidates developed in-house by drug companies receive FDA approval. Indeed, the clinical-stage success rate of drugs first tested in humans over the period 1995–2007 is only 12% (DiMasi et al., 2016); this success rate is half the rate for drugs first tested in humans over the period 1970–1982 (DiMasi et al., 1991). Both of these trends are concerning.

This article points to government policies that might reverse these trends. First, there is very clear evidence that publicly funded basic science is an important input into private drug discovery. Investments in the NIH and similar institutions in other countries should lower the social cost of drug R&D. There are also a growing number of public–private partnerships that disseminate into the public domain insights into the cellular pathways through which new drugs can ameliorate disease. The emerging evidence is that public support for these initiatives will improve the design of new drugs and markedly increase their chance of demonstrating safety and efficacy in clinical trials. Expansions of drug coverage to underinsured populations also have an important effect on drug company sales revenues and R&D incentives.

This review also indicates which policy levers are less effective. There is limited information on the effectiveness of R&D tax credits. The evidence that does exist, however, suggests that tax credits are not particularly effective in stimulating drug R&D. Extensions of intellectual property rights beyond current levels appear to have a small, possibly negative payoff. One reason for this is that the ability of drug companies to charge high prices appears to be mitigated by the monopsony power of large public drug plans and, in the United States, the buying power of pharmacy benefit managers.

References

Abramovsky, L., Harrison, R., & Simpson, H. (2007). University research and the location of business R&D. Economic Journal, 117(519), C114–C141.

Acemoglu, D., & Linn, J. (2004). Market size in innovation: Theory and evidence from the pharmaceutical industry. Quarterly Journal of Economics, 119(3), 1049–1090.

Berenson, A. (2007). Merck agrees to settle Vioxx suits for $4.85 billion. New York Times.

Berndt, E. R., Nass, D., Kleinrock, M., & Aitken, M. (2015). Decline in economic returns from new drugs raises questions about sustaining innovations. Health Affairs, 34(2), 245–252.

Blume-Kohout, M. E. (2012). Does targeted, disease-specific public research funding influence pharmaceutical innovation? Journal of Policy Analysis & Management, 31(3), 641–660.

Blume-Kohout, M. E., & Sood, N. (2013). Market size and innovation: Effects of Medicare Part D on pharmaceutical research and development. Journal of Public Economics, 97, 327–336.

Bubela, T., Vishnubhakat, S., & Cook-Deegan, R. (2015). The mouse that trolled: The long and tortuous history of a gene mutation patent that became an expensive impediment to Alzheimer’s research. Journal of Law and the Biosciences, 2(2), 213–262.

Cockburn, I., & Henderson, R. (2001). Publicly funded science and the productivity of the pharmaceutical industry. In A. Jaffe, J. Lerner, & S. Stern (Eds.), Innovation policy and the economy (Vol. 1). London, U.K.: MIT Press.

Cockburn, I. M., & Henderson, R. M. (1998). Absorptive capacity, coauthoring behavior, and the organization of research in drug discovery. Journal of Industrial Economics, 46(2), 157–182.

Cohen, W. M., Nelson, R. R., & Walsh, J. P. (2002). Links and impacts: The influence of public research on industrial R&D. Management Science, 48(1), 1–23.

Cumming, J., Mays, N., & Daube, J. (2010). How New Zealand has contained expenditure on drugs. BMJ, 340, c2441.

Cutler, D. (2005). Your money or your life: Strong medicine for America’s health care system. New York, NY: Oxford University Press.

Damodaran, A. (2017). Betas by sector.

Delmore, J. E., Issa, G. C., Lemieux, M. E., Rahl, P. B., Shi, J., Jacobs, H. M., et al. (2011). BET bromodomain inhibition as a therapeutic strategy to target c-Myc. Cell, 146(6), 904–917.

DiMasi, J. A. (2015). Regulation and economics of drug development.

DiMasi, J. A., Grabowski, H. G., & Hansen, R. W. (2016). Innovation in the pharmaceutical industry: New estimates of R&D costs. Journal of Health Economics, 47, 20–33.

DiMasi, J. A., Hansen, R. W., & Grabowski, H. G. (2003). The price of innovation: New estimates of drug development costs. Journal of Health Economics, 22(2), 151–185.

DiMasi, J. A., Hansen, R. W., Grabowski, H. G., & Lasagna, L. (1991). Cost of innovation in the pharmaceutical industry. Journal of Health Economics, 10(2), 107–142.

Dubois, P., de Mouzon, O., Scott-Morton, F., & Seabright, P. (2015). Market size and pharmaceutical innovation. RAND Journal of Economics, 46(4), 844–871.

Duggan, M., & Scott Morton, F. (2010). The effect of Medicare Part D on pharmaceutical prices and utilization. American Economic Review, 100(1), 590–607.

Edwards, A., Isserlin, R., Bader, G., Frye, S., Willson, T., & Yu, F. (2011). Too many roads not taken. Nature, 470, 163–165.

Encaoua, D., Guellec, D., & Martinez, C. (2006). Patent systems for encouraging innovation: Lessons from economic analysis. Research Policy, 35(9), 1423–1440.

Finkelstein, A. (2004). Static and dynamic effects of health policy: Evidence from the vaccine industry. Quarterly Journal of Economics, 119(2), 527–564.

Giaccotto, C., Santerre, R., & Vernon, J. (2005). Drug prices and research and development investment behavior in the pharmaceutical industry. Journal of Law & Economics, 48(1), 195–214.

Golec, J., Hegde, S., & Vernon, J. A. (2010). Pharmaceutical R&D spending and threats of price regulation. Journal of Financial and Quantitative Analysis, 45(1), 239–264.

Grabowski, H., & Vernon, J. (2000). The determinants of pharmaceutical research and development expenditures. Journal of Evolutionary Economics, 10(1), 201–215.

Grabowski, H. G., Vernon, J. M., & Thomas, L. G. (1978). Estimating the effects of regulation on innovation: An international comparative analysis of the pharmaceutical industry. Journal of Law and Economics, 21(1), 133–163.

Grootendorst, P., Hollis, A., & Edwards, A. (2014). Patents and other incentives for pharmaceutical innovation. In A. Culyer (Ed.), Encyclopedia of health economics. London, U.K.: Elsevier.

Hall, B., & Van Reenen, J. (2000). How effective are fiscal incentives for R&D? A review of the evidence. Research Policy, 29(4), 449–469.

Hollis, A. (2016). Sustainable financing of innovative therapies: A review of approaches. Pharmacoeconomics, 34(10), 971–980.

Jena, A., & Philipson, T. (2008). Cost-effectiveness analysis and innovation. Journal of Health Economics, 27(5), 1224–1236.

Kyle, M., & McGahan, A. (2012). Investments in pharmaceuticals before and after TRIPS. Review of Economics and Statistics, 94(4), 1157–1172.

Lee, M., & Choi, M. (2015). The determinants of research and development investment in the pharmaceutical industry: Focus on financial structures. Osong Public Health & Research Perspectives, 6(5), 302–309.

Lichtenberg, F. R. (2005a). The impact of new drug launches on longevity: Evidence from longitudinal, disease-level data from 52 countries, 1982–2001. International Journal of Health Care Finance & Economics, 5(1), 47–73.

Lichtenberg, F. R. (2005b). Pharmaceutical innovation and the burden of disease in developing and developed countries. Journal of Medicine & Philosophy, 30(6), 663–690.

Lichtenberg, F. R., & Waldfogel, J. (2003). Does misery love company? Evidence from pharmaceutical markets before and after the Orphan Drug Act. Michigan Telecommunications and Technology Law Review, 15(2), 335–357.

Malani, A., & Philipson, T. (2012). The regulation of medical products. In P. Danzon & S. Nicholson (Eds.), The Oxford handbook of the economics of the biopharmaceutical industry (pp. 100–142). Oxford, U.K.: Oxford University Press.

Malmberg, C. (2008). R&D and financial systems: The determinants of R&D expenditures in the Swedish pharmaceutical industry. Paper no. 2008/01. Centre for Innovation, Research and Competence in the Learning Economy (CIRCLE), Lund University.

McCutchen, W. W. (1993). Estimating the impact of the R&D tax credit on strategic groups in the pharmaceutical industry. Research Policy, 22(4), 337–351.

Millson, B., Thiele, S., Zhang, Y., Dobson-Belaire, W., & Skinner, B. (2016). Access to new medicines in public drug plans: Canada and comparable countries.

Moos, W. H., & Mirsalis, J. C. (2009). Nonprofit organizations and pharmaceutical research and development. Drug Development Research, 70(7), 461–471.

NewsFeature. (2011). Learning lessons from Pfizer’s $800 million failure. Nature Reviews Drug Discovery, 10(3), 163–164.

Olson, M. K. (2003). Explaining reductions in FDA drug review times: PDUFA matters. Health Affairs (July–December, Suppl.), W4-S1–W4-S2.

Olson, M., & Yin, N. (2018). Examining firm responses to R&D policy: An analysis of pediatric exclusivity. American Journal of Health Economics, 4(3), 321–357.

Outterson, K., Powers, J. H., Daniel, G. W., & McClellan, M. B. (2015). Repairing the broken market for antibiotic innovation. Health Affairs, 34(2), 277–285.

Owens, B. (2016). Data sharing: Access all areas. Nature, 533(7602), S71–S72.

Power, E. (2006). Impact of antibiotic restrictions: The pharmaceutical perspective. Clinical Microbiology and Infection, 12(Suppl. 5), 25–34.

Qian, Y. (2007). Do national patent laws stimulate domestic innovation in a global patenting environment? A cross-country analysis of pharmaceutical patent protection, 1978–2002. Review of Economics and Statistics, 89(3), 436–453.

Sampat, B. N., & Lichtenberg, F. R. (2011). What are the respective roles of the public and private sectors in pharmaceutical innovation? Health Affairs, 30(2), 332–339.

Scherer, F. M. (2001). The link between gross profitability and pharmaceutical R&D spending. Health Affairs, 20(5), 216–220.

Sood, N., de Vries, H., Gutierrez, I., Lakdawalla, D. N., & Goldman, D. P. (2009). The effect of regulation on pharmaceutical revenues: Experience in nineteen countries. Health Affairs, 28(1), w125–w137.

Stevens, A. J., Jensen, J. J., Wyller, K., Kilgore, P. C., Chatterjee, S., & Rohrbaugh, M. L. (2011). The role of public-sector research in the discovery of drugs and vaccines. New England Journal of Medicine, 364(6), 535–541.

Thomas, L. G. (1990). Regulation and firm size: FDA impacts on innovation. RAND Journal of Economics, 21(4), 497–517.

Toole, A. (2007). Does public scientific research complement private investment in research and development in the pharmaceutical industry? Journal of Law and Economics, 50(1), 81–104.

Toole, A. (2012). The impact of public basic research on industrial innovation: Evidence from the pharmaceutical industry. Research Policy, 41(1), 1–12.

U.S. Congress, Office of Technology Assessment. (1993). Pharmaceutical R&D: Costs, risks, and rewards. OTA-H-522. Washington, DC: U.S. Government Printing Office.

Vernon, J. A. (2005). Examining the link between price regulation and pharmaceutical R&D investment. Health Economics, 14(1), 1–16.

Viereck, C., & Boudes, P. (2011). An analysis of the impact of FDA’s guidelines for addressing cardiovascular risk of drugs for type 2 diabetes on clinical development. Contemporary Clinical Trials, 32(3), 324–332.

Ward, M. R., & Dranove, D. (1995). The vertical chain of research and development in the pharmaceutical industry. Economic Inquiry, 33, 70–87.

Williams, H. L. (2013). Intellectual property rights and innovation: Evidence from the human genome. Journal of Political Economy, 121(1), 1–27.

Wizemann, T., Robinson, S., & Giffin, R. (2009). Breakthrough business models: Drug development for rare and neglected diseases and individualized therapies: Workshop summary. Washington, DC: National Academies Press.

Wong, C. H., Siah, K. W., & Lo, A. W. (2019). Estimation of clinical trial success rates and related parameters. Biostatistics, 20(2), 273–286.

Yin, W. (2008). Market incentives and pharmaceutical innovation. Journal of Health Economics, 27(4), 1060–1077.

Yin, W. (2009). R&D policy, agency costs and innovation in personalized medicine. Journal of Health Economics, 28(5), 950–962.