Thursday, June 11, 2009

Be careful what you wish for.....

The End of Medical Miracles?
Tevi Troy
June 2009

Americans have, at best, a love-hate relationship with the life-sciences industry—the term for the sector of the economy that produces pharmaceuticals, biologics (like vaccines), and medical devices. These days, the mere mention of a pharmaceutical manufacturer seems to elicit gut-level hostility. Journalists, operating from a bias against industry that goes as far back as the work of Upton Sinclair in the early years of the 20th century, treat companies from AstraZeneca to Wyeth as rapacious factories billowing forth nothing but profit. At the same time, Americans are adamant about the need for access to the newest cures and therapies and expect new cures and therapies to emerge for their every ailment—all of which result primarily from work done by these very same companies, whose profits fund the research that makes such breakthroughs possible.

Liberals and conservatives appear to agree on the need to unleash the possibilities in medical discovery for the benefit of all. But it cannot be ordered up at will. It takes approximately ten years and $1 billion to get a new product approved for use in the United States. Furthermore, only one in every 10,000 newly discovered molecules will lead to a medication approved by the Food and Drug Administration (FDA). Only three out of every ten new medications earn back their research-and-development costs. The approval success rates are low, and may even be getting lower—30.2 percent for biotech drugs and 21.5 percent for small-molecule pharmaceuticals.

It is the very nature of scientific discovery that makes this process so cumbersome. New developments do not appear as straight-line extrapolations. A dollar in research does not lead inexorably to a return of $1.50. Researchers will spend years in a specific area to no avail, while other areas will benefit from a happy concatenation of discoveries in a short period. It is impossible to tell which area will be fruitless; so many factors figure into the equation, including dumb luck. Alexander Fleming did not mean to leave his lab in such disarray that a stray mold would contaminate one of his bacterial cultures and kill the surrounding colonies, yet that is how penicillin was discovered. Conversely, if effort and resources were all it took, then we would have an HIV/AIDS vaccine by now; as it stands, the solution to that problem continues to elude the grasp of some of the most talented and heavily funded researchers.

Scientific discoveries are neither inevitable nor predictable. What is more, they are affected, especially in our time, by forces outside the laboratory—in particular, the actions of politicians and government bureaucracies. The past quarter-century has offered several meaningful object lessons in this regard. For example, in the 1980s, the Reagan administration undertook a number of actions, both general and specific, that had a positive effect on the pace of discovery. On the general front, low taxes and a preference for free trade helped generate a positive economic climate for private investment, including in the rapidly growing health-care sector. More specifically, the Reagan administration adopted new technology-transfer policies to promote joint ventures, championed and signed the Orphan Drug Act to encourage work on products with relatively small markets, and allowed the accelerated use of certain data from clinical trials in order to hasten the approval of new products. All of these initiatives helped foster discovery.

That which the government gives, it can also take away. As the 1990s began, a set of ideas began to gain traction about health care and its affordability (it seems hard to believe, but the first election in which health care was a major issue was a Pennsylvania Senate race only eighteen years ago, in 1991). Americans began to fear that their health-care benefits were at risk; policymakers and intellectuals on both sides of the ideological divide began to fear that the health-care system was either too expensive or not comprehensive enough; and the conduct of private businesses in a field that now ate up nearly 14 percent of the nation’s gross domestic product came under intense public scrutiny.

A leading critic of Big Pharma, Greg Critser, wrote in his 2007 Generation Rx that President Clinton picked up on a public discomfort with drug prices and “began hinting at price controls” during his first term in office. These hints had a real impact. As former FDA official Scott Gottlieb has written, “Shortly after President Bill Clinton unveiled his proposal for nationalizing the health-insurance market in the 1990s (with similar limits on access to medical care as in the [current] Obama plan), biotech venture capital fell by more than a third in a single year, and the value of biotech stocks fell 40 percent. It took three years for the ‘Biocentury’ stock index to recover. Not surprisingly, many companies went out of business.”

The conduct of the businesses that had been responsible for almost every medical innovation from which Americans and the world had benefited for decades became intensely controversial in the 1990s. An odd inversion came into play. Precisely because the work these companies did was life-saving or life-enhancing, a certain liberal mindset refused to treat it as a good of special value that was worth its expense. Rather, medical treatment came to be considered a human right to which universal access was required without regard to cost. Because people needed these goods so much, it was unscrupulous or greedy to involve the profit principle in them. What mattered most was equity. Consumers of health care should not have to be subject to market forces.

And not only that. Since pharmaceuticals and biologics are powerful things that can do great harm if they are misused or misapplied, the companies that made them found themselves under assault for injuries they might have caused. It was little considered that the drugs had been approved for use by a federal agency that imposed the world's most rigorous standards, and was often criticized for holding up promising treatments (especially for AIDS). Juries became convinced that companies had behaved with reckless disregard for the health of consumers, and hit them with enormous punitive-damage awards.

The late 1990s also brought an unpredictable slowdown in the pace of medical discovery, following a fertile period in which new antihistamines, antidepressants, and gastric-acid reducers all came to market and improved the quality of life of millions in inestimable ways. A lull in innovation then set in, and that in turn gave opponents of the pharmaceutical industry a new target of opportunity. An oft-cited 1999 study by the National Institute for Health Care Management (NIHCM) claimed that the newest and costliest products were only offering "modest improvements on earlier therapies at considerably greater expense."

The NIHCM study opened fresh lines of attack. The first came from the managed-care industry, which used it as a means of arguing that drugs had simply grown too expensive. Managed care is extremely price-sensitive, and its business model is built on cutting costs; executives of the industry were well represented on the board of the institute that put out the report. They were, in effect, fighting with the pharmaceutical companies over who should get more of the consumer's health-care dollars.

The second came in response to the FDA's 1997 decision to ease its rules on direct-to-consumer advertising of pharmaceuticals. The marketing explosion that followed gave people the sense that these companies were engaged not in life-saving work but in the sale of relative trivialities, like Viagra and Rogaine, and that the advertising dollars they burned would be better spent on lowering the cost of drugs. And the third element of this mix was the rise of the Internet, which gave Americans a level of price transparency that they had not had before regarding cost differentials between drugs sold in the U.S. versus Canada and other Western countries.

These three factors precipitated a full-bore campaign by public-interest groups that bore remarkable fruit over the next several years. By February 2004, Time magazine was publishing a cover story on pharmaceutical pricing, noting that "the clamor for cheap Canadian imports is becoming a big issue." Marcia Angell, a fierce critic of the pharmaceutical industry and the FDA, wrote in the New York Review of Books in 2004 that "In the past two years, we have started to see, for the first time, the beginnings of public resistance to rapacious pricing and other dubious practices of the pharmaceutical industry."

Harvard’s Robert Blendon released a Kaiser Family Foundation poll in 2005 in which 70 percent of Americans said that “drug companies put profits ahead of people” and 59 percent said that “prescription drugs increase overall medical costs because they are so expensive.” Overall, noted the foundation’s president, Drew Altman, “Rightly or wrongly, drug companies are now the number one villain in the public’s eye when it comes to rising health-care costs.”

About the Author
Tevi Troy, deputy secretary of the United States Department of Health and Human Services from 2007 to 2009, is a visiting senior fellow at the Hudson Institute.
