During the past 70 years, advances in many scientific and technological areas should have dramatically raised the productivity of biopharma R&D. For example, since the first bacteriophage genome was determined in the 1970s, DNA sequencing has become over a billion times faster, improving the industry's ability to identify new drug targets.
However, since 1950, the number of drugs approved per $1 billion spent on their development has roughly halved every nine years – an 80-fold decrease once inflation is taken into account. This steady decline has been dubbed 'Eroom's Law' – Moore's Law in reverse.
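The halving trend can be sketched as simple exponential decay. The base value and the clean 9-year halving time below are illustrative assumptions, not fitted data; a strict 9-year halving over 1950–2010 implies roughly a 100-fold decline, close to the ~80-fold figure quoted for the noisier real data.

```python
# Illustrative sketch of Eroom's Law as exponential decay.
# base_value and the exact 9-year halving time are assumptions for
# illustration, not fitted parameters.

def drugs_per_billion(year, base_year=1950, base_value=100.0, halving_years=9):
    """Inflation-adjusted drugs approved per $1 billion of R&D spend,
    relative to a hypothetical base_value at base_year."""
    return base_value * 0.5 ** ((year - base_year) / halving_years)

# Decline factor over the 1950-2010 period described in the text.
decline_factor = drugs_per_billion(1950) / drugs_per_billion(2010)
print(round(decline_factor))  # ~102 under these clean assumptions
```

The gap between the ~100-fold value here and the cited 80-fold decrease simply reflects that the historical trend line is not a perfectly smooth exponential.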
Starting around 2010, Eroom's Law seemingly began to 'break' and the trend line changed. This turnaround was thought to be largely due to better knowledge from advanced technologies, such as the advent of genome-wide association studies (GWAS) and the invention of CRISPR gene editing. It was the first time since the 1950s that the cost of bringing a new drug to market showed signs of stabilizing.
Eroom's Law: the number of new drugs entering the market per $1 billion steadily decreased until the 2010s. Image credit: S. Wickramasekara, 2020
Although the cost of developing a new drug showed signs of stabilizing in the last decade, the productivity of the pharma industry has continued to decline. The average return on investment for new drugs developed by several large biopharma firms was around 10% in 2010, but dropped to less than 2% by 2019. This dramatic decrease has led pharma companies to rethink the entire R&D process.
Karen Taylor, Director of Deloitte Centre for Health Solutions, commented: “The unrelenting decline in forecast peak sales and expanding regulatory requirements, combined with fewer barriers for new entrants, all demonstrate that substantive change is needed, especially in shortening R&D cycle times. This will require companies to develop core capabilities that are entirely different from today. This includes proficiency in accessing, analysing and interpreting the increasing number of large datasets.”
Biopharma’s productivity issues start with poor preclinical R&D
Preclinical R&D is a stage of research before clinical trials whereby information about drug safety is collected, typically in laboratory animals. The main goals are to determine a safe starting dose for testing in humans and to assess the potential toxicity of a product.
Over 40% of the overall cost of drug development is spent on preclinical R&D. That is unsurprising given that, on average, only one in every 5,000 compounds that enters preclinical testing becomes an approved drug. Furthermore, at least 50% of preclinical experiments are deemed unproductive. These studies take up scientists' valuable time and increase an organization's material spend.
It is clear that inefficiency and productivity challenges are common when designing preclinical R&D procedures, but unfortunately, they are often overlooked. Some researchers believe that a high rate of unproductive experiments is simply the norm at this exploratory stage of drug development – but this isn't always the case. Poor experimental design, unreliable protocols, lack of detailed reporting and reduced transparency in internal data are all avoidable sources of waste. One way to address these issues is to adopt emerging digital technologies.
Poor reagent selection
The most significant pitfall of preclinical R&D is the use of inappropriate reagents, which costs over $17 billion and accounts for over 36% of all unproductive experiments. Biopharma companies are thought to spend up to $35 million on biological reagents per asset, many of which prove unnecessary. And this does not account for human capital expenditure.
There are no regulatory boards that control how biological reagents are developed or used. Therefore, related data is often disorganized or incomplete, making manually mining the information extremely time consuming. In fact, most researchers resort to a 'shotgun' approach: several different reagents are purchased for a single purpose and subsequently have to be validated, taking vast amounts of time with no guarantee of productive results. The excess reagents also increase the spend on materials, on top of the chemicals used for validation testing.
Better preclinical decision-making
Scientists make better decisions with access to better information, which in turn leads to a higher rate of project success. There is a vast amount of information in the scientific space – millions of experimental data points are spread across various online literature. However, sifting through all this data manually and extracting the information relevant to a specific context can be extremely challenging. One solution is to harness computers, which can sort through data far faster than any human ever could.
In 2018, AstraZeneca embarked on a significant revision of its R&D strategy with the aim of improving productivity. A large part of the revised strategy was to focus decision-making on five technical determinants – the right target, right tissue, right safety, right patient and right commercial potential. It was hoped that this '5R framework' would optimize drug safety testing and the nomination of drugs to clinical trials by giving scientists access to the right information, so that they could validate or invalidate their hypotheses at inception, saving time and money.
Overall, the continued evolution and application of the 5R framework has begun to show a path towards more efficient drug discovery and development. Within five years, AstraZeneca reported a roughly five-fold improvement in the proportion of pipeline molecules that advanced from preclinical studies to the completion of Phase III trials – from 4% to 19%.
An overview of the 5R framework. Image credit: AstraZeneca
Why do half of clinical trials fail?
Clinical trials are a key part of the drug development lifecycle and are important for the causal estimation of efficacy and safety. They can test new medical treatments or devices, and are always conducted on humans. However, more than half of the drugs that enter clinical trials fail. Trials currently face a number of operational inefficiencies that can hinder the research taking place, leading to additional costs and extended timelines.
Reasons for poor productivity of clinical trials
- Unproductive trial team: All too often, project managers of clinical trials are placed into the role with little or no experience in running or costing large studies. This leads to weak project planning, implementation failures, lack of stakeholder engagement and unrealistic timelines. Team efficiency is also negatively impacted, with high staff turnover being one of the major contributors to dysfunctional teams.
- Inflexible protocols: Currently, clinical trials do not allow for much flexibility because strict procedures must be followed – changes can only be made once the study sponsors are well informed. This means that even when a trial might benefit from a slight shift in a process, making the change is often impossible or requires too much effort. The complexity of trial protocols can also cause issues: trying to explore too many questions in a single study is a common problem that leads to failure.
- Patient burden: Project management is often torn between hitting recruitment targets and abiding by the eligibility criteria for volunteers. This leads to huge problems for biopharma companies, with about 50% of clinical trials failing to initially recruit enough patients, and 85% of trials failing to retain enough participants. Only about 8% of cancer patients enrol in cancer drug trials, mainly because people do not live close to the research site, are not mobile enough to travel, or face scheduling constraints. All of these barriers mean that, for some patients, participating in clinical trials can be expensive, or even impossible. The lack of volunteers is likely to cause significant delays in a study and potentially generate biased or unusable data.
- Poor data quality: High-quality data is the key to any successful study closure. Therefore, the lack of data quality monitoring throughout an experiment is likely to have regrettable consequences. The core document in a clinical trial is the case report form, which includes all the clinical and non-clinical data relating to each patient. Around 50% of studies still rely on paper case report forms, despite electronic data capture having existed for nearly two decades. This oversight is thought to increase the duration of clinical development by up to 30%, alongside intensifying the amount of missing data and errors. Moreover, each facility has its own rigid methods of data collection, and this use of different approaches and lack of standardisation across the industry often results in unreliable data.
To read about how the digital transformation of biopharma companies, including aspects of clinical trials, could help to alleviate some of these issues, download the Digital Transformation in Biopharma report here:
Image credit: BioSpace