Please note the transcript has been edited for brevity and clarity.
FLG: Hello, everyone, and welcome to our interview with our expert panellists from pharma. Today we will be discussing the strengths of single-cell and spatial omics in drug discovery. Before we begin our discussion, I’d like to invite our panellists to introduce themselves and their work.
Rebecca Mathew: Sure, my name is Becky Mathew. I am a Principal Scientist at Merck Research Labs. I lead a group in the Genome and Biomarker Sciences department focused on building stem cell-derived model systems for genetic target validation in neuroimmunology and immunology.
Alex Tamburino: Hi, I’m Alex Tamburino. I’m also in the Genome and Biomarker Sciences department at Merck. I’m based in West Point, Pennsylvania. I work with Becky and many others to conduct various single-cell and spatial transcriptomics experiments. Currently, I’m the director of the Spatial Transcriptomics and Single Cell Multi-omics Sequencing team in our Genome Sciences group.
Matthew Chamberlain: I’m Matthew Chamberlain, a senior scientist in the Immune Pathway Discovery Group at Janssen. Currently, I mostly specialise in analysing single-cell data in the early discovery space in immunology.
FLG: Super, thank you very much. We have seen a massive boom of interest in the single cell and spatial technologies and how they can be used to decipher disease biology. Can you explain some of the strengths of these technologies in drug discovery and also explain why pharma is investing so much time and effort into these kinds of studies?
Matthew Chamberlain: Great question. Disease is very complicated to study. There’s a huge amount of data, a huge number of patients available, and emerging technologies let us look at these patients with single-cell resolution. Historically, many of these diseases have been defined by the tissues in which they manifest. If you have a gut issue, you’d go to a GI; if you have a skin issue, you’d go to a dermatologist, etc. What’s becoming increasingly clear is that these diseases have many more common pathways and molecular signatures than we had appreciated before.
Johnson & Johnson made a drug called Remicade, a TNF inhibitor, over two decades ago. It transformed treatment for a lot of different autoimmune conditions. Single-cell data are central to that idea, because now you can deconstruct patients with different diseases in different tissues at the single-cell level. We can also study common features across patients and diseases, and potentially launch drugs in many indications at once, by pooling data together into these large single-cell analyses. Single-cell data are central to that new pathway-centric way of thinking about disease.
Rebecca Mathew: To capture the significance of this type of technology in the neuroimmunology space, and to follow on from what Matthew has just said – there have been tremendous discoveries in characterising the sub-states of these cells as they interact and associate with both acute and chronic stressors. Single-cell sequencing technologies have enabled the discovery of disease-associated microglia populations, which has transformed our understanding of how this cell type responds to the pathologies that exist in Alzheimer’s disease, for example.
Alex Tamburino: I agree single cell is very powerful. It has enabled us to achieve unbiased whole transcriptomic profiling from single cells. It’s unlocked a lot of biology in low abundance or rare cell types. The major drawback of single-cell is the need to dissociate tissues into individual cells before profiling, divorcing the cells from each other and the disease pathology. The field, including ourselves, is investing into spatial transcriptomics because we can profile cellular microenvironments and individual cells while retaining spatial information. This enables us to understand how cellular neighbourhoods and disease pathology impact cell abundances, cell states and gene expression.
FLG: Super, thank you. Spatial transcriptomics is definitely taking hold of people’s interest. There is also a lot of investment in terms of time and money into these technologies. Can you explain a little more about how spatial transcriptomics plays a role in drug discovery and development?
Alex Tamburino: I’d be happy to explain a little. Spatial transcriptomics has the potential to impact all stages of the drug discovery pipeline. Our group here at Merck utilises innovative genetic screening, multi-omics, single-cell approaches and spatial transcriptomics to impact preclinical and clinical development.
The ability to perform whole-transcriptome and highly multiplexed profiling from FFPE (formalin-fixed paraffin-embedded) tissues has the potential to transform our ability to identify and validate targets and biomarkers from human donor tissues with disease pathologies. We can now obtain these tissues from other locations without having to perform the profiling and the actual experiments at the site of collection. That’s allowing us to use these human donor tissues to identify targets and biomarkers.
Rebecca Mathew: These types of technologies also have a tremendous impact on our ability to understand the interplay between different cell types in intact tissue. In sophisticated tissues like the brain, where there’s a great degree of cellular heterogeneity, we hypothesise that different cell types may interact and contribute to disease progression or pathology in that microenvironment. Spatial transcriptomics allows us to see that and test our hypotheses rather than speculate.
Matthew Chamberlain: If cells couldn’t communicate with each other inside tissue, you wouldn’t have a lot of diseases, but you’d have a lot of other problems. With many of these neuroimmunological and immunological diseases, we’re moving past the idea that there’s one gene that can causally drive one disease. As a result, you have to think about diseases as systems with components that interact. Spatial data are very helpful for identifying immune cells that sit near each other, suggesting that they interact with other cells. Some of the earlier data didn’t quite have the resolution we needed for that analysis. The more recent spatial data seem to have much better resolution. We’re getting closer.
FLG: Thank you so much. This is a very data-rich field of research. We’re all aware of the big data problems, but in terms of implementing these technologies in a pharmaceutical setting, can you give us a flavour of some of the challenges that pharma companies have with accessing and analysing tissue samples, data, etc.?
Alex Tamburino: The biggest challenge for everyone, not just pharma companies but everyone in the field, is the complexity of these experiments. Spatial transcriptomics experiments and analyses are built on many years of advances in molecular biology, next-generation sequencing, high-dimensional data analysis, microscopy and image analysis. Those are all fields unto themselves. Spatial transcriptomics experiments require expertise in all these domains, and researchers must be able to extract all the relevant information from these multi-omics, multi-level analyses. Having a team of experts who have fluency across the different domains, deep expertise in their own, and the ability to work together is essential to implement these technologies and utilise them to their full potential.
Matthew Chamberlain: There are many technical and computational challenges associated with a new data type and new technology – it’s all very new. Computational workflows for bulk sequencing and microarrays have been around for 10-15, even 20 years. In single-cell, however, best practices are still being established. Alex made a great point that sometimes with single-cell, it’s unclear how to organise teams around it. Can you embed one lead computational scientist in a room with a biologist, and will that be sufficient? Or is that kind of dangerous? Do you want a centralised group of computational scientists who can work together? Organisationally, I’ve seen it done in different ways, and it’s fun to hear different opinions about that.
Rebecca Mathew: I would also add that there are challenges in for-profit research with the acquisition of high-quality human tissue samples for these types of studies. That’s something that is unique to the private sector compared to academic applications of these technologies. There is also such a large amount of data that comes out of these individual experiments that reaching a stopping point – where you stop refining the analysis pipelines and are ready to interpret the data – can be a challenge as well. There’s often a desire to do both. With these large datasets, you want to be able to come back to interpretation; especially on the biologist’s side, that’s what we’re eager to see from these experiments.
FLG: I mean, it is such a fast-emerging field, and it is almost like a minefield to keep up to date with everything that’s going on. What developments in the single-cell and spatial fields do you think would transform the way that you’re currently doing your research?
Rebecca Mathew: My ideal scenario for what I would love to see, if the development in technology gets there, would be multi-omics integrated capabilities, where we can start to couple transcriptomics and proteomics and epigenetic signatures, to get a more holistic picture of what’s happening in an intact tissue at the single cell level.
Alex Tamburino: I agree. We’re excited about the imaging-based spatial transcriptomics technologies, which enable us to get to that single-cell resolution as a complement to whole-transcriptome approaches. The biggest challenge with imaging-based single-cell technologies is defining a panel of 500 to 1,000 genes ahead of time. That’s a bit of a trade-off, as we need to know what we’re looking for: once you’ve designed that panel, you’re set in stone, so to speak, for that experiment. We’re hoping to see additional technological advances from expert labs and companies so we can get two to three times the size of those gene panels, which would be transformative. We could look at a lot more genes and a lot more cell types in the same experiment.
The second transformation would be increasing the experimental throughput. These experiments take images across many fields of view for each sample and go through many rounds of chemistry and imaging for each sample. That takes time. I’m hoping the leading companies in the field will be able to reduce the cycle times and increase the throughput, like we’ve seen for next-generation sequencing technologies over the last decade. Trying to increase the number of genes on each panel and increasing the throughput will be transformative for users like us.
Matthew Chamberlain: I think I could probably talk about this for hours, but I’ll just hit the highlights. Increasing the throughput will decrease the cost of sequencing dramatically, which would be great. Everyone would love to do a Framingham Heart Study of single-cell sequencing, where we single-cell sequence a whole village for 30 years. Some kind of total-seq package, where people get sequenced across multiple organ systems like we’re seeing out of some labs with different atlas projects, would be very helpful.
The dream world would be multi-omics with deeper sequencing. But to be a little more grounded about where we’re at now: there’s still a huge amount more bulk sequencing data than single-cell data, and we need to move more in the single-cell direction both in research and in the clinic. Often, we’re still dealing with bulk data in clinical studies, so anything we can do to get some validation that single-cell studies can improve the odds of clinical success, or help us make clinically actionable decisions, would be transformative.
FLG: Speaking about clinical studies, can you speak at all about the kind of challenges of clinical implementation of these technologies?
Alex Tamburino: Generally speaking, clinical implementation of single-cell is challenging, as you are collecting samples at one location and then profiling them at another. Maintaining sample quality and preserving samples for longer periods of time and across sites has been challenging. There have been a lot of advances to improve that, but the result is always going to be compared to the quality you can get when profiling in the same lab.
There have been some advances in that space from various companies, from either a technology or a chemistry standpoint: enabling fixed RNA profiling and higher-quality sample preservation. We’ll see how those technologies transform the single-cell field. From a spatial transcriptomics standpoint, a lot of companies have enabled users to profile FFPE tissue samples. It’s a lot easier to imagine how that will be integrated into clinical workflows, where we’re already collecting, fixing and embedding tissues. Having access to FFPE tissues for very important studies enables the transfer of those tissues to a site where we have the expert teams to execute profiling experiments, which is transformative. I’m eager to see how the field uses that.
Rebecca Mathew: I would add that the FFPE translatability of these technologies, and being able to implement fixed RNA-based profiling methods, will make clinical implementation and characterisation much more straightforward. Asking physicians at the time of resection to take a tissue and bank it in the embedding materials that earlier generations of these technologies, spatial technologies in particular, traditionally required has been an issue in terms of the consistency and uniformity of sample banking.
Matthew Chamberlain: Looking back 20 years helps us gauge where single-cell is going. Illumina and other companies at the time sold devices, and each individual lab was going to buy one of these devices and then purchase reagents. Over time, a lot of people started doing it, and whole institutions set up sequencing cores; there are lots of different sequencing cores in the Cambridge area, and you can drop off samples at many different sites in Cambridge or Boston. Right now, the business model of a lot of single-cell companies is to sell each individual company these individual devices, each with its own reagents. Over time, though, I think people are going to develop single-cell sequencing cores, and a move to a more centralised model should make for a more cost-friendly single-cell world.
FLG: Thank you. Everybody here has a different research area and what they’re interested in using these technologies for. What recent developments have you seen that have personally interested you?
Rebecca Mathew: I would highlight the increased resolution in the spatial transcriptomics space as a significant achievement – the ability to apply these technologies in an unbiased manner at single-cell resolution, similar to what Alex was highlighting earlier. Also exciting is coupling that to measurements like IF (immunofluorescence), so we can look at protein expression, and the coupling of multi-omics technologies as well. That’s an exciting area of development.
Alex Tamburino: I would also add that, going back five-plus years, we saw a lot of these extremely high-potential technologies developed in leading academic labs. Now, we’re getting to a point where the commercial providers have taken these powerful technologies, packaged them into platforms, and made them very robust for multiple end-users across the world. From a company’s perspective, this is very valuable, as we can now have an instrument and the kits on-site. We can leverage established protocols from labs and companies, so we can focus on the implementation and execution of these technologies in order to answer highly complex questions about disease biology, uncover novel targets, and validate targets and biomarkers. Just being at this point in time in the development of the field is very exciting for us.
Matthew Chamberlain: Great question. What excites me the most is the discoveries that are translating right into the clinic. We’re seeing a lot of impact in oncology, where you can compare and contrast, and subtype a patient by a lot of different things, like mutations (e.g., HER2 positive) or different tumour subtypes. There are many different ways to measure a tumour and subtype a patient. It’s remarkable. Then you contrast that with other areas like MS (multiple sclerosis), Alzheimer’s disease, rheumatoid arthritis or lupus, where we’ve tried to molecularly stratify patients so that we can treat them with more precise medicines. Up until this point, we didn’t have the tools to do it.
Now what we see in single-cell studies are very large patient cohorts, around 100 or so patients, where we can see which ones respond to treatments and which ones don’t, and look for molecular reasons that those patients may or may not respond. We’re starting to get into the precision medicine category, which could enable broad and transformative approvals in indications where we’ve previously not been able to succeed because we were treating a heterogeneous group of patients without knowing it.
FLG: Amazing. Thank you. One question is about the increased adoption of these technologies since you’ve been doing this kind of research. What hints, tips or advice would you give to somebody who is looking to start using these technologies in their research?
Rebecca Mathew: I would highlight, from the biologist’s perspective, that there’s a significant learning curve in terms of planning your sample cohorts, performing a careful assessment of tissue quality, and tissue banking procedures – especially when you’re applying this to in vivo models, where you’re attempting to do longitudinal studies. Any issues that you may have with tissue preservation or banking could impact your entire study. I would also say that a significant level of histology expertise is required for implementing these technologies. In neuroscience, for example, being intimately familiar with brain architecture and reproducibly hitting the same sub-region of the brain across all of your samples for a given study can be quite a challenge.
Alex Tamburino: I’ll just piggyback off that and say that these experiments are very complex; there’s a lot of power and a lot of potential, but the complexity should not be overlooked. When you’re first getting started, there are so many different factors, like the ones Becky mentioned, and others that we’ve learned along the way, that impact your ability to answer the questions you set out to answer. My biggest advice to anybody getting started in this field is to work with experts whenever possible. Start small: crawl, walk, run. You’ll learn a lot as you get going, and you’ll make every experiment thereafter better and better.
Matthew Chamberlain: Computationally, it makes a lot of sense to take a look at the field and align with some of the major software packages that are out there. There’s a wonderful open-source community of thousands of developers contributing to computational pipelines. If you just adopt the pipelines that people are writing right now, it’s like hiring hundreds of developers for free. It’s such a beautiful example of open-source science. I wouldn’t get intimidated: there’s a strong and wonderful community of scientists out there trying to make this seemingly complex technology and data very accessible. Dive in.
FLG: As you’re all speaking at the Tri-Omics Summit next month, I thought I’d ask what do you hope to achieve by meeting with other people in this space?
Rebecca Mathew: I’d say that I expect this to be a fantastic opportunity to broaden networks, make new connections and learn about emerging technologies at a very early stage – ones we may not see in the published literature for years. It’s a nice opportunity to learn about emerging technology in advance and to have a chance to begin collaborating with individuals much earlier than we would if we relied purely on what we see in publications.
Alex Tamburino: I agree with Becky. I’m excited to be networking and meeting with a lot of other like-minded and diverse individuals in the field, who are thinking about spatial for different reasons and coming from different labs or companies. There are opportunities to meet people and maybe start some collaborations, or just learn about the latest and greatest technologies from different providers.
Matthew Chamberlain: I’m excited about the Tri-Omics Summit – it’s probably the meeting I’m most excited about this year. I’ve gone to a few different meetings, and they’re great, but there’s a big cost barrier for academic labs to get there. Coming over to “new” Cambridge, it’s going to be a fun meeting – we have an amazing talent pool of academics and scientists here in Cambridge, and I know lots of them. I’m hoping to see lots of familiar faces, work with people and meet some new friends. I especially like the no-cost-barrier aspect of it. I feel like everyone who’s there is in it for the right reasons, so I am very excited.