Dr. Eric Vail is Director of Molecular Pathology, Cedars-Sinai Medical Center,
Los Angeles, California
Can you please introduce yourself and your laboratory?
I'm the director of the Cedars-Sinai Medical Center Molecular Pathology Laboratory, which is a clinical molecular pathology laboratory. We perform both solid and hematological tumor molecular analysis and employ a number of techniques, including next-generation sequencing (NGS).
We receive approximately 2,200 solid tumor samples a year and about 1,000 to 1,200 heme samples a year. The biggest volume among solid tumor samples is lung cancer, around 500 a year; the rest is a mixture of colorectal, breast, pancreaticobiliary, primary CNS tumors, and others.
Now let’s go to fusions, particularly in solid tumors. Which are today clinically actionable?
In lung cancer it's ALK, RET, ROS1, and NTRK fusions, plus the MET exon 14 skipping mutation. The reason I call out MET is that it's significantly better detected from RNA.
Then of course RET as well as NTRK have pan-cancer indications, so if you ever see any of these fusions across the board, in any tumor type, you can treat them. Even ALK fusions can be found in a subset of gliomas, and there are case reports that those patients have responded to ALK-targeted therapy. I believe in the future we will see more of these pan-cancer biomarkers and therapy approvals. There are also FGFR fusions in cholangiocarcinoma and in urothelial carcinoma.
So, there is a wide array of fusions in human malignancy, and in our laboratory ~7-10% of solid tumor samples have a fusion variant, which corresponds to the data in pan-cancer databases and the literature.
So how do you perform fusion detection on routine samples?
We are using a 500-gene amplicon-based NGS panel as our first-line testing method, and in some cases we confirm with FISH or anchored multiplex PCR-based NGS.
Why have you chosen amplicon-based NGS for first-line testing? What are the advantages?
This technology is very robust. It requires an order of magnitude less material than other NGS methods, such as large hybrid capture-based NGS panels or anchored multiplex PCR-based NGS. We routinely run samples with 10 ng of nucleic acid (NA) input and have even had success with less than 5 ng.
This is very important for us, as real-life tumor samples are very often (>20% of the time) of very low quality or quantity. If we were not able to test those, we would exclude large numbers of samples, and ultimately patients, from the targeted therapy option.
There are other reasons as well – the complete and highly automated workflow.
The majority of the cost in any pathology laboratory, including a molecular pathology laboratory, is labor, and our workflow helps us utilize it more efficiently. This has enabled us to scale up our operations to the current level, and it can make it easier than ever for new laboratories without any experience to get started.
The disadvantage of amplicon-based NGS is that it cannot detect all fusions, including novel fusions and partners. Some other technologies can do that, but as I said, they require a large amount of material for testing, which is often not available. In an ideal world, if every sample were fresh frozen, of perfect quality, and large, it would not matter.
But in real life we have mostly FFPE tissue samples, many of which these methods would not be able to test at all. So it's a tradeoff decision in the end.
Can you elaborate a little on that?
You get extreme sensitivity and specificity with amplicon-based chemistry because you're targeting both partners and can amplify them from very little original content.
We easily call intergenic fusions at 100 or 200 reads, even in a low-cellularity sample; the bar is a bit higher for a few intragenic fusions where the noise is a little higher. And the specificity is very high as well; I do not remember a false-positive intergenic fusion in the years we've been using this technology.
One of the disadvantages of hybrid capture-based panels, especially DNA-only ones, is that you have to call intronic rearrangements and then try to predict whether the result is in frame. Even if their specificity is very high, for rare events, especially pan-tumor, the positive predictive value is not that great. There have been publications looking at NTRK and RET where the positive predictive value (PPV) is only around 60-70%, even though the specificity is 99.7%, because PPV scales inversely with the incidence in the tested population. Our PPV for fusion calls is 100% across 5,000-6,000 cases.
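The relationship between specificity, prevalence, and PPV can be checked with the standard Bayes arithmetic. The numbers below are illustrative assumptions (100% sensitivity, the 99.7% specificity quoted above, and a ~0.5% pan-tumor prevalence of NTRK-level rarity), not figures from any specific validation study:

```python
# How a 99.7%-specific test still yields only ~60-70% PPV for a rare fusion.
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: true positives over all positive calls."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

print(round(ppv(1.0, 0.997, 0.005), 2))  # rare target: 0.63 -- most positives are noise
print(round(ppv(1.0, 0.997, 0.05), 2))   # 10x more common target: 0.95 -- PPV recovers
```

The same 0.3% false-positive rate turns into a third of all positive calls when the true event occurs in only 1 in 200 patients, which is why rare pan-tumor fusions stress DNA-based rearrangement calling the hardest.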
But, given the design of amplicon-based assays, they cannot detect what is not targeted. Some of the sarcoma genes, which are very promiscuous, are not well covered, as well as fusions important in pediatric cancers. That's because it's hard to design a targeted panel for some of these very low-frequency tumors and rearrangements. We know that, so in such cases we would go to another technology, such as anchored multiplex PCR-based NGS or FISH.
Also, they [amplicon-based assays] cannot detect novel fusions. There is a rare but meaningful number of these, and as we expand our knowledge by sequencing more, there will be more of them. They are not covered by the targeted design, so we don't detect them.
How many do you think you miss?
It's very difficult to establish the true miss rate across all of our routine cases, as we would always have to run a different technology in parallel - dual sequencing - to establish that.
But we have all of our validation data, where we used other technologies, even whole-transcriptome sequencing. We also have cases that were sent out to different laboratories.
Also, we have been running ALK, ROS1, and RET FISH on more than 1,000 lung cancer samples in parallel, and we have discovered just one novel ROS1 fusion partner in all of those cases.
Adding all these data together, we believe the miss rate is less than 1% of all solid tumor cases, and as the targeted panels are very well designed for lung, it's even lower there, about 0.1% for lung cancer samples. Is it important for that patient? Yes, absolutely.
But if, to be able to detect these rare novel fusions, we used a different technology that requires high NA input, we might lose 20% of patients, or more. So that's the tradeoff. All the technologies have tradeoffs.
We need to take into account the day-to-day reality, and that is, in our case, that around 20-30% of our samples don't have the optimum input for hybrid capture-based technology, for example. Optimum input is usually somewhere around 100 ng, by the way; below roughly 40 ng a sample is QNS, which happens in about 1 in 8 samples. So, talking about fusions, we would miss about 1% of them just due to QNS. Of course, we normally get more RNA than DNA, because there's more RNA in each cell than there is DNA, so the DNA failure/miss rate is actually even higher than that.
Meanwhile, only around 5% of samples are suboptimal for amplicon-based assays, and almost none are true QNS; as discussed, we find true positives in samples with less than 5 ng. That is very meaningful, especially for EBUS biopsies or FNA samples in lung cancer - these are very limited, small samples.
That's the reason why some big reference labs have 15% QNS rates, whilst our QNS rate is 1.5-2% - again, an order of magnitude difference. So that is how we make our decision regarding this tradeoff.
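The tradeoff described above can be checked with back-of-the-envelope arithmetic. The figures below are the illustrative, approximate rates quoted in this interview, not exact assay specifications:

```python
# Rough tradeoff arithmetic: fusions missed purely because a sample is QNS.
fusion_prevalence = 0.08   # ~7-10% of solid tumor samples carry a fusion
qns_high_input = 1 / 8     # ~1 in 8 samples QNS for a ~100 ng input requirement
qns_amplicon = 0.02        # ~1.5-2% QNS rate with a ~10 ng input requirement

# Fraction of ALL samples whose fusion would go undetected due to QNS alone:
missed_high_input = qns_high_input * fusion_prevalence
missed_amplicon = qns_amplicon * fusion_prevalence

print(f"{missed_high_input:.1%}")  # 1.0% -- matches the ~1% figure quoted above
print(f"{missed_amplicon:.2%}")    # 0.16%
```

The point of the comparison: a technology that catches rare novel fusions but fails 1 in 8 samples on input alone can miss more actionable fusions overall than a low-input assay with a narrower target space.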
What about LOD?
LOD is a tough thing to measure for fusions. For SNVs or indels you have Variant Allele Frequency (VAF) to make a comparison: knowing the analytical sensitivity of your test and the tumor cellularity of the sample, you can adjust the sample quantity that needs to be loaded.
For the RNA, we reliably input 10 ng and have found it yields reproducible results for both the DNA and the RNA. Below that, performance declines linearly until you approach zero.
We have gotten results below 5 ng. Of course, the sensitivity at that level is not as good. But what's nice about this technology is that when the library fails, you don't get a sequence and potentially call a false-negative result. You just don't get any sequence at all, call it QNS, and have to ask for another sample. And that's a very comforting thing from a clinical perspective.
So how do you detect novel fusions?
Our panels have the fusion imbalance call (i.e., 5'/3' imbalance) for ALK, RET, the NTRKs, and FGFR2. It looks for overexpression of the fusion-partner portion of the gene and underexpression of the portion of the gene that's lost after the fusion event. It makes sense biologically as well as scientifically, and it works in the sense that the sensitivity is pretty high.
If we see a targeted ALK fusion, we can almost always see a call on the 5'/3' expression imbalance. However, the specificity is too poor for a clinical assay to use it as a standalone, so we always confirm with a secondary assay. We send out for confirmation with anchored multiplex PCR NGS, but that often takes 2-3 weeks to come back, so we run FISH as well; we call the case positive if FISH confirms, and when the send-out result arrives, we amend the report with the partner genes. That has been very reliable. But out of many dozens of imbalance calls, we've confirmed 3 or 4, so the positive predictive value is around 10-20% in the very small population where it's clinically relevant, which we already talked about.
So, in my opinion, it is a screening tool. If it's positive, do a confirmatory assay afterwards; don't report it on its own, as it might well be negative, and you don't want the clinician to run with that information.
But again, the sensitivity is relatively high, although it is impacted by tumor cellularity much more than the targeted fusion calls are. So, where the targeted fusion call is sensitive down to roughly 5% cellularity, for the expression imbalance it's probably around 30%. Which makes sense, because it's expression-based.
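The imbalance logic described above can be sketched in a few lines. This is a simplified illustration with invented read counts and an assumed threshold, not the panel's actual algorithm: a fusion puts the 3' kinase-domain exons under the partner gene's promoter, so their expression runs far above the 5' exons retained only in the wild-type allele.

```python
# Minimal sketch of a 5'/3' expression-imbalance screen (hypothetical counts/cutoff).
def imbalance_ratio(counts_5p, counts_3p):
    """Ratio of mean 3' amplicon counts to mean 5' amplicon counts.
    The +1 pseudocount guards against division by zero in dropout samples."""
    mean5 = sum(counts_5p) / len(counts_5p)
    mean3 = sum(counts_3p) / len(counts_3p)
    return (mean3 + 1) / (mean5 + 1)

THRESHOLD = 5.0  # assumed cutoff; a real assay sets this during validation

wild_type = imbalance_ratio([120, 95, 110], [130, 105, 98])    # balanced expression
fusion    = imbalance_ratio([40, 35, 50], [2100, 1850, 2400])  # strong 3' excess

print(wild_type < THRESHOLD)   # no call
print(fusion >= THRESHOLD)     # screen positive -> confirm with FISH / orthogonal NGS
```

Note that the ratio says nothing about the partner gene, which is why a positive screen still needs a confirmatory assay to identify the fusion and rule out expression noise.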
Also, we are thinking about starting a reflex program for true driver-negative NSCLC samples. We would reflex to another technology - anchored multiplex PCR-based NGS. There are billing codes to support an RNA-only testing workflow as well - reflex codes that you add on after doing the traditional sequencing.
What would be your advice to a lab that is choosing a technology to detect fusions?
It will of course depend on the volumes they are expecting, the technical staff they have, the needs of their community, etc. I am all for democratization of NGS technology, so if you are a 400-bed community hospital getting 3-4 lung cancers a week and a colorectal or two, you should absolutely adopt it and internalize this testing with something like a highly automated 50-gene panel, which will cover most actionable mutations. With this you can get better turnaround time and service for the oncologists.
If you're one of the big academic institutions that has been sending out to one of the big reference laboratories, and you've finally decided you want to bring this in house, you will probably look at bigger panels. You can do custom as well, but you need sufficient bioinformatics resources. For us, even though we have a bioinformatician, the kitted solutions work much better.
What do you think about centralization vs democratization of the genomic profiling?
Let's remember that 95% of patients in the US are seen in the community; only 5% are seen at big academic centers. Centralization of testing is, in my opinion, one of the root causes of the very low uptake and utilization of genomic testing.
Only 40-50% of patients are tested for all of the NCCN biomarkers, which is atrocious. And only 40-50% of those tested are getting the targeted therapy associated with their biomarkers. We need precision oncology experts in the local laboratories, also because this field is moving very fast. It's going to be challenging for oncologists, who also have to do all of the traditional oncology, not just precision medicine, to keep up with it all.
And so, the role of the laboratories will increasingly be to serve as expert consultants and provide assistance to the oncologists. My oncologists have my cell phone number; they call me often and want to discuss cases. That's a level of service the reference laboratories unfortunately cannot achieve.
Those are the reasons I passionately believe in the decentralization of genomic testing. And as, today, there's pretty much a solution for every type of laboratory, it should be possible.
Going back to fusions, to close: can you say one technology is better than another?
There are tradeoffs. Every lab director should look at input requirements, sensitivity, specificity, and the ability to detect novel fusions, all in the context of the available workflow automation, local technical expertise, and labor costs.
For my laboratory and our patient population, we have chosen a technology which we believe enables us to test more patients and detect more actionable variants, including fusions. The advantages we discussed outweigh the disadvantages which we are aware of and have secondary strategies in place to offset those.
Watch Eric Vail's talk at OncomineWorld 2023 to learn more about the Clinical Utility of Amplicon-Based Comprehensive Genomic Profiling (CGP).