
Nonprofit's Efforts With Payors to Assess NGS Labs' Variant Interpretations Spark Debate


This is the second story in a two-part series about the Center for Genomic Interpretation's efforts to help payors better understand the quality of next-generation sequencing tests. Read Part 1 here.

NEW YORK – Labs will push back, industry insiders suspect, if more insurers like Highmark start asking them to meet credentialing requirements for their next-generation sequencing testing services that go beyond what they have to do under current regulations.

Highmark sent letters in August to several in-network labs asking for validation data on their germline and somatic NGS cancer tests beyond what is required for certification under the Clinical Laboratory Improvement Amendments (CLIA) or accreditation through the College of American Pathologists (CAP). Since then, there has already been some "light pushback," acknowledged Matt Fickie, senior medical director at the Blue Cross Blue Shield-affiliated insurer.

In the letter, Highmark suggested labs conduct this additional validation through a third-party group and recommended they use a nonprofit called the Center for Genomic Interpretation, though Fickie has said the insurer is open to other ideas from labs. CGI conducts in silico-based assessments to determine a lab's ability to correctly detect, name, and interpret variants within NGS testing services. The organization has been running pilots with payors for the past year, according to CGI CEO and Cofounder Julie Eggington, to provide insight into areas of test quality not addressed by the federal regulatory standards under CLIA, which all US clinical labs must abide by.

"Regulations from 1988 aren't helping anybody," Eggington said of the amendments Congress made to lab testing regulations more than 30 years ago. In order to be certified under CLIA, which is administered by the Centers for Medicare & Medicaid Services, high-complexity labs like those that conduct NGS tests must pass a survey of lab systems and processes every two years conducted by a CMS-approved state or private organization, such as CAP; personnel overseeing tests must meet certain qualifications; and labs must participate in proficiency testing (PT) programs to ensure test accuracy.

The aim of PT, however, is to determine whether genetic tests are analytically valid — that they can detect variants present in a patient's sample. But PT does not establish whether tests are clinically valid — that they can, for example, correctly interpret that a detected variant increases a patient's risk for cancer or their chances of responding to a drug. However, Highmark is interested in both the analytical and clinical validity of the NGS cancer tests offered by its in-network labs and is recommending they validate their tests through CGI's in silico-based quality assessment program, which evaluates labs' ability to detect and interpret variants (see Part 1 for more details on CGI's quality assessment method).

Despite some objections to the new credentialing requirement, Highmark is sticking with it and plans to expand the requirement to include different types of NGS tests. Fickie predicted that other insurers will follow Highmark's example with similar requirements.

Such a scenario may not sit well with genetic testing labs. Invitae, for example, met with Highmark after receiving its letter and discouraged the insurer from contributing to an environment where labs have to fulfill a different set of expectations for each payor. "We understand that payors and providers want assurance of quality, but 'quality' assessment can be subjective, particularly when one gets beyond analytical validity," said Invitae Chief Medical Officer Robert Nussbaum. "We expressed our concerns about the proliferation of gatekeepers, each with their own definition of quality, which will multiply the effort we have to make, uncompensated, to satisfy the requirements of one payor after another."

Nussbaum is aware of at least one other large commercial insurer contemplating an NGS quality assessment program with requirements different from Highmark's, and if more payors start doing the same, compliance could become expensive and difficult for labs.

UnitedHealthcare said it is working on assessing genetic test quality with help from Optum Genomics and is "in discussion with" CGI. Optum Genomics (under UnitedHealth Group's Optum subsidiary) reviews analytical and clinical validity data on molecular diagnostics.

"The cost of satisfying all of these self-nominated custodians of quality will result in increased cost to the healthcare system, and ultimately, to patients," Nussbaum said.

Since Highmark seems particularly keen on getting a read on the quality of labs' variant interpretations, Nussbaum highlighted that Invitae has deposited nearly a million variants into ClinVar, a National Institutes of Health-funded public archive of genetic variants and their relationships to disease. Invitae regularly collaborates with other labs to try to resolve variant discrepancies in the public database and has built a robust variant classification operation by hiring experts in variant analysis and investing in bioinformatics capabilities. Additionally, Nussbaum said Invitae is one of the few US labs to meet an international ISO standard.

"I do not believe that a lab that meets the quality standards we do and shares our interpretations with ClinVar will benefit particularly from any one guardian of quality," Nussbaum said, adding that during their discussions, Highmark seemed to understand the firm's position and made clear that the new requirements aren't aimed at labs like Invitae with extensive quality management systems in place. "They see this as a way to weed out many really bad actors."

Precision Oncology News asked around a dozen prominent labs marketing NGS somatic or germline cancer panels if they received Highmark's letter. Some didn't respond. Representatives from labs that did respond either said they didn't get Highmark's letter or declined to comment.

Amid regulatory gaps

Although NGS tests have quickly proliferated on the market in recent years, the regulatory system hasn't evolved in step to ensure labs are providing accurate results to patients, in Eggington's view. "I would love the FDA to fully regulate this industry," she said. "It's very much needed."

Most genetic tests on the market are developed and performed within a single lab, and currently overseen under CLIA, as opposed to distributed test kits that can be performed in multiple labs and regulated by the US Food and Drug Administration as devices. Seeing the growth in the number and complexity of genetic tests, the FDA has tried many times to lift its longstanding policy of enforcement discretion over lab-developed tests (LDTs), only to be thwarted by the lab industry and pathologist groups, who argue that the agency lacks statutory authority to regulate such tests.

A bill called the VALID Act would squelch this protracted disagreement by giving the FDA statutory authority over all diagnostics, including LDTs, but lawmakers recently decided against integrating it into another must-pass bill. Absent a legislative solution, the FDA has attempted to prod labs to submit their companion diagnostics — tests required to determine patients' eligibility for specific drugs — for review, and while a few labs, like Foundation Medicine, have garnered FDA approval, there are plenty of LDTs on the market without it that are being used to make treatment decisions.

In this uncertain regulatory environment, Fickie sees an opportunity for insurers to act as "enforcers" of NGS test quality. But by asking to see the quality of labs' variant interpretations, Highmark is stepping into an unsettled area of debate in the genetics field about whether labs' ability to determine the clinical significance of variants — which involves both bioinformatics software and the expertise of Ph.D. or M.D. scientists — can presently be regulated by the FDA or under CLIA.

Many genetics experts insist that the involvement of M.D. or Ph.D. scientists makes variant classification akin to the practice of medicine, over which the FDA and CLIA hold no sway. "Just like they don't tell a surgeon whether or not they've taken out a gall bladder correctly, they don't tell a physician whether or not they've done a [variant] interpretation correctly," said Neal Lindeman, a pathologist at Mass General Brigham and vice chair of the CAP molecular oncology committee. 

Alberto Gutierrez, former director of the FDA's Office of In Vitro Diagnostics and Radiological Health, challenged the claim that variant classification is the practice of medicine. "Although CLIA is supposed to ensure the qualifications of professionals employed by the lab, as far as I know, it doesn't presently ensure that lab directors have the requisite bioinformatics expertise," he said. "It's hard to make the argument then that their variant interpretations constitute the practice of medicine."

The CLIA Advisory Committee (CLIAC), which since 1992 has advised CMS on how CLIA regulations may need to be revised to ensure the quality of tests, has recognized this deficiency and is creating a workgroup that will propose qualifications for lab personnel performing NGS bioinformatics data analysis and interpretation.

When it comes to the bioinformatics portion, or dry lab component of NGS testing services, the FDA seems to think it has oversight authority if the software is used in therapy selection. Last week, in a final guidance on clinical decision support (CDS) software, the agency provided examples of device software functions that it intends to oversee. One such example is software that contains electronic files of patients' variants identified via an NGS analyzer and provides recommendations for FDA-approved treatment options. 

To Jennifer Wagner, a lawyer and expert in genetic testing regulations, the agency's actions run counter to Congress' intent in the 21st Century Cures Act to exclude certain CDS from the FDA's device regulation, for example, if healthcare providers can independently review the recommendations presented by the software. Given the examples in the guidance, including the one related to NGS bioinformatics software, it "really seems like the FDA is trying to claw back every kind of software into its purview," said Wagner, who expressed disappointment that the agency seems to have finalized its thinking while largely ignoring stakeholders' comments on the 2019 draft guidance.

Furthermore, with the NGS bioinformatics software example, the FDA appears to be asserting it has oversight over the dry lab components of NGS testing services, even while its statutory authority over LDTs is in question and the wet lab components would still be regulated under CLIA. "It definitely feels like the FDA is making a big grab," said Wagner. She expects labs will object but isn't sure if this will have much impact on the FDA's stance, since the agency has chosen to regulate via guidance, which isn't legally binding like a regulation or rule but can still be wielded by the agency to prod labs to align with its position. When the FDA wanted to bring lab-developed companion tests under its oversight, for example, it issued a guidance.

Gutierrez, who now advises labs on regulatory matters at NDA Partners, noted that although the FDA presently exercises enforcement discretion over lab tests, for certain categories of tests that it does review, such as companion diagnostics, the agency looks at the lab's entire testing process, including interpretations of representative variant types. The aim of FDA review is to provide assurance that the lab's entire NGS testing pipeline is "likely to provide a truthful result," Gutierrez said, adding that NGS quality assessment programs like the one offered by CGI can help provide such insights.

The FDA has also advanced other programs to try to improve the accuracy of NGS tests. For example, while Gutierrez was leading OIVD, the agency hosted in silico variant detection challenges within its PrecisionFDA program to help labs improve the analytical validity of NGS tests. The agency has recognized a subset of expertly curated germline variants in ClinVar and a portion of somatic variants in Memorial Sloan Kettering's OncoKB database, so labs can use them to demonstrate the clinical validity of their tests.

But as long as the agency practices enforcement discretion over LDTs, all this remains voluntary when it comes to most marketed genetic tests. And while CLIA stipulates that all clinical labs must demonstrate their tests' analytical accuracy and fix any test quality issues identified by inspectors or in proficiency testing, regulatory experts said these frameworks are intended to spur labs to self-correct, and the problems are rarely publicized.

Horses and zebras

Regardless of where the FDA lands in the LDT regulation debate, Eggington's goal within CGI is to use a PT-like framework to help insurers assess whether the NGS tests they are covering can detect and interpret the types of variants labs say they can. "Even if FDA … doesn't step up to the plate, if we can just work with the payors and stop the flow of money to inaccurate, clinically invalid tests, they'll have no choice but to survive in this industry by providing accurate and clinically valid tests," she said.

Although under CLIA regulations moderate- and high-complexity labs must demonstrate the analytical accuracy of their tests through external PT programs, these assessments are too easy, in Eggington's view, and don't test labs on rare or hard-to-detect variants.

CAP is the biggest provider of PT for high-complexity labs; 72 percent of the organization's operating revenue last year was attributable to its PT offerings and pathologist quality registry. In its PT surveys, administered several times a year, CAP asks labs to test blinded samples taken from patients or engineered from cell lines, or even in silico samples. Labs' findings are compared to a truth set or to the mean value from all the labs participating in the challenge.
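
To make that grading concept concrete, here is a minimal Python sketch of how a lab's reported findings might be compared either to a truth set or to the participant mean; the labs, numbers, tolerance, and function name are hypothetical illustrations, not CAP's actual grading code.

```python
from statistics import mean

def grade_vaf_challenge(reported_vafs, truth_vaf=None, tolerance=0.05):
    # Grade labs' reported variant allele frequencies (VAFs) for one PT sample.
    # If a truth value exists (engineered or in silico sample), grade against it;
    # otherwise grade against the mean of all participating labs' values.
    reference = truth_vaf if truth_vaf is not None else mean(reported_vafs.values())
    return {lab: abs(vaf - reference) <= tolerance for lab, vaf in reported_vafs.items()}

# Hypothetical example: four labs report VAFs for the same blinded sample.
vafs = {"Lab A": 0.24, "Lab B": 0.26, "Lab C": 0.25, "Lab D": 0.10}
print(grade_vaf_challenge(vafs))                   # graded against the participant mean
print(grade_vaf_challenge(vafs, truth_vaf=0.25))   # graded against a truth set
```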

CAP offers PT programs for the most common NGS applications in cancer, including tests for solid tumors and hematologic malignancies, and cell-free DNA panels. There aren't CAP PT programs for more infrequently performed tests, such as epigenomics or whole-exome sequencing panels, according to Lindeman, and labs with tests that have "niche" applications may also not have a PT survey.

Under current regulations, NGS labs can use a commercial PT product to meet CLIA requirements if one is available, but they can also demonstrate analytical accuracy via an alternative method, for example, by blinding and testing split samples available in-house or by swapping samples with other labs performing similar tests. 

However, CLIAC's NGS workgroup recognized in a 2019 report that commercial PT programs and even alternative methods, like sample splitting, may not be testing the limits of marketed NGS tests. The workgroup, which includes members working in commercial, academic, and public health labs, acknowledged in the report that "commercial PT programs are a little bit too easy," and should be encouraged "to be a little edgier."

"Every laboratory and laboratory director is scared of failing a PT, which means that laboratories want to pick the easy cases for PT," the workgroup wrote. "There is no economic incentive for trying to look at more challenging cases. In fact, there is an economic disincentive as well as the fear of failing a PT."

"This is basically the industry confessing, 'Yup, we choose the easiest cases,'" for PT, said Eggington.

Lindeman disputed this criticism and said that within CAP's PT programs labs are tested on their ability to detect a "spectrum of variants," rare and common.

CMS is responsible for regulating more than 300,000 labs under CLIA, and therefore, the aim of PT is to ensure broad participation, added Tina Lockwood, associate professor of laboratory medicine and pathology at the University of Washington, Seattle, and a member of CAP's molecular oncology committee. "What laboratories need to have available to them are challenges that are reflecting the scope of practice. You can't only have the zebras," she said, referring to rare, edge cases. "You need to know the laboratory is actually doing things that come up a lot more often, too. It is always a balance."

Lockwood asserted that PT programs are expanding in step with NGS use in healthcare. CAP "is very active in the molecular oncology space, and is growing, developing, and adding more [PT products], because oncology is moving so quickly," she said.

Moreover, in response to recommendations from CLIAC, the US Centers for Disease Control and Prevention's Genetic Testing Reference Material Coordination Program (GeT-RM) is conducting a pilot project to show the feasibility of using in silico reference materials to supplement DNA samples in developing and validating NGS tests. Participating labs in the project will sequence a publicly available Genome in a Bottle DNA sample and generate a sequence file. Into that file, GeT-RM will add clinically relevant variants associated with cardiomyopathy and hereditary cancer risk drawn from a curated list of variants developed in collaboration with experts at ClinGen, an NIH-funded effort to define the genetic variants used in precision medicine. The mutagenized files will then be returned to the original labs for analysis using their bioinformatics pipeline.

It's important to establish the utility of in silico reference materials in NGS test development and validation, said Lisa Kalman, director of the GeT-RM program, "because it is not possible to find DNA samples with many of the important variants for tests that cover many genes, and it is costly to run large numbers of DNA samples during assay development and validation." 
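
As a rough illustration of the in silico approach, the sketch below shows, in Python, how a known variant might be spiked into reads from a real sequencing run before the mutagenized file is handed back to a lab's bioinformatics pipeline; the function, reads, and coordinates are hypothetical and are not GeT-RM's actual tooling.

```python
import random

def spike_in_variant(reads, read_starts, pos, ref, alt, vaf=0.5):
    # Introduce a single-nucleotide variant into a fraction of reads from a real
    # sequencing run, producing an in silico (mutagenized) file for pipeline testing.
    edited = []
    for read, start in zip(reads, read_starts):
        offset = pos - start
        # Edit only reads that cover the position, carry the reference base,
        # and are randomly selected to simulate the desired variant allele frequency.
        if 0 <= offset < len(read) and read[offset] == ref and random.random() < vaf:
            read = read[:offset] + alt + read[offset + 1:]
        edited.append(read)
    return edited

# Hypothetical example: spike a G>T change at position 1002 into ~50 percent of overlapping reads.
reads = ["ACGTACGTAC", "CGTACGTACG", "GTACGTACGT"]
starts = [1000, 1001, 1002]
print(spike_in_variant(reads, starts, pos=1002, ref="G", alt="T"))
```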

A divisive pilot program

The structure of the GeT-RM pilot is not unlike CGI's in silico-based quality assessment program (see more details in Part 1), except GeT-RM is focusing on analytical validity while CGI is also assessing clinical validity. CGI's push to use in silico methods to assess the variant interpretation quality of NGS tests seems particularly ambitious, considering the field is not of one mind when it comes to designing PT frameworks for gauging how well labs detect variants.

For example, when an expert workgroup convened by consulting company Tapestry Networks used in silico-based methods to compare the performance of NGS LDTs against Illumina's Praxis Extended RAS Panel, the outcome was very divisive within the lab community. Researchers led by senior author John Pfeifer, a professor of pathology and immunology at Washington University School of Medicine in St. Louis, used mutagenized variant files created by his company P&V Licensing LLC (which CGI also works with), as well as engineered wet samples, to evaluate labs' abilities to detect the 56 KRAS and NRAS variants included in the Praxis test. That FDA-approved test, which is used to determine the eligibility of metastatic colorectal cancer patients for Amgen's anti-EGFR drug Vectibix (panitumumab), has a published limit of detection (LOD) of 5 percent variant allele frequency (VAF).

Only 10 out of 19 labs in the pilot detected the variants with an accuracy similar to the Praxis test, and the authors noted a higher rate of false negatives for some single-nucleotide variants and harder-to-detect multi-nucleotide variants. "Variable accuracy in detection of genetic variants among some LDTs may identify different patient populations for targeted therapy," Pfeifer and coauthors concluded in the American Journal of Clinical Pathology.

Highmark's Fickie read the paper and found it "technically astute." Even though the analysis is challenging to parse even for someone like him with a clinical genetics background, "it's a strong indictment" of the quality of some marketed LDTs used for therapy selection, he said.

Others have found the paper more objectionable. CAP, which was initially involved in implementing the pilot, withdrew when it saw the way Pfeifer's group interpreted the data — "a pretty dramatic step," said Lindeman, who felt the pilot set labs up to fail, given the standard performance and known drawbacks of NGS tests at the time.

At the time of the challenge, between December 2018 and March 2019, NGS bioinformatics pipelines weren't optimized to identify multi-nucleotide variants, Lindeman said. Moreover, when he and other colleagues sifted through cancer genomics datasets in cBioPortal and the American Association for Cancer Research's Project GENIE earlier this year, they didn't find any multi-nucleotide KRAS or NRAS variants across 120,000 cancer cases. Therefore, a lab's inability to detect such variants would be unlikely to impact patient care, Lindeman said. He further pointed out that many labs' NGS tests at the time had an LOD of 10 percent VAF.

Pfeifer and colleagues, meanwhile, noted in their paper that "participating laboratories [in the pilot] self-reported VAFs from 2 percent to 5 percent for single nucleotide variants and 3 percent to 10 percent for insertions/deletions as the LOD values for their LDT NGS assays." As such, the authors said they judged labs based on their claimed LOD and not for failing to detect variants present in samples below that threshold. Moreover, the FDA-approved package insert for the Praxis CDx features a list of 56 KRAS and NRAS variants, of which 11, or 20 percent, are multi-nucleotide variants.
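
A back-of-the-envelope Python sketch illustrates what these LOD claims mean in practice and how the authors' grading logic applies them; the sequencing depth and thresholds below are illustrative assumptions, not figures from the study.

```python
def expected_variant_reads(depth, vaf):
    # Expected number of variant-supporting reads at a given sequencing depth and VAF.
    return depth * vaf

def within_claimed_lod(sample_vaf, claimed_lod):
    # Per the pilot's grading logic, a lab is judged only on variants present at or
    # above the limit of detection (LOD) the lab itself claims for its assay.
    return sample_vaf >= claimed_lod

# At 500x coverage, a 5 percent VAF variant is supported by ~25 reads, versus ~50 at 10 percent.
print(expected_variant_reads(500, 0.05))   # 25.0
print(expected_variant_reads(500, 0.10))   # 50.0

# A lab claiming a 5 percent LOD is accountable for a 5 percent VAF variant;
# a lab claiming a 10 percent LOD is not.
print(within_claimed_lod(0.05, claimed_lod=0.05))   # True
print(within_claimed_lod(0.05, claimed_lod=0.10))   # False
```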

These are variants Illumina claims its test can detect, and the FDA has approved the test with these claims based on the submitted evidence. Lindeman pointed out, however, that the package insert contains test performance data on just one KRAS di-nucleotide variant that the firm tested in a sample. The test's performance on other multi-nucleotide variants is deemed "not estimable," he added, likely because there were no samples to test.

When Tapestry discussed the implications of this pilot with its steering committee and other stakeholders, some shared Lindeman's view that it wouldn't have a palpable impact on patient care if some of the LDTs in the study couldn't detect these rare, hard-to-detect variants. Others pointed out that even if a variant is exceedingly rare in the population, it's important to the one patient in a million who has it that labs can detect it. Moreover, if there's an FDA-approved test on the market that can gauge these variants at 5 percent VAF and detect multi-nucleotide variants, in Pfeifer's opinion, this is the bar LDTs on the market without FDA approval must meet so oncologists can get reliable results.

"If nobody is asking labs to demonstrate that they can detect different variant classes at VAFs close to the limit of detection of their tests, the equivalence of LDTs to FDA-approved tests remains uncertain," he said. "The pilot suggests that this is a real issue in selecting patients for appropriate therapy, not just a theoretical one."

In contrast to the pilot program Pfeifer led, that same year Lindeman led a CAP survey of NGS cancer profiling tests, which found an overall accuracy of more than 98 percent in labs' ability to identify 10 somatic single-nucleotide variants present in samples at a VAF of 15 percent or higher.

Pfeifer pointed to the much higher VAF, easier-to-detect SNVs, and emphasis on aggregate lab performance in CAP's proficiency testing survey to argue that the current models of evaluating LDTs may not be representative of NGS testing in routine clinical practice, and that aggregate presentation of results may be masking the performance of individual LDTs.

CAP PT surveys do look at aggregate lab performance, Lindeman said, but the organization notifies each lab of performance issues that it needs to account for and fix. If a lab doesn't improve, it can lose its CAP accreditation, but Lindeman acknowledged that individual lab failures aren't publicized.

The truth about variants

The fact that the outcome of the Pfeifer-led pilot was so divisive in the field signals that CGI's efforts to evaluate the analytical and clinical validity of NGS LDTs might face even more pushback.

While in silico-based assessments are used to interrogate the analytical accuracy of NGS tests within CAP's PT programs, lab-accrediting bodies to date have "steered clear of questions about interpretation," said Pfeifer. Eggington, he noted, has recognized that you can use in silico variants to ask: Is this an error in variant detection or an error in variant interpretation? Even if labs initially resist, he believes they'll warm to CGI, particularly "if the payors, the people with the money, say, 'No, we're not paying you, unless you do this.'"

But many experts in the field feel strongly that CGI's in silico-based method may not be the best way to judge how well labs are interpreting variants, arguing that the genetic testing field is still in its infancy and the meaning of many variants will change as knowledge accumulates. "I have a lot of respect for Julie and have spoken to her at length about her passion to improve this," said Invitae's Nussbaum. "But I think that her approach is simplistic. Variant detection is something that one can do PT on. Variant interpretation is a much more complicated and challenging problem. First of all, you have to know what the truth is."

Since the truth about most genetic variants is dynamic at present, Nussbaum believes it's better for labs to share data and work collaboratively to establish consensus classifications. "I suggested to Highmark they would get more bang for their buck by simply insisting that any lab seeking reimbursement from them put its clinical variant calls, with the evidence supporting them, into ClinVar on a regular cadence, [such as] semi-annually," Nussbaum said. "Without ClinVar submissions, you cannot get paid."

"As much as we try to make variant classification as objective as possible, there is subjectivity to it," agreed Heidi Rehm, medical director of the clinical research sequencing platform at the Broad Institute, who with Nussbaum was part of a team that in 2013 received the initial NIH funding to develop ClinVar. Nearly a decade later, the database has amassed more than 2 million records on interpreted variants from 2,300 submitters. The database contains mostly germline variants but also includes somatic variants.

Eggington and Rehm have very different views on the utility of ClinVar. Rehm sees the database as a way to identify and scale resolution of variant classification differences.

Recognizing that this type of work is labor intensive and time consuming, she and others have advanced a framework in which, for variants on which the majority of labs agree, only the outlier labs have to reevaluate their interpretations. Any discordant variants that still remain will then be taken up for resolution through further collaboration. ClinGen, ClinVar's sister effort focused on defining genetic variants used in precision medicine, publishes a list recognizing labs that meet certain data sharing standards, for example, by contributing variants regularly to ClinVar and supplying supporting evidence.
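
As a simple illustration of that framework, the Python sketch below flags the outlier labs for a single variant when a clear majority classification exists and defers to collaborative resolution when it doesn't; the labs, agreement threshold, and function name are hypothetical, not ClinGen's or Rehm's actual process.

```python
from collections import Counter

def flag_outlier_labs(classifications, min_agreement=0.75):
    # classifications: lab ID -> classification, e.g. "pathogenic", "benign", "VUS".
    # Returns the consensus call (or None) and the outlier labs asked to reevaluate.
    counts = Counter(classifications.values())
    top_call, top_n = counts.most_common(1)[0]
    if top_n / len(classifications) < min_agreement:
        # No clear majority: the variant goes to collaborative resolution instead.
        return None, []
    outliers = [lab for lab, call in classifications.items() if call != top_call]
    return top_call, outliers

# Hypothetical example: three labs call a variant pathogenic, one calls it a VUS.
submissions = {"Lab A": "pathogenic", "Lab B": "pathogenic", "Lab C": "pathogenic", "Lab D": "VUS"}
print(flag_outlier_labs(submissions))   # ('pathogenic', ['Lab D'])
```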

Some insurers have started factoring in labs' data sharing practices when inking coverage contracts. In 2016, Aetna was one of the first insurers to require labs to submit their variant interpretations to ClinVar in order to be an in-network provider of BRCA1/2 testing. A spokesperson for UnitedHealthcare said the commercial payor also looks at whether labs list details about their genetic tests in NIH's Genetic Test Registry and submit variant interpretations to databases like ClinVar.

This "provides transparency on a test's methodology, validity, and utility, and increases overall genetics knowledge and collaboration," the spokesperson said, adding that the insurer has also implemented preferred lab networks that requires labs to show they are accredited and their genetic tests meet "overall quality and performance requirements." (After Part 1 of this series was published, UHC confirmed it is also having test quality discussions with CGI.)

In Eggington's view, insurers need to look beyond preferred lab network schemes and ClinVar submissions to really understand whether the NGS tests they're paying for are providing patients with accurate results. While ClinVar was a "near perfect vision" for improving variant interpretation quality, she said its existence has created some unintended market dynamics. For example, with some insurers factoring in data sharing practices in contracting decisions, labs are incentivized to submit to ClinVar, but Eggington has found that some bad actors just copy the classifications other labs have made and submit them as their own to ClinVar without doing their own interpretation or quality control.

"This creates an echo-chamber effect, where five labs are saying that a particular variant is pathogenic, when in fact, it is benign or a variant of uncertain significance," she said. "Consensus does not mean something is biologically or clinically accurate. Consensus could mean that we've browbeaten each other enough that we've finally agreed … and this gives the illusion that this field is more accurate than it is."

For example, when researchers at BGI Genomics and several universities in China reanalyzed 217 variants in 173 genes that had nonconflicting pathogenic or likely pathogenic classifications in ClinVar, they downgraded 40 percent to a benign, likely benign, or VUS classification. Variants were more likely to be downgraded, the authors said, if they had an older classification, a higher allele frequency, and were submitted to ClinVar through methods other than clinical testing.

But the authors also acknowledged that the number of variants they downgraded may be overinflated because they relied on publicly available variants for reinterpretation, and didn't factor in labs' in-house or unpublished evidence. "This reinforces the importance of data sharing in the scientific community," the authors wrote.

When Rehm has gotten labs together to try to resolve discordant variant classifications, the differences are mostly due to out-of-date classifications, or to labs not having access to another lab's unpublished data on variants. But once labs see each other's evidence, for the most part, they agree on what the right calls are.

"Could one lab sway another? Sure," Rehm said, but she noted that it's rarely the case that one lab pushes the other to their opinion, because they established the classification first. "I guarantee the benefit of the data sharing, and building consensus with professional input, far outweighs any minor echo-chamber effect. This is similar to other medical practices where physicians routinely seek out the opinions of their colleagues to build consensus on treating a patient."

To the extent an echo-chamber effect exists in ClinVar, it can be addressed by requiring labs to submit their variant interpretation evidence, Nussbaum said, adding that variant submission alone shouldn't be enough. "[Insurers] shouldn't reimburse labs that don't also submit the evidence that they used" to classify variants submitted to ClinVar, he said, noting the vast majority of labs aren't presently being transparent about the underlying evidence.

Ready for resistance

Although there's been some resistance to Highmark's credentialing requirement, "no one is throwing things at the Highmark building or anything," quipped Fickie. He said he is open to considering a lab's variant classification and evidence submissions in ClinVar as a sign of interpretation quality. "There aren’t perfect quality markers for variant interpretation, so ClinVar is a good one," he said.

After seeing how this first year of credentialing labs with cancer NGS tests goes, Highmark hopes to expand the program to include more labs and different kinds of tests, such as noninvasive prenatal testing.

CAP, meanwhile, will follow up with Highmark about its credentialing requirements and is open to discussing how to improve PT with payors, said Lindeman. "We want our PT to be the best it can be," he said. "None of us are sitting around saying it can never get any better. If people have suggestions, whether it's CGI or anyone else in the community, by all means, let's discuss and make it better."

Rehm thinks, though, that labs will definitely push back if they start having to classify "variants they've never seen" for multiple insurance companies. Should it come to that, "I will be happy to engage with those labs that are frustrated with this approach and document why submission to ClinVar is a more effective approach to improving genomic interpretation in the long run," she said.

By working with insurers, CGI may be viewed as aligning with a stakeholder that is pilloried for gating access to precision medicine. What insurers consider medically necessary and are willing to pay for doesn't always align with what doctors consider evidence-based medicine. Genetic counselors, oncologists, and patients often express frustration when insurers are reluctant to pay for NGS profiling or the treatments that are recommended based on test results. "The payors get a bad rep, but … at least the ones we work with really want to get good precision medicine to their plan members," Eggington said.

Since Eggington started CGI five years ago, her efforts to improve NGS test quality have faced so much industry resistance that she would be surprised if there weren't pushback against the nonprofit's work with payors. She finds value in engaging with critics because it exposes CGI's blind spots and where she needs to work harder to persuade stakeholders. "I love precision medicine. I love genetics," Eggington said. "I just want patients, no matter where they get a test, to get an accurate answer."
