Strategies for Sequencing Challenging Samples



    Despite advancements in sequencing platforms and related sample preparation technologies, certain sample types continue to present significant challenges that can compromise sequencing results. Pedro Echave, Senior Manager of the Global Business Segment at Revvity, explained that the success of a sequencing experiment ultimately depends on the amount and integrity of the nucleic acid template (RNA or DNA) obtained from a sample. “The better the quality of the nucleic acid isolated from any sample, the better the data will be generated,” he noted.

    Difficult samples can originate from various sources, but Echave shared that they are often characterized by either low yields of nucleic acid, damaged DNA or RNA, or the presence of inhibitory compounds. Additionally, the difficulty of analyzing a sample can depend on the intended analysis. For example, DNA from formalin-fixed, paraffin-embedded (FFPE) tissue can be used to identify point mutations, even with minor damage, but analyzing full-length transcripts from isolated RNA is more complex. Samples that contain mixed species, such as nasal swabs, can also complicate the analysis, leading to issues in data interpretation. Fortunately for researchers, many strategies and technologies have been developed to overcome these issues.


    Preparation Strategies
    To address the challenges associated with many difficult sample types, Echave provided several strategies for the initial stages of the NGS workflow. The first critical step for efficient nucleic acid extraction is proper sample homogenization. Echave recommended mechanical shearing devices like the Omni Bead Ruptor Elite and Omni THq tissue homogenizers, which shorten the time from lysis to extraction and don’t require chemical lysis. Thorough homogenization is necessary for “hard” samples such as seeds, as well as for low-input samples that are sensitive to cross-contamination.

    For nucleic acid extraction, Echave highlighted the need for a method that recovers material efficiently and removes contaminants while minimizing additional fragmentation of the template. He emphasized the value of automated nucleic acid isolation for its ability to recover higher yields of pure, high-integrity nucleic acids from a variety of sample types. These automated solutions also stand out for their ability to isolate high-molecular-weight DNA, enabling more challenging downstream assays like long-read sequencing. Echave also stressed the importance of quality assessments after extraction and after library preparation. Knowing the sample concentration and degree of fragmentation helps in evaluating sample preparation and avoiding failures.

    During NGS library preparation, Echave shared that a given workflow can be modified to account for the difficulty of the sample. For instance, kits like the NEXTFLEX® Rapid XP v2 are suitable for whole genome sequencing with inputs over 100 pg, while additions like the DOPlify® upstream amplification kit allow researchers to increase the starting concentration and accommodate samples with minimal nucleic acid content.

    While there is no universal solution that works for all difficult samples, Echave emphasized the importance of understanding sample characteristics, including yield, quality, and potential inhibitors, through literature searches or past experiments. With this knowledge, researchers can choose the appropriate methodologies for homogenization, extraction, and library preparation.


    Metagenomics and Diverse Samples
    Decreasing costs, increased competition, and technological advancements have driven the diversification of sequencing applications and sample types, explained Sebastian Aguilar Pierlé, Application Development Lead at Inorevia. This diversification presents two main challenges: complex samples requiring nucleic acid enrichment and those with varying amounts of nucleic acid. In many cases, the same sample may present both challenges, as commonly seen in metagenomic studies. “The challenges associated with metagenomic samples start with their composition, as they are often tough to lyse and rich in inhibitors,” Pierlé stated. “Furthermore, these samples’ complexity confronts us with the challenge of finding the needle in the metagenomic haystack [1].”

    These issues were highlighted in a project between Inorevia and researchers from the University of Chile that aimed to sequence the guano virome from a native bat species. This task required enriching the RNA viruses from a diverse mixture containing eukaryotic, bacterial, and archaeal nucleic acids. Traditional sequencing without enrichment yielded a single viral contig, and many of the samples were below the detection threshold for fluorometric quantification, highlighting the struggle with complex samples and limited starting material.

    “Our flexible omics platform, Magelia®, integrates characteristics and technologies that provide answers for these challenges,” Pierlé noted. The platform uses capillary reactions under oil to enhance reaction kinetics and prevent evaporation, preserving molecules of interest throughout a precise library preparation process. In addition, Magelia employs a patented magnetic tweezer technology for precise magnetic bead manipulation, eliminating nucleic acid loss and reducing background noise such as adapter dimers. These innovations reduce the needed amplification cycles, avoid duplication issues, and allow researchers to process limited amounts of material while preserving resolution and data quality.

    Pierlé shared that Inorevia continues to advance the Magelia platform by routinely optimizing protocols, expanding its application portfolio, and integrating it with third-party chemistries. These updates, which include improved protocols for minimal sample input, reduced fragmentation times, advanced thermal cycling, and adjustable insert sizes for genome sequencing, allow the instrument to support a wide range of applications.

    The platform's effectiveness was demonstrated in the bat guano virome study, where it outperformed traditional methods and produced the most detailed data set to date for such samples [2]. This success led to another collaboration to characterize the RNA virome of Antarctic scavenger birds, where a significant level of transcriptomic resolution was achieved, revealing a diversity of viruses likely connected to the birds’ interactions with various Antarctic wildlife [3]. “This represents a great example of how we work toward providing the scientific community with applications that enable them to obtain answers where routine methods have failed them, which at the end of the day, is what we are all about,” Pierlé emphasized.


    Innovations in FFPE Sample Prep
    FFPE samples are notoriously difficult to work with due to the fixation and embedding process involved in their preparation, which introduces extensive damage to the sample and compromises the quality and quantity of extractable nucleic acid. Margaret Heider, NGS Development Scientist at New England Biolabs, noted that common types of DNA damage prevalent in FFPE samples include fragmentation, nicks, gaps, abasic sites, cytosine deamination, and oxidative damage. These types of damage have been shown to be a leading cause of sequencing errors that can disrupt critical applications like variant identification [4].
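    Cytosine deamination offers a concrete case: it converts cytosine to uracil, which is read as thymine, so FFPE artifacts typically surface as low-frequency C>T (or reverse-strand G>A) calls. As a hedged illustration only (this is not NEB's method; the function name and the 10% threshold are invented for this sketch), a simple pre-filter might flag such calls for closer review:

```python
def is_deamination_candidate(ref, alt, allele_fraction, af_threshold=0.10):
    """Flag low-frequency C>T (or the reverse-strand G>A) variant calls
    as possible FFPE cytosine-deamination artifacts. The allele-fraction
    threshold is illustrative, not a validated cutoff."""
    deamination_change = (ref, alt) in {("C", "T"), ("G", "A")}
    return deamination_change and allele_fraction < af_threshold

# A 3% C>T call is suspect; a 40% C>T call looks like a real variant,
# and a 3% A>G call is not a deamination signature at all.
```

    In practice such a flag would feed into orientation-bias review rather than outright rejection, since genuine low-frequency C>T variants do occur.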

    Despite the difficulties they pose, FFPE samples are still widely used due to their applications in clinical histology settings and their longevity, which allows for retrospective studies. Recognizing the importance of obtaining accurate sequencing data from FFPE DNA samples, Heider and the team at NEB have specialized in developing products optimized for these troublesome samples. In particular, their NEBNext FFPE DNA Repair v2 kit was designed to improve the quality of sequencing data from these samples. Unlike other approaches that simply remove the damaged bases, this kit repairs FFPE-induced damage, increases library yields, and offers a more streamlined workflow.

    Further developments from the NEB team led to the introduction of the NEBNext Ultrashear® FFPE DNA Library Prep Kit. This kit is equipped with a new enzymatic fragmentation mix, an enhanced PCR master mix, and the FFPE DNA Repair v2 mix. Notably, the fragmentation mix enzymatically fragments samples while significantly improving data quality. The PCR master mix mitigates common issues like over-amplification and GC bias for more consistent and reliable library preparation across a wide range of sample qualities. Heider also acknowledged the broader applicability of these kits, noting that researchers have applied them to other challenging samples, such as museum or forensic specimens.

    In addition to these recommendations, Heider shared that some researchers may choose to assess the extent of fragmentation before beginning a library prep through qPCR or automated electrophoresis. While this may be helpful in some situations, it requires extra material and time that many researchers cannot afford, and may not always predict sequencing challenges like artifacts. Heider also highlighted the importance of bioinformatics in managing FFPE sample data, noting the need for careful sequencing planning and analysis to account for uneven coverage and potential false positives. Finally, she encouraged researchers to stay open to the potential of FFPE samples and suggested that with the right preparation and analysis methods, valuable data can still be extracted from these challenging samples.
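    The qPCR approach mentioned above can be made concrete: amplifying a short and a long amplicon from the same sample and comparing their quantification cycles (Cq) gives a rough fraction of templates still amplifiable at the longer length, with intact DNA scoring near 1 and fragmented FFPE DNA well below it. A minimal sketch, assuming perfect doubling each cycle (the function name is hypothetical, not from any vendor's kit):

```python
def amplifiable_fraction(cq_short, cq_long, efficiency=2.0):
    """Estimate the fraction of template amplifiable at the long
    amplicon length relative to the short one (a qPCR quality ratio).
    Intact DNA scores near 1.0; fragmented DNA scores lower."""
    return efficiency ** (cq_short - cq_long)

# Example: the long amplicon crosses threshold 3 cycles later than the
# short one, implying only ~12.5% of templates span the longer length.
print(amplifiable_fraction(24.0, 27.0))  # → 0.125
```

    Real assays correct for amplification efficiencies below 2.0, which is one reason this kind of estimate is only a rough guide.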


    DNA Size Selection
    Jared Slobodan, R&D Manager of Ranger Technology at Yourgene Health, explained that alongside FFPE samples, mixed and fragmented samples like those from liquid biopsy present major challenges for sequencing. For example, he noted that these challenges are apparent in complex samples like those from pregnant women, which contain a mixture of cell-free fetal DNA (cffDNA), derived from the placenta, and cell-free DNA from the mother. Similarly, liquid biopsy samples for tumor detection contain substantially elevated levels of germline DNA relative to the tumor DNA being interrogated. In both of these examples, size-selecting smaller DNA fragments can notably increase the proportion of fetal or tumor DNA in the sequenced samples. This is true even when rapid processing and blood-stabilization tubes are used to minimize the degradation of germline DNA.

    Slobodan explained that, in other cases, size selection helps to reduce wasted sequencing reads from lower molecular weight material, which is preferentially sequenced due to its smaller size. “This can boost sequencing efficiency, improve assembly, and enable results from samples where this previously wasn’t possible.” This is most obvious in long-read sequencing applications, where the presence of shorter fragments can dramatically reduce the data output of a sequencing run. Size selection is, therefore, beneficial for recovering fragment lengths of varying sizes, depending on the requirements of the sample type, for both long- and short-read sequencing applications.
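    The enrichment logic can be sketched numerically: cffDNA and ctDNA fragments tend to be shorter (peaking near ~143 bp) than background germline cfDNA (~166 bp), so keeping only fragments below a size cutoff raises the target fraction. A toy illustration with invented fragment lengths, not real data or any vendor's algorithm:

```python
def target_fraction(target_lengths, background_lengths, cutoff=None):
    """Fraction of fragments coming from the target population
    (e.g. fetal or tumor cfDNA), optionally after keeping only
    fragments at or below a size cutoff in base pairs."""
    if cutoff is not None:
        target_lengths = [l for l in target_lengths if l <= cutoff]
        background_lengths = [l for l in background_lengths if l <= cutoff]
    total = len(target_lengths) + len(background_lengths)
    return len(target_lengths) / total

# Invented example: 4 short target fragments among 12 longer
# background fragments.
fetal = [135, 140, 143, 148]
maternal = [150, 158, 160, 163, 166, 166, 168, 170, 172, 175, 178, 180]
print(target_fraction(fetal, maternal))              # 0.25 before selection
print(target_fraction(fetal, maternal, cutoff=150))  # 0.8 after selection
```

    The trade-off, of course, is total yield: every background fragment discarded also discards some sequencing-ready material, so cutoffs are chosen per application.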

    Yourgene Health’s solution for efficient size selection is their Ranger® Technology. Supporting the selection of short and long fragments, Ranger can significantly enhance sequencing workflows by excluding unwanted material and optimizing fragment sizes for any sequencing platform. Slobodan pointed out that Ranger improves whole genome sequencing by providing precise size selection, which increases data quality and prevents shorter DNA fragments from blocking information-rich fragments’ access to detection sites such as nanopores and zero-mode waveguides.

    Researchers have successfully employed Ranger in oncology applications by separating ctDNA from circulating cfDNA samples, which enhanced the detection of low-frequency tumor variants and reduced sequencing errors [5]. In infectious disease diagnostics, size selection with the technology increased the sensitivity of viral detection by enriching viral cfDNA fragments up to 16.6-fold [6]. Additionally, studies have shown that, when used in combination with size selection, employing EDTA tubes can match or exceed the performance of specialized tubes in isolating cfDNA for NIPT and enhance the fetal fraction even under challenging conditions [7].

    Looking ahead, Slobodan believes the integration of size selection, sample QC, and automation offered by their technology will significantly benefit long-read sequencing library construction, especially in high-throughput environments. He also noted that the team enjoys working with customers and learning how they are using their technology. “We are often surprised and excited by what we find out.”

    References
    1. Soueidan, H., Schmitt, L. A., Candresse, T., & Nikolski, M. (2015). Finding and identifying the viral needle in the metagenomic haystack: trends and challenges. Frontiers in Microbiology, 5, 739. https://doi.org/10.3389/fmicb.2014.00739
    2. Aguilar Pierlé, S., Zamora, G., Ossa, G., Gaggero, A., & Barriga, G. P. (2022). The Myotis chiloensis Guano Virome: Viral Nucleic Acid Enrichments for High-Resolution Virome Elucidation and Full Alphacoronavirus Genome Assembly. Viruses, 14(2), 202. https://doi.org/10.3390/v14020202
    3. Zamora, G., Aguilar Pierlé, S., Loncopan, J., Araos, L., Verdugo, F., Rojas-Fuentes, C., Krüger, L., Gaggero, A., & Barriga, G. P. (2023). Scavengers as Prospective Sentinels of Viral Diversity: the Snowy Sheathbill Virome as a Potential Tool for Monitoring Virus Circulation, Lessons from Two Antarctic Expeditions. Microbiology Spectrum, 11(3), e0330222. https://doi.org/10.1128/spectrum.03302-22
    4. Chen, L., Liu, P., Evans, T. C., Jr, & Ettwiller, L. M. (2017). DNA damage is a pervasive cause of sequencing errors, directly confounding variant identification. Science, 355(6326), 752–756. https://doi.org/10.1126/science.aai8690
    5. Hellwig, S., Nix, D. A., Gligorich, K. M., O'Shea, J. M., Thomas, A., Fuertes, C. L., Bhetariya, P. J., Marth, G. T., Bronner, M. P., & Underhill, H. R. (2018). Automated size selection for short cell-free DNA fragments enriches for circulating tumor DNA and improves error correction during next-generation sequencing. PloS One, 13(7), e0197333. https://doi.org/10.1371/journal.pone.0197333
    6. Phung, Q., Lin, M. J., Xie, H., & Greninger, A. L. (2022). Fragment Size-Based Enrichment of Viral Sequences in Plasma Cell-Free DNA. The Journal of Molecular Diagnostics: JMD, 24(5), 476–484. https://doi.org/10.1016/j.jmoldx.2022.01.007
    7. Daryabari, S. S., Giroux, S., Caron, A., Chau, B., Langlois, S., & Rousseau, F. (2022). Improving Fetal Fraction of Noninvasive Prenatal Screening Samples Collected in EDTA-Gel Tubes Using Gel Size Selection: A Head-To-Head Comparison of Methods. The Journal of Molecular Diagnostics: JMD, 24(9), 955–962. https://doi.org/10.1016/j.jmoldx.2022.06.004

    About the Author

    Benjamin Atha holds a B.A. in biology from Hood College and an M.S. in biological sciences from Towson University. With over 9 years of hands-on laboratory experience, he's well-versed in next-generation sequencing systems. Ben is currently the editor for SEQanswers.
