  • Trimming stringency w/ excessive data

    I'm in the process of assembling a 500 Mb animal genome and have both Illumina and PacBio data to work with using a hybrid assembly approach. Generally, when I assemble de novo transcriptomes or genomes, I try not to be too "aggressive" with the quality-trimming parameters for raw Illumina reads, running something along the lines of:

    Code:
    bbduk.sh in=file1.fq in2=file2.fq out=trimmed.fq qtrim=rl trimq=10
    # plus some adapter-trimming parameters
    However, we sequenced our Illumina libraries very deeply and have something on the order of 320x coverage untrimmed, and ~270x coverage after trimming with the parameters above (plus some adapter trimming). That is still about twice as much coverage as I would generally ever use for genome assembly. Right now I'm subsetting my trimmed reads to about 100x coverage for hybrid assembly with my PacBio data. Given how much excess data I have, I was wondering what folks think about making my quality-trimming parameters more stringent so that the retained reads are higher quality on average. For example,

    Code:
    bbduk.sh in=file1.fq in2=file2.fq out=trimmed.fq qtrim=rl trimq=15
    This brings me down to a little over 200x coverage, which I would still subset down to ~100x worth of reads using reformat.sh (sketched below). I know that more stringent trimming isn't generally recommended, but I've never been quite sure whether that is because of the lost coverage (which wouldn't be an issue here) or because of something inherent to how assemblers handle quality-trimmed reads. Thus far, my assembly results using the "qtrim=rl trimq=10" parameters seem reasonable; I'm mostly just curious.
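
    Subsetting to ~100x with reformat.sh looks roughly like the call below (samplebasestarget= caps the total number of sampled bases; 100x of a 500 Mb genome works out to about 50 Gbp; file names are placeholders):

    Code:
    # ~100x coverage of a 500 Mb genome is roughly 50,000,000,000 bases
    reformat.sh in=trimmed.fq out=subset_100x.fq samplebasestarget=50000000000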
    Last edited by adamrork; 12-12-2019, 08:49 PM.

  • #2
    Rather than filtering, you should consider normalizing your data. For this, BBMap has another program called bbnorm.sh. You can find a guide here.
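
    A typical invocation looks something like this (per the bbnorm.sh documentation, target= sets the desired depth and min= discards reads from regions below a minimum depth; file names are placeholders):

    Code:
    # normalize paired reads to ~100x and discard reads with apparent depth below 5x
    bbnorm.sh in=file1.fq in2=file2.fq out=normalized1.fq out2=normalized2.fq target=100 min=5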



    • #3
      Originally posted by GenoMax
      Rather than filtering, you should consider normalizing your data. For this, BBMap has another program called bbnorm.sh. You can find a guide here.
      Ah, great point! I'll do that. Thank you!

