Trimming stringency w/ excessive data

  • Trimming stringency w/ excessive data

    I'm in the process of assembling a 500 Mb animal genome and have both Illumina and PacBio data to work with using a hybrid assembly approach. Generally, when I assemble de novo transcriptomes or genomes, I try not to be too "aggressive" with my quality-trimming parameters for raw Illumina reads, running something along the lines of:

    Code:
    bbduk.sh in=file1.fq in2=file2.fq out=trimmed.fq qtrim=rl trimq=10  # plus some adapter-trimming parameters
    However, we sequenced our Illumina libraries incredibly deep and have something on the order of 320x coverage untrimmed and ~270x coverage trimmed with the parameters above (plus some adapter trimming). This is still about twice as much coverage as I would generally ever use for genome assembly. Right now I'm subsetting my trimmed reads to about 100x coverage for hybrid assembly with my PacBio data. Given that I have so much excess data, I was wondering what folks think about making my quality-trimming parameters more stringent in order to end up with a higher-quality set of reads on average. For example,

    Code:
    bbduk.sh in=file1.fq in2=file2.fq out=trimmed.fq qtrim=rl trimq=15
    This brings me down to a little over 200x coverage, which I would still subset down to ~100x coverage worth of reads using reformat.sh. I know that this isn't generally recommended, but I've never been quite sure whether that is because of lost coverage (which wouldn't be an issue here) or because of something inherently unusual about how higher-quality reads are handled by assemblers. Thus far, my assembly results using the "qtrim=rl trimq=10" parameters seem reasonable; I'm mostly just curious.
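    For concreteness, the subsetting step would be something along these lines (just a sketch: the file names are placeholders, the trimmed reads are assumed to be interleaved, and 50 Gbp corresponds to ~100x of a 500 Mb genome; samplebasestarget and sampleseed are standard reformat.sh flags):

    Code:
    # Subsample trimmed reads down to ~100x coverage (500 Mb genome x 100 = 50 Gbp of bases).
    # With separate paired files, use in1=/in2= and out1=/out2= instead of single interleaved files.
    reformat.sh in=trimmed.fq out=subset_100x.fq samplebasestarget=50000000000 sampleseed=13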
    Last edited by adamrork; 12-12-2019, 08:49 PM.

  • #2
    Rather than filtering, you should consider normalizing your data. For this, the BBMap suite has another program called bbnorm.sh. You can find a guide here.
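    A typical invocation (illustrative only; the target depth and minimum-depth cutoff would need tuning for your dataset, and the file names are placeholders) looks roughly like:

    Code:
    # Normalize k-mer depth to ~100x and drop reads with apparent depth below 5 (likely errors).
    # target= and min= are documented BBNorm options.
    bbnorm.sh in=file1.fq in2=file2.fq out=normalized1.fq out2=normalized2.fq target=100 min=5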



    • #3
      Originally posted by GenoMax
      Rather than filtering, you should consider normalizing your data. For this, the BBMap suite has another program called bbnorm.sh. You can find a guide here.
      Ah, great point! I'll do that. Thank you!
