
  • Multiple read QC steps (trimming, filtering etc) in one go ... what's the best way?

    Hi,

    I know there are many versatile tools (BBDuk, Trimmomatic, etc., to name a few) that can trim low-quality bases, adapters, and so on. I wonder what would be the best way to do the following with a single command or pipeline:
    1) Adapter/Quality Trimming and Filtering
    2) removing reads with greater than 5% N’s
    3) removing reads where 20% or more of the calls were considered low quality bases
    4) removing duplicated reads
    And if I'm not already asking for too much, perhaps :-)
    5) error correcting reads as well!

    Thanks.

  • #2
    That's a tall order, considering some of it is per-read and some is dependent on the entirety of your data. There is no tool of which I am aware that can do it all in a single command.

    1) BBDuk ("qtrim + trimq" and "ktrim=r + ref" flags). It comes with Truseq and Nextera adapter files.
    2) BBDuk ("maxns" flag)
    3) BBDuk ("maq" flag, which stands for 'min average quality'). If you have a reference, it's also possible to screen by %ID using BBMap. If you want to rely on the sequencer's accuracy estimates, BBDuk's "maq" filters by overall expected error rate: since it is phred-scaled, "maq=10" will eliminate reads in which at least 10% of the bases are expected to be incorrect, and for 20% the flag would be "maq=7".

    #1-3 can be done in one command by BBDuk. The rest cannot.
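    As a sketch (untested; the file names are placeholders, and adapters.fa stands in for the Truseq/Nextera files that ship with BBTools), that single command could look like:

```shell
# Quality-trim, adapter-trim, and filter in a single BBDuk pass.
# qtrim=rl trimq=10  quality-trims both ends to Q10
# ktrim=r ref=...    kmer-trims adapter sequence from the right end (k=23 is a typical kmer length)
# maxns=5            discards reads with more than 5 Ns; note this is an absolute count,
#                    so it matches "5% of calls" only for ~100 bp reads
# maq=7              discards reads whose average expected error rate exceeds ~20%
bbduk.sh in=reads.fq out=clean.fq ref=adapters.fa ktrim=r k=23 qtrim=rl trimq=10 maxns=5 maq=7
```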

    4) Whether you have a reference matters here. Deduplication can be done via mapping with various tools, or via matching with tools like Dedupe (which allows inexact matches, but is usually less sensitive than mapping to a reference). Dedupe takes pairing into account, but for large libraries it also uses a substantial amount of memory. Mapping-based tools require a reference, but need less memory.
    5) Error-correction requires consensus, and is much slower and more subjective than the other operations. There are various tools for this; I recommend BBNorm, with a command like "ecc.sh in=reads.fq out=corrected.fq", but there are others, such as Musket. Of all these categories of tools, I would be most cautious about error-correction, as it is the most subjective and has the greatest chance of biasing your results. I do not recommend it except where necessary (such as when you have a huge amount of data, a very high substitution-type error rate, or highly variable coverage, as in amplified single-cell data). BBNorm's memory use and speed do not degrade as the amount of data grows.
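    The "maq" arithmetic in item 3 is just the usual phred conversion, p = 10^(-Q/10); a quick check:

```shell
# A phred-scaled quality Q corresponds to an expected error probability of 10^(-Q/10),
# so maq=10 keeps reads whose average expected error rate is at most 10%,
# and maq=7 corresponds to roughly 20%.
awk 'BEGIN { printf "maq=10 -> %.1f%% expected error\n", 100 * 10^(-10/10);
             printf "maq=7  -> %.1f%% expected error\n", 100 * 10^(-7/10) }'
```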

    Some of these functions are also possible in Trimmomatic and Cutadapt. If you have a reference, deduplication can also be done with samtools in conjunction with any pair-aware mapping program, using far less memory than Dedupe, though taking much more time.
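    A sketch of that mapping-based route (placeholder file names; assumes BWA and a samtools version with markdup, which needs the mate-score tags added by fixmate -m):

```shell
# Map pairs, add mate-score tags, coordinate-sort, then remove duplicates.
bwa mem ref.fa reads_1.fq reads_2.fq \
  | samtools fixmate -m - - \
  | samtools sort -o sorted.bam -
samtools markdup -r sorted.bam dedup.bam   # -r removes duplicates rather than just flagging them
```

    The reference-free equivalent with Dedupe would be a single command along the lines of "dedupe.sh in=reads.fq out=deduped.fq".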
    Last edited by Brian Bushnell; 09-24-2014, 12:59 AM.



    • #3
      I don't have a lot of experience with the software mentioned, but I want to put in my two cents on error correction and quality filtering generally.
      These two things are usually considered separate steps in the bioinformatic workflow, but in my mind they need to be considered together, as it is difficult to make an accurate call during error-correction if you've already trimmed off your low-quality base calls.

      SAMtools will probably solve a lot of your problems when used in conjunction with more specialised modules, depending on whether you have a reference, etc.



      • #4
        @Brian: I really appreciate your comprehensive reply. I am involved in de novo assembly of moderately sized plant genomes without any close reference, dealing with massive volumes of data (multiple PE and MP libraries) and going through the entire set of QC steps mentioned above each time. Removing duplicates doesn't seem to have much impact on the assembly outcome, so I am thinking of skipping it for now. As you suggested, then, 1-3 in the first round and 5 in the second seems the way to go.



        • #5
          Agreed, that sounds like a good workflow. As for duplicate removal, it's not really relevant unless your data has been PCR-amplified, and it's more useful for re-sequencing/variant-calling than de-novo assembly.
