  • Multiple read QC steps (trimming, filtering etc) in one go ... what's the best way?

    Hi,

I know there are many versatile tools (BBDuk, Trimmomatic etc., to name a few) that can trim low-quality bases, adapters etc. I wonder what would be the best way to do the following with a single command or pipeline:
    1) Adapter/Quality Trimming and Filtering
    2) removing reads with greater than 5% N’s
    3) removing reads where 20% or more of the calls were considered low quality bases
    4) removing duplicated reads
And if I'm not already asking for too much, perhaps :-)
    5) error correcting reads as well!

    Thanks.

  • #2
    That's a tall order, considering some of it is per-read and some is dependent on the entirety of your data. There is no tool of which I am aware that can do it all in a single command.

1) BBDuk ("qtrim + trimq" and "ktrim=r + ref" flags). It ships with TruSeq and Nextera adapter files.
    2) BBDuk ("maxns" flag)
3) BBDuk ("maq" flag, short for 'minimum average quality'). If you have a reference, it's also possible to screen by %ID using BBMap. If you want to rely on the sequencer's accuracy estimates, BBDuk's "maq" filters by overall expected error rate; the value is phred-scaled, so "maq=10" will eliminate reads in which at least 10% of the bases are expected to be incorrect. For 20%, the flag would be "maq=7".

    #1-3 can be done in one command by BBDuk. The rest cannot.
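Not BBDuk itself, but a minimal Python sketch of what the "maxns" and "maq" filters compute per read (the function names are mine, not BBDuk's; note that "maxns" is a count of Ns, not a percentage):

```python
def expected_error_fraction(quals):
    """Mean per-base error probability from phred quality scores."""
    return sum(10 ** (-q / 10) for q in quals) / len(quals)

def passes_filters(seq, quals, maxns=2, maq=7):
    """Per-read filters analogous to BBDuk's maxns and maq flags.
    maq is phred-scaled: maq=7 ~ 20% expected error, maq=10 ~ 10%."""
    if seq.upper().count("N") > maxns:
        return False
    # Convert the phred threshold to an error-rate threshold.
    max_err = 10 ** (-maq / 10)
    return expected_error_fraction(quals) <= max_err

# A read that is all Q20 (1% expected error) passes maq=7 easily.
print(passes_filters("ACGTACGT", [20] * 8))  # True
# A read that is all Q5 (~32% expected error) fails maq=7.
print(passes_filters("ACGTACGT", [5] * 8))   # False
```

The phred-to-fraction conversion is why "maq=7" corresponds to roughly 20%: 10^(-7/10) ≈ 0.1995.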

4) What matters here is whether you have a reference. Duplicate removal can be done via mapping with various tools, or via matching with tools like Dedupe (which allows inexact matches but is usually less sensitive than mapping to a reference). Dedupe takes pairing into account, but it uses a substantial amount of memory for large libraries. Mapping-based tools require a reference, but less memory.
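For the exact-match case only, the core idea behind matching-based deduplication can be sketched in a few lines of Python (real tools like Dedupe also handle reverse complements, containments, and paired reads; this toy does not):

```python
def dedupe_exact(reads):
    """Keep the first occurrence of each distinct sequence (exact matches only)."""
    seen = set()
    kept = []
    for read in reads:
        if read not in seen:
            seen.add(read)
            kept.append(read)
    return kept

reads = ["ACGTACGT", "TTGGCCAA", "ACGTACGT", "GATTACA"]
print(dedupe_exact(reads))  # ['ACGTACGT', 'TTGGCCAA', 'GATTACA']
```

The `seen` set is also why memory scales with library size for matching-based tools: every distinct sequence (or a fingerprint of it) has to stay resident.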
5) Error correction requires consensus, and is much slower than the other operations. There are various tools for this; I recommend BBNorm, with a command like "ecc.sh in=reads.fq out=corrected.fq", but there are others, such as Musket. Of all these tool categories, error correction is the one I would be most cautious about, as it is more subjective and has a greater chance of biasing your results. I do not recommend it except where necessary (such as when you have a huge amount of data, a very high substitution-type error rate, or highly variable coverage, as in amplified single-cell data). BBNorm's memory use and speed do not degrade with more data.
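The consensus idea behind k-mer-based correctors can be illustrated roughly like this: count all k-mers across the reads, then treat bases covered only by rare k-mers as suspect. This is a toy sketch of error *detection* only, not BBNorm's actual algorithm:

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count every k-mer across all reads."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def suspect_positions(read, counts, k, min_count=2):
    """Positions in `read` covered only by k-mers seen fewer than min_count times."""
    n = len(read)
    supported = [False] * n
    for i in range(n - k + 1):
        if counts[read[i:i + k]] >= min_count:
            for j in range(i, i + k):
                supported[j] = True
    return [i for i in range(n) if not supported[i]]

# Three identical reads plus one with a single substitution at position 4.
reads = ["ACGTACGTACGT"] * 3 + ["ACGTTCGTACGT"]
counts = kmer_counts(reads, k=4)
print(suspect_positions("ACGTTCGTACGT", counts, k=4))  # [4]
```

A real corrector would then replace the suspect base with the alternative that restores high-count k-mers; that search, done over billions of reads, is where the runtime and the subjectivity come from.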

Some of these functions are also possible in Trimmomatic and Cutadapt. Deduplication, if you have a reference, can also be done by samtools in conjunction with any pair-aware mapping program, using far less memory than Dedupe, though taking much more time.
    Last edited by Brian Bushnell; 09-24-2014, 12:59 AM.



    • #3
      I don't have a lot of experience with the software mentioned, but I want to put in my two cents on error correction and quality filtering generally.
These two steps are usually treated as separate stages of the bioinformatic workflow, but in my mind they need to be considered together, as it is difficult to make an accurate error-correction call if you've already trimmed off your low-quality base calls.

      SAMtools will probably solve a lot of your problems when used in conjunction with more specialised modules, depending on whether you have a reference, etc.



      • #4
@Brian: I really appreciate your comprehensive reply. I am involved in de novo assembly of moderately sized plant genomes without any close reference, dealing with massive volumes of data (multiple PE and MP libraries) and going through the entire set of QC steps mentioned above each time. Removing duplicates doesn't seem to have much impact on the assembly outcome, so I am thinking of skipping it for now. As you suggested, doing 1, 2, and 3 in the first round and 5 in the second seems the way to go.



        • #5
Agreed, that sounds like a good workflow. As for duplicate removal, it's not really relevant unless your data has been PCR-amplified, and it's more useful for resequencing/variant calling than de novo assembly.

