  • "Re-inflating Partitioned" Metagenomic Data - KHMER

    Hello,

    My question has two parts. Starting with the most important question:

    1) I've read that short-read assemblers designed for metagenomic data make use of read abundance in the assembly process. However, I like the idea of digital normalization (using khmer) as a tool for bringing out reads from genomes which are less abundant in the mix. Is it a wise idea to perform digital normalization and then use assemblers geared for metagenomic data, like MetaVelvet, IDBA_UD or RAY-meta?

    2) I have already attempted to partition my metagenomic data using khmer to improve assembly, but found it very computationally intensive and slow. The authors of khmer seem to suggest that one viable method would be to perform digital normalization, then partitioning, and then "re-inflate" your reads to pre-normalized abundances (see the khmer documentation HERE). I am keen to try that, but the script ("sweep-reads3.py") no longer comes prepackaged with the new khmer release. I did find it on their GitHub account, HERE, but wonder why it is no longer packaged with the release. Before investing time and energy, I was wondering if anyone has thoughts on this?
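    For reference, the pipeline I'm describing (digital normalization, then graph partitioning, then re-inflation) would use khmer's scripts roughly as below. This is only a sketch: the script names are from the khmer documentation, but the exact flags (k-mer size, coverage cutoff, table size) vary between releases, so treat the parameter values as placeholders.

    ```shell
    # Digital normalization: discard reads whose median k-mer coverage exceeds C
    normalize-by-median.py -k 20 -C 20 -x 2e9 reads.fq

    # Build a k-mer graph and split it into disconnected partitions
    load-graph.py -k 32 graphbase reads.fq.keep
    partition-graph.py graphbase
    merge-partitions.py graphbase
    annotate-partitions.py graphbase reads.fq.keep
    extract-partitions.py partitioned reads.fq.keep.part

    # "Re-inflation" would then sweep the ORIGINAL (pre-normalization) reads
    # against the partitions to restore abundances -- this is the step that
    # needed sweep-reads3.py, now only in the khmer GitHub repository
    ```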

    Thanks in advance

  • #2
    Originally posted by roliwilhelm View Post
    Hello,

    My question has two parts. Starting with the most important question:

    1) I've read that short-read assemblers designed for metagenomic data make use of read abundance in the assembly process. However, I like the idea of digital normalization (using khmer) as a tool for bringing out reads from genomes which are less abundant in the mix. Is it a wise idea to perform digital normalization and then use assemblers geared for metagenomic data, like MetaVelvet, IDBA_UD or RAY-meta?
    I think normalization would be more useful when assembling metagenomic data with a normal assembler. I've found it to improve metagenome assemblies with Soap and Velvet, for example, but have not tried it with metagenome assemblers.

    2) I have already attempted to partition my metagenomic data using khmer to improve assembly, but found it very computationally intensive and slow. The authors of khmer seem to suggest that one viable method would be to perform digital normalization, then partitioning, and then "re-inflate" your reads to pre-normalized abundances (see the khmer documentation HERE). I am keen to try that, but the script ("sweep-reads3.py") no longer comes prepackaged with the new khmer release. I did find it on their GitHub account, HERE, but wonder why it is no longer packaged with the release. Before investing time and energy, I was wondering if anyone has thoughts on this?

    Thanks in advance
    BBNorm is much faster than khmer, has less bias toward error-containing reads, and it also supports partitioning - rather than normalizing, you can split the data into low-coverage, medium-coverage, and high-coverage bins, with custom cutoffs. That would make a lot more sense, computationally.

    For example:
    bbnorm.sh passes=1 in=reads.fq outlow=low.fq outmid=mid.fq outhigh=high.fq lowbindepth=50 highbindepth=200
    That will divide the data into coverage 1-50, 51-199, and 200+.

    To normalize to depth 50, the command would be:
    bbnorm.sh in=reads.fq out=normalized.fq target=50

    Also, there is an alternative to normalization that works like this:

    Downsample to 1% depth.
    Assemble.
    Map to assembly and keep reads that don't map.
    Downsample unmapped reads to 10% depth.
    Assemble.
    Combine assemblies (with a tool like Dedupe to prevent redundant contigs).
    Map to combined assembly.
    ...
    etc.

    It's not clear that either is universally better; both approaches have advantages and disadvantages. You can downsample with reformat, also in the BBTools package, like this:
    reformat.sh in=reads.fq out=sampled.fq samplerate=0.01
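    To make one round of that iterative approach concrete, here is a hedged sketch using BBTools (bbmap.sh's outu= stream collects unmapped reads; dedupe.sh removes redundant contigs). The assembler step is left as a placeholder comment, since any assembler can be substituted there:

    ```shell
    # Round 1: downsample to 1% and assemble
    reformat.sh in=reads.fq out=sampled1.fq samplerate=0.01
    # <assemble sampled1.fq into asm1.fa with your assembler of choice>

    # Keep only the reads that do NOT map to the first assembly
    bbmap.sh ref=asm1.fa in=reads.fq outu=unmapped1.fq

    # Round 2: downsample the unmapped fraction to 10% and assemble
    reformat.sh in=unmapped1.fq out=sampled2.fq samplerate=0.1
    # <assemble sampled2.fq into asm2.fa>

    # Combine assemblies, removing redundant contigs
    cat asm1.fa asm2.fa > combined.fa
    dedupe.sh in=combined.fa out=dedup.fa

    # Repeat: map all reads to dedup.fa, collect unmapped reads, downsample
    # less aggressively, assemble, combine, and so on
    ```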
    Last edited by Brian Bushnell; 04-30-2014, 03:28 PM.



    • #3
      Worth mentioning that the partitioning that BBNorm appears to use is coverage-based. khmer uses a (simplified) de Bruijn graph-based partitioning (separating into disconnected partitions of the graph). That's a very important distinction between the two.
      Last edited by cjfields; 05-05-2014, 12:02 PM.



      • #4
        Originally posted by cjfields View Post
        Worth mentioning that the partitioning that BBNorm appears to use is coverage-based. khmer uses a (simplified) de Bruijn graph-based partitioning (separating into disconnected partitions of the graph). That's a very important distinction between the two.
        Thanks for pointing that out; I was unaware that khmer's partitioning was NOT coverage-based. Yes, BBNorm's partitioning is purely coverage-based and will not be useful except in situations where you have multiple organisms (or organelles, plasmids, etc) with highly different coverage, though that's typically the case in metagenomes.

        That said - I've found partitioning by connectivity (overlap, de Bruijn, etc.) in metagenomes to be problematic with short (~100bp) reads; the situation can easily devolve into a single cluster because everything will be connected by a single highly conserved element, like a 16S subsequence. With longer (~250bp) reads it seems to work better.



        • #5
          The current, up-to-date method for metagenomic assembly using khmer is kept at http://khmer-protocols.readthedocs.org



          • #6
            Reinflation isn't necessary.

            Hi all, we now have direct evidence that digital normalization works fine with both SPAdes and IDBA on at least one mock community data set -- neither assembler seems to do poorly on it. I haven't written it up for a blog post yet but I'm happy to send the numbers your way if you're interested.

            --titus



            • #7
              Originally posted by titusbrown View Post
              Hi all, we now have direct evidence that digital normalization works fine with both SPAdes and IDBA on at least one mock community data set -- neither assembler seems to do poorly on it. I haven't written it up for a blog post yet but I'm happy to send the numbers your way if you're interested.

              --titus
              That's awesome, thanks for checking this out Titus!

              -chris
