  • Markiyan
    replied
Any chance of including FASTA support for amino acid sequences?

    Dear Brian,

Thank you very much for the tool; it can be very helpful for I/O-bound cloud folks.

Are there any plans to include FASTA support for amino acid sequences
(grouping similar proteins together)?

It would need to support very long FASTA ID lines, up to 10 kb.

  • GenoMax
    replied
Originally posted by Brian Bushnell
If all data can fit in memory, Clumpify needs roughly the time it takes to read and write the file once. If the data cannot fit in memory, it takes around twice as long.
Is there a way to force Clumpify to use just memory (if enough is available) instead of writing to disk?

Edit: On second thought, that may not be practical or useful, but I will leave the question in for now to see if @Brian has any pointers.

For a 12 GB gzipped input fastq file, Clumpify made 28 temp files (each 400–600 MB in size).

Edit 2: The final file size was 6.8 GB, a significant reduction.
    Last edited by GenoMax; 12-06-2016, 12:38 PM.

  • vout
    replied
Originally posted by Brian Bushnell
In my tests, assemblies with SPAdes and Megahit see time reductions from using Clumpified input that more than pay for the time needed to run Clumpify, largely because both are multi-kmer assemblers which read the input file multiple times. Something purely CPU-limited, like mapping, would normally not benefit much in terms of speed (though still a bit, due to improved cache locality).
In fact, Megahit does not read the input files multiple times; it converts the fastq/fasta files into a binary format and reads the binary file multiple times. I suspect cache locality is the key: the same group of k-mers gets processed together in different components of Megahit (graph construction: assigning k-mers to buckets and then sorting; local assembly and extraction of iterative k-mers: inserting k-mers into a hash table). In this regard, alignment tools may also benefit substantially.

    Great work Brian.

  • Brian Bushnell
    replied
Originally posted by GenoMax
    Can this be extended to identify PCR-duplicates and optionally flag or eliminate them?
    That's a good idea; I'll add that. The speed would still be similar to Dedupe, but it would eliminate the memory requirement.

    Would piping output of clumpify into dedupe achieve fast de-duplication?
    Hmmm, you certainly could do that, but I don't think it would be overly useful. Piping Clumpify to Dedupe would end up making the process slower overall, and Dedupe reorders the reads randomly so it would lose the benefit of running Clumpify. I guess I really need to add an "ordered" option to Dedupe; I'll try to do that next week.
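For reference, the piped version would look something like the sketch below (assuming BBTools' standard stdin/stdout naming conventions; per the above, this is not recommended, since Dedupe re-randomizes the read order and loses Clumpify's benefit):

Code:
# Sketch only: pipe Clumpify's output straight into Dedupe.
# Dedupe reorders reads randomly, so the clumped order is lost.
clumpify.sh in=reads.fq.gz out=stdout.fq reorder | dedupe.sh in=stdin.fq out=deduped.fq.gz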

  • GenoMax
    replied
Going to put in a plug for the tens of other things that tools in the BBMap suite can do. A compilation is available in this thread.

  • GenoMax
    replied
    Can this be extended to identify PCR-duplicates and optionally flag or eliminate them?

    Would piping output of clumpify into dedupe achieve fast de-duplication?

  • Introducing Clumpify: Create 30% Smaller, Faster Gzipped Fastq Files

I'd like to introduce a new member of the BBMap package, Clumpify. This is a bit different from other tools in that it does not actually change your data at all; it simply reorders it to maximize gzip compression. Therefore, the output files are still fully compatible gzipped fastq files, and Clumpify has no effect on downstream analysis aside from making it faster. It’s quite simple to use:

    Code:
clumpify.sh in=reads.fq.gz out=clumped.fq.gz reorder
    This command assumes paired, interleaved reads or single-ended reads; Clumpify does not work with paired reads in twin files (they would need to be interleaved first). You can, of course, first interleave twin files into a single file with Reformat, clumpify them, and then de-interleave the output into twin files, and still gain the compression advantages.
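For twin files, the round trip through Reformat might look like this (filenames are illustrative):

Code:
# Interleave twin files, clumpify, then split back into twin files.
reformat.sh in1=reads_R1.fq.gz in2=reads_R2.fq.gz out=interleaved.fq.gz
clumpify.sh in=interleaved.fq.gz out=clumped.fq.gz reorder
# int=t tells Reformat to treat the single input file as interleaved.
reformat.sh in=clumped.fq.gz int=t out1=clumped_R1.fq.gz out2=clumped_R2.fq.gz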

How does this work? Clumpify operates on the same principle that makes sorted bam files smaller than unsorted bam files: reads with similar sequence are placed near each other, which makes gzip compression more efficient. But unlike bam sorting, this process keeps pairs together, so an interleaved file will remain interleaved with pairing intact. Also unlike bam sorting, it does not require mapping or a reference, and except in very unusual cases it can be done with an arbitrarily small amount of memory. So it is very fast and memory-efficient compared to mapping, and can be done with no knowledge of what organism(s) the reads came from.

    Internally, Clumpify forms clumps of reads sharing special ‘pivot’ kmers, implying that those reads overlap. These clumps are then further sorted by position of the kmer in the read so that within a clump the reads are position-sorted. The net result is a list of sorted clumps of reads, yielding compression within a percent or so of sorted bam.

How long does Clumpify take? It's very fast. If all data can fit in memory, Clumpify needs roughly the time it takes to read and write the file once. If the data cannot fit in memory, it takes around twice as long.
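If a dataset almost fits in memory, you can raise the Java heap with the standard BBTools -Xmx flag (the 31g figure below is illustrative; set it according to the RAM on your machine):

Code:
# Give Clumpify a larger heap so the data fits in memory,
# avoiding the slower temp-file pass; 31g is an example value.
clumpify.sh -Xmx31g in=reads.fq.gz out=clumped.fq.gz reorder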

Why does this increase speed? There are a lot of processes that are I/O-limited. For example, on a multicore processor, running BBDuk, BBMerge, Reformat, etc. on a gzipped fastq will generally be rate-limited by gzip decompression (even if you use pigz, which is much faster at decompression than gzip). Gzip decompression seems to be rate-limited by the number of input bytes per second rather than output bytes, meaning that a file will decompress X% faster if it is compressed Y% smaller; X and Y are proportional, though not quite 1-to-1. In my tests, assemblies with SPAdes and Megahit see time reductions from using Clumpified input that more than pay for the time needed to run Clumpify, largely because both are multi-kmer assemblers which read the input file multiple times. Something purely CPU-limited, like mapping, would normally not benefit much in terms of speed (though still a bit, due to improved cache locality).
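One simple way to see the effect is to time a single read-through of the raw versus the clumpified file; a sketch, assuming out=null to discard the output:

Code:
# Time one full decompression pass over each file.
time reformat.sh in=reads.fq.gz out=null
time reformat.sh in=clumped.fq.gz out=null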

When and how should Clumpify be used? If you want to clumpify data for compression, do it as early as possible (e.g. on the raw reads). Then run all downstream processing steps in a way that maintains read order (e.g. use the “ordered” flag if you use BBDuk for adapter-trimming) so that the clump order is preserved; that way, all intermediate files will benefit from the increased compression and increased speed. I recommend running Clumpify on ALL data that will ever go into long-term storage, or whenever there is a long pipeline with multiple steps and intermediate gzipped files. Also, even when data will not go into long-term storage, if a shared filesystem is being used or files need to be sent over the internet, running Clumpify as early as possible will conserve bandwidth. The only times I would not clumpify data are enumerated below.
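For example, an adapter-trimming step that preserves clump order might look like the following sketch (trimming parameters are illustrative, not prescriptive; adapters.fa stands in for your adapter reference):

Code:
# ordered=t makes BBDuk emit reads in input order, preserving the clumps.
bbduk.sh in=clumped.fq.gz out=trimmed.fq.gz ref=adapters.fa ktrim=r k=23 mink=11 hdist=1 ordered=t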

    When should Clumpify not be used? There are a few cases where it probably won’t help:

    1) For reads with a very low kmer depth, due to either very low coverage (like 1x WGS) or super-high-error-rate (like raw PacBio data). It won’t hurt anything but won’t accomplish anything either.

2) For large volumes of amplicon data. This may or may not help: if all of your reads are expected to share the same kmers, they may all form one giant clump, and again nothing will be accomplished. It won’t hurt anything, though, and if pivots are randomly selected from variable regions, it might still increase compression.

3) When your process is dependent on the order of reads. If you always grab the first million reads from a file, assuming they are a good representation of the rest of the file, Clumpify will make that assumption invalid – just as grabbing the first million reads from a sorted bam file would not be representative. Fortunately, this was never a good practice, so if you are currently doing it, now would be a good opportunity to change your pipeline anyway. Randomly subsampling is a much better approach (see the sketch after this list).

4) If you are only going to read the data fewer than ~3 times, it will never go into long-term storage, and it lives on local disk so that bandwidth is not an issue, then there's no point in using Clumpify (or gzip, for that matter).
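As an example of the subsampling mentioned in point 3, Reformat can sample reads at random (the rate below is illustrative):

Code:
# Randomly keep ~5% of reads instead of taking the first N reads.
reformat.sh in=clumped.fq.gz out=subsampled.fq.gz samplerate=0.05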

    As always, please let me know if you have any questions, and please make sure you are using the latest version of BBTools when trying new functionality.


    P.S. For maximal compression, you can output bzipped files by using the .bz2 extension instead of .gz, if bzip2 or pbzip2 is installed. This is actually pretty fast if you have enough cores and pbzip2 installed, and furthermore, with enough cores, it decompresses even faster than gzip. This increases compression by around 9%.
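For example (the same command as above, with a .bz2 output extension):

Code:
# Write bzip2-compressed output; requires bzip2 or pbzip2 on the PATH.
clumpify.sh in=reads.fq.gz out=clumped.fq.bz2 reorder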
    Last edited by Brian Bushnell; 12-04-2016, 12:23 AM.
