  • Introducing Clumpify: Create 30% Smaller, Faster Gzipped Fastq Files

    I'd like to introduce a new member of the BBMap package, Clumpify. This is a bit different from other tools in that it does not actually change your data at all; it simply reorders reads to maximize gzip compression. Therefore, the output files are still fully-compatible gzipped fastq files, and Clumpify has no effect on downstream analysis aside from making it faster. It’s quite simple to use:

    Code:
    clumpify.sh in=reads.fq.gz out=clumped.fq.gz reorder
    This command assumes paired, interleaved reads or single-ended reads; Clumpify does not work with paired reads in twin files (they would need to be interleaved first). You can, of course, first interleave twin files into a single file with Reformat, clumpify them, and then de-interleave the output into twin files, and still gain the compression advantages.
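    For twin files, that interleave → clumpify → de-interleave workaround might look roughly like this with Reformat (just a sketch; the file names are placeholders):

    Code:
    # interleave the twin files into a single file
    reformat.sh in1=reads_R1.fq.gz in2=reads_R2.fq.gz out=interleaved.fq.gz
    # clumpify the interleaved file
    clumpify.sh in=interleaved.fq.gz out=clumped.fq.gz reorder
    # split back into twin files if downstream tools require them
    reformat.sh in=clumped.fq.gz out1=clumped_R1.fq.gz out2=clumped_R2.fq.gz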

    How does this work? Clumpify operates on the same principle that makes sorted bam files smaller than unsorted bam files – reads with similar sequence are placed near each other, which makes gzip compression more efficient. But unlike bam sorting, pairs are kept together during this process, so an interleaved file will remain interleaved with pairing intact. Also unlike bam sorting, it does not require mapping or a reference and, except in very unusual cases, can be done with an arbitrarily small amount of memory. So it’s very fast and memory-efficient compared to mapping, and requires no knowledge of what organism(s) the reads came from.

    Internally, Clumpify forms clumps of reads sharing special ‘pivot’ kmers, implying that those reads overlap. These clumps are then further sorted by position of the kmer in the read so that within a clump the reads are position-sorted. The net result is a list of sorted clumps of reads, yielding compression within a percent or so of sorted bam.

    How long does Clumpify take? It's very fast. If all data can fit in memory, Clumpify needs the amount of time it takes to read and write the file once. If the data cannot fit in memory, it takes around twice that long.

    Why does this increase speed? Many processes are I/O-limited. For example, on a multicore processor, running BBDuk, BBMerge, Reformat, etc. on a gzipped fastq will generally be rate-limited by gzip decompression (even if you use pigz, which is much faster at decompression than gzip). Gzip decompression appears to be rate-limited by the number of input bytes per second rather than output bytes, meaning a file decompresses faster roughly in proportion to how much smaller it is compressed (though not quite 1-to-1). In my tests, assemblies with Spades and Megahit see time reductions from Clumpified input that more than pay for the time needed to run Clumpify, largely because both are multi-kmer assemblers that read the input file multiple times. Something purely CPU-limited like mapping would normally not benefit much in terms of speed (though still a bit, due to improved cache locality).

    When and how should Clumpify be used? If you want to clumpify data for compression, do it as early as possible (e.g. on the raw reads). Then run all downstream processing steps in a way that maintains read order (e.g. use the “ordered” flag if you use BBDuk for adapter-trimming) so that the clump order is preserved; that way, all intermediate files will benefit from the increased compression and increased speed. I recommend running Clumpify on ALL data that will ever go into long-term storage, or whenever there is a long pipeline with multiple steps and intermediate gzipped files. Even when data will not go into long-term storage, if a shared filesystem is being used or files need to be sent over the internet, running Clumpify as early as possible will conserve bandwidth. The only times I would not clumpify data are enumerated below.
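    For example, an adapter-trimming step that preserves clump order might look something like this (the trimming parameters are just typical placeholder values; the key part is the "ordered" flag):

    Code:
    # adapter-trim while keeping the output in the same order as the input
    bbduk.sh in=clumped.fq.gz out=trimmed.fq.gz ref=adapters.fa ktrim=r k=23 mink=11 hdist=1 ordered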

    When should Clumpify not be used? There are a few cases where it probably won’t help:

    1) For reads with a very low kmer depth, due either to very low coverage (like 1x WGS) or to a super-high error rate (like raw PacBio data). It won’t hurt anything, but it won’t accomplish anything either.

    2) For large volumes of amplicon data. This may or may not help: if all of your reads are expected to share the same kmers, they may all form one giant clump, and again nothing will be accomplished. Again, it won’t hurt anything, and if pivots are randomly selected from variable regions, it might still increase compression.

    3) When your process is dependent on the order of reads. If you always grab the first million reads from a file, assuming they are a good representation of the rest of the file, Clumpify will make that assumption invalid – just as grabbing the first million reads from a sorted bam file would not be representative. Fortunately, this was never a good practice, so if you are currently doing that, now would be a good opportunity to change your pipeline anyway. Random subsampling (see the example after this list) is a much better approach.

    4) If you are only going to read data fewer than ~3 times, it will never go into long-term storage, and it's being used on local disk so bandwidth is not an issue, there's no point in using Clumpify (or gzip, for that matter).
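    For reference, random subsampling might look something like this with Reformat (a sketch; the file names and rate are placeholders):

    Code:
    # randomly subsample ~10% of reads instead of taking the first N reads
    reformat.sh in=clumped.fq.gz out=subsampled.fq.gz samplerate=0.1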

    As always, please let me know if you have any questions, and please make sure you are using the latest version of BBTools when trying new functionality.


    P.S. For maximal compression, you can output bzipped files by using the .bz2 extension instead of .gz, if bzip2 or pbzip2 is installed. This is actually pretty fast if pbzip2 is installed and you have enough cores; with enough cores it even decompresses faster than gzip. This increases compression by around 9%.
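    For example (file names are placeholders):

    Code:
    # write bzip2-compressed output; requires bzip2 or, preferably, pbzip2 on the path
    clumpify.sh in=reads.fq.gz out=clumped.fq.bz2 reorder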
    Last edited by Brian Bushnell; 12-04-2016, 12:23 AM.

  • #2
    Can this be extended to identify PCR-duplicates and optionally flag or eliminate them?

    Would piping output of clumpify into dedupe achieve fast de-duplication?



    • #3
      Going to put in a plug for tens of other things BBMap suite members can do. A compilation is available in this thread.



      • #4
        Originally posted by GenoMax View Post
        Can this be extended to identify PCR-duplicates and optionally flag or eliminate them?
        That's a good idea; I'll add that. The speed would still be similar to Dedupe, but it would eliminate the memory requirement.

        Would piping output of clumpify into dedupe achieve fast de-duplication?
        Hmmm, you certainly could do that, but I don't think it would be overly useful. Piping Clumpify to Dedupe would end up making the process slower overall, and Dedupe reorders the reads randomly so it would lose the benefit of running Clumpify. I guess I really need to add an "ordered" option to Dedupe; I'll try to do that next week.
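        For reference, the pipe being discussed would look roughly like this (a hypothetical sketch, assuming both tools are run with streaming via stdout.fq/stdin.fq; as noted, probably not worth doing):

        Code:
        # stream clumpified reads straight into Dedupe (which will re-order them anyway)
        clumpify.sh in=reads.fq.gz out=stdout.fq | dedupe.sh in=stdin.fq out=deduped.fq.gz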



        • #5
          Originally posted by Brian Bushnell View Post
          In my tests, assemblies with Spades and Megahit see time reductions from Clumpified input that more than pay for the time needed to run Clumpify, largely because both are multi-kmer assemblers that read the input file multiple times. Something purely CPU-limited like mapping would normally not benefit much in terms of speed (though still a bit, due to improved cache locality).
          In fact, Megahit does not read the input files multiple times. It converts the fastq/a files into a binary format and reads the binary file multiple times. I guess that cache locality is the key. Imagine that the same group of k-mers is processed together in different components of Megahit (graph construction: assigning k-mers to different buckets and then sorting; local assembly & extracting iterative k-mers: inserting k-mers into a hash table)... In this regard, alignment tools may also benefit from it substantially.

          Great work Brian.



          • #6
            If all data can fit in memory, Clumpify needs the amount of time it takes to read and write the file once. If the data cannot fit in memory, it takes around twice that long.
            Is there a way to force clumpify to use just memory (if enough is available) instead of writing to disk?

            Edit: On second thought that may not be practical/useful but I will leave the question in for now to see if @Brian has any pointers.

            For a 12 GB input gzipped fastq file, clumpify made 28 temp files (each between 400 and 600 MB in size).

            Edit 2: The final file size was 6.8 GB, so a significant reduction in size.
            Last edited by GenoMax; 12-06-2016, 12:38 PM.



            • #7
              Any chance of including fasta support for amino acid sequences?

              Dear Brian,

              Thank you very much for the tool; it can be very helpful for I/O-bound cloud folks.

              Are there any plans to include fasta support for amino acid sequences
              (grouping similar proteins together)?

              It must support very long fasta ID lines, up to 10 kb.



              • #8
                Originally posted by vout View Post
                In fact, Megahit does not read the input files multiple times. It converts the fastq/a files into a binary format and reads the binary file multiple times. I guess that cache locality is the key. Imagine that the same group of k-mers is processed together in different components of Megahit (graph construction: assigning k-mers to different buckets and then sorting; local assembly & extracting iterative k-mers: inserting k-mers into a hash table)... In this regard, alignment tools may also benefit from it substantially.

                Great work Brian.
                Well, you know what they say about assumptions! Thanks for that tidbit. For reference, here is a graph of the effect of Clumpify on Megahit times. I just happened to be testing Megahit and Clumpify at the same time, and this was the first time I noticed that Clumpify accelerated assembly; I wasn't really sure why, but assumed it was either due to cache locality or reading speed.

                [Attached graph: Megahit assembly times for normal vs. Clumpified input]

                Incidentally, Clumpify has an error-correction mode, but I was unable to get it to improve Megahit assemblies (even though it does improve Spades assemblies). Megahit has thus far been recalcitrant to my efforts to improve its assemblies with any form of error-correction, which I find somewhat upsetting. In the above graph, "asm3" has the least pre-processing (no kmer-based error-correction) and so is the most reflective of the times we would get in practice; some of the other runs have low-depth reads discarded. To clarify, the blue bars are the times for Megahit to assemble the non-clumpified reads, while the green bars are the times for the Clumpified reads; in each case the input data are identical aside from read order. The assembly continuity stats were almost (though not quite) identical due to Megahit's non-determinism, but the differences were trivial.

                Originally posted by Genomax
                Is there a way to force clumpify to use just memory (if enough is available) instead of writing to disk?

                Edit: On second thought that may not be practical/useful but I will leave the question in for now to see if @Brian has any pointers.

                For a 12 GB input gzipped fastq file, clumpify made 28 temp files (each between 400 and 600 MB in size).
                Clumpify tests the size and compressibility of the data at the beginning, and then *very conservatively* guesses how many temp files it needs by projecting the memory use of the input (note that it is impossible to determine the decompressed size of a gzipped file without fully decompressing it, which takes too long). If it is confident everything can fit into memory with a 250% safety margin, it will just use one group and not write any temp files. I had to make it very conservative to be safe in production; sometimes there are weird degenerate cases with, say, length-1 reads, or where everything is poly-A or poly-N, which are super-compressible but use a lot of memory. You can manually force it to use one group with the flag "groups=1". With the "reorder" flag, a single group will compress better, since reorder does not work with multiple groups. A single group is also faster, so it's preferable; the only risk is running out of memory and crashing when forcing "groups=1".
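                For example, to force a single group (with the out-of-memory risk noted above):

                Code:
                # force a single in-memory group; compresses best with reorder, but may run out of memory
                clumpify.sh in=reads.fq.gz out=clumped.fq.gz reorder groups=1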

                Originally posted by Markiyan
                Dear Brian,

                Thank you very much for the tool; it can be very helpful for I/O-bound cloud folks.

                Are there any plans to include fasta support for amino acid sequences
                (grouping similar proteins together)?

                It must support very long fasta ID lines, up to 10 kb.
                There's no support for that planned, but nothing technically preventing it. However, Clumpify is not a universal compression utility - it will only increase compression when there is coverage depth (meaning, redundant information). So, for a big 10GB file of amino acid sequences - if they were all different proteins, there would not be redundant information, and they would not compress; on the other hand, if there were many copies of the same proteins from different but very closely-related organisms, or different isoforms of the same proteins scattered around randomly in the file, then Clumpify would group them together, which would increase compression.
                Last edited by Brian Bushnell; 12-06-2016, 10:48 AM.



                • #9
                  Originally posted by Brian Bushnell View Post
                  There's no support for that planned, but nothing technically preventing it. However, Clumpify is not a universal compression utility - it will only increase compression when there is coverage depth (meaning, redundant information). So, for a big 10GB file of amino acid sequences - if they were all different proteins, there would not be redundant information, and they would not compress; on the other hand, if there were many copies of the same proteins from different but very closely-related organisms, or different isoforms of the same proteins scattered around randomly in the file, then Clumpify would group them together, which would increase compression.
                  OK, so in order to cluster amino acid sequences with the current Clumpify version, one would need to:
                  1. parse the fasta and reverse-translate to DNA, using a single codon for each amino acid;
                  2. save as nt fastq;
                  3. clumpify;
                  4. parse fastq, translate;
                  5. save as aa fasta.



                  • #10
                    Originally posted by Markiyan View Post
                    OK, so in order to cluster amino acid sequences with the current Clumpify version, one would need to:
                    1. parse the fasta and reverse-translate to DNA, using a single codon for each amino acid;
                    2. save as nt fastq;
                    3. clumpify;
                    4. parse fastq, translate;
                    5. save as aa fasta.
                    Or you could just use CD-HIT.



                    • #11
                      Whether you use Clumpify or CD-Hit, I'd be very interested if you could post the file size results before and after.

                      Incidentally, you can use BBTools to do AA <-> NT translation like this:

                      Code:
                      # reverse-translate the amino acid sequences to a nucleotide representation
                      translate6frames.sh in=proteins.faa.gz aain=t aaout=f out=nt.fna
                      # clumpify the nucleotide version
                      clumpify.sh in=nt.fna out=clumped.fna
                      # translate back to amino acids
                      translate6frames.sh in=clumped.fna out=protein2.faa.gz frames=1 tag=f zl=6



                      • #12
                        I ran some benchmarks on 100x NextSeq E.coli data, to compare file sizes under various conditions:

                        [Attached chart: compressed file sizes under various conditions]

                        This shows the file size, in bytes. Clumpified data is almost as small as mapped, sorted data, but takes much less time. The exact sizes were:
                        Code:
                        100x.fq.gz	360829483
                        clumped.fq.gz	251014934
                        That's a 30.4% reduction. Note that this was for NextSeq data without binned quality scores. When the quality scores are binned (as is the default for NextSeq) the increase in compression is even greater:

                        Code:
                        100x_binned.fq.gz	267955329
                        clumped_binned.fq.gz	161766626
                        ...a 39.6% reduction. I don't recommend quality-score binning, though Clumpify does have the option of doing so (with the quantize flag).
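                        For reference, using that option might look something like this (a sketch; file names are placeholders and I'm assuming the simple boolean form of the flag):

                        Code:
                        # reorder and also bin the quality scores (lossy, but compresses considerably better)
                        clumpify.sh in=reads.fq.gz out=clumped_binned.fq.gz reorder quantize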



                        This is the script I used to generate these sizes and times:
                        Code:
                        # clumpify without reordering, with reordering, and with restricted memory
                        time clumpify.sh in=100x.fq.gz out=clumped_noreorder.fq.gz
                        time clumpify.sh in=100x.fq.gz out=clumped.fq.gz reorder
                        time clumpify.sh in=100x.fq.gz out=clumped_lowram.fq.gz -Xmx1g
                        # bzip2-compressed output, clumpified and non-clumpified
                        time clumpify.sh in=100x.fq.gz out=clumped.fq.bz2 reorder
                        time reformat.sh in=100x.fq.gz out=100x.fq.bz2
                        # map, then sort and index the bam with the generated bamscript
                        time bbmap.sh in=100x.fq.gz ref=ecoli_K12.fa.gz out=mapped.bam bs=bs.sh; time sh bs.sh
                        # convert the sorted bam back to compressed fastq/sam for size comparison
                        reformat.sh in=mapped_sorted.bam out=sorted.fq.gz zl=6
                        reformat.sh in=mapped_sorted.bam out=sorted.sam.gz zl=6
                        reformat.sh in=mapped_sorted.bam out=sorted.fq.bz2 zl=6



                        • #13
                          Interesting tool, though I wish it could deal with "twin files", since those are the initial "raw files" from Illumina's bcl2fastq output. Additionally, many tools require the pairs to be separated ... converting back and forth :-)



                          • #14
                            OK, I'll make a note of that... there's nothing preventing paired-file support; it's just simpler to write for interleaved files when there are stages involving splitting into lots of temp files. But I can probably add it without too much difficulty.



                            • #15
                              Hello Brian,

                              I started to use Clumpify, and the file size was reduced by ~25% on average for NextSeq Arabidopsis data. Thanks for the development!

                              In a recent run on HiSeq maize data, I got an error for some (but not all) of the files. At first the run would get stuck at fetching reads and eventually fail due to insufficient memory (set to 16 GB), even though the memory estimate was ~2 GB.

                              HTML Code:
                              Clumpify version 36.71
                              Memory Estimate:        2685 MB
                              Memory Available:       12836 MB
                              Set groups to 1
                              Executing clump.KmerSort [in=input.fastq.bz2, out=clumped.fastq.gz, groups=1, ecco=false, rename=false, shortname=f, unpair=false, repair=false, namesort=false, ow=true, -Xmx16g, reorder=t]
                              
                              Making comparator.
                              Made a comparator with k=31, seed=1, border=1, hashes=4
                              Starting cris 0.
                              Fetching reads.
                              Making fetch threads.
                              Starting threads.
                              Waiting for threads.
                              =>> job killed: mem job total 17312912 kb exceeded limit 16777216 kb
                              When I increased the memory to 48 GB, the run was killed at the "making clumps" stage without a specific reason:

                              HTML Code:
                              Starting threads.
                              Waiting for threads.
                              Fetch time:     321.985 seconds.
                              Closing input stream.
                              Combining thread output.
                              Combine time:   0.108 seconds.
                              Sorting.
                              Sort time:  33.708 seconds.
                              Making clumps.
                              /home/cc5544/bin/clumpify.sh: line 180: 45220 Killed
                              Do you know what may be the cause of this situation? Thank you.
