  • ctseto
    replied
    Haven't seen any as of late 2018, and I've been looking since getting back into de novo assembly...
    It's always possible that, as someone just getting back in, I've missed something.

    Edit: I see MEGAHIT can use a GPU for graph construction.
    Last edited by ctseto; 12-06-2018, 07:14 AM.

  • mchaisso
    replied
    Originally posted by davispeter:
    Hmmm, thanks for the reply. But what about the Euler approach? In the paper I mentioned, the Euler approach is implemented in parallel. Does that mean the Euler approach is not good?
    Unfortunately, under the strict definition of an Eulerian tour, finding a full Eulerian tour is meaningless when there are errors in the sequences and repeats longer than k. De Bruijn-based assemblers instead output contigs that represent unambiguous (and sequencing-error-free) paths in the traversal of the de Bruijn graph. The original power of the de Bruijn approach was an efficient encoding of overlaps between very short (30 nt) reads.
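
    To make this concrete, here is a minimal sketch of the idea in Python (illustrative only, not mine or any particular assembler's actual code): build the de Bruijn graph from reads, then report only the unambiguous paths as contigs. Note how the repeated (k-1)-mer "ACG" breaks the walk into two contigs:

        from collections import defaultdict

        def build_graph(reads, k):
            # Edge x -> y for every k-mer with (k-1)-mer prefix x, suffix y.
            graph = defaultdict(set)
            for read in reads:
                for i in range(len(read) - k + 1):
                    kmer = read[i:i + k]
                    graph[kmer[:-1]].add(kmer[1:])
            return graph

        def contigs(graph):
            indeg = defaultdict(int)
            for node in list(graph):
                for succ in graph[node]:
                    indeg[succ] += 1
            out = []
            # Start a contig on every edge leaving a branching or source
            # node; extend while the path stays unambiguous (exactly one
            # way in and one way out).
            for node in list(graph):
                if indeg[node] != 1 or len(graph[node]) != 1:
                    for succ in graph[node]:
                        path, cur = node, succ
                        while indeg[cur] == 1 and len(graph.get(cur, ())) == 1:
                            path += cur[-1]
                            cur = next(iter(graph[cur]))
                        out.append(path + cur[-1])
            return out

        # The repeat splits the walk into 'ACGTACG' and 'ACGGA'.
        print(contigs(build_graph(["ACGTACGGA"], 4)))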

    -mark

  • lh3
    replied
    I should clarify that I am not ruling out the possibility of a good GPU assembler, either. I am just saying that we are not there yet. I also agree that the GPU-based aligners are impressive work.

  • samanta
    replied
    Here are the links.

    A few days back, a reader asked us on Twitter whether de Bruijn graph-based assemblers could save and reload de Bruijn graphs from one another. The short Twitter answer was no. The long answer follows here.


    We have been going through various web-based resources on high-quality hash functions and made a startling discovery: none of the good websites is maintained by members of computer science departments at top universities, or even second-rank ones. Based on our highly anecdotal evidence, computer science professors stopped thinking about hash functions many decades ago. That seemed puzzling, because in the world where it matters, research on hash functions still attracts big money.

  • Aqua
    replied
    Thanks Samanta and lh3. I'm not ruling out the possibility of implementing an ultra-fast assembler on a GPU either, but the reduction-heavy nature of the genome assembly problem keeps it from scaling well on GPUs. Take the latest GPU model, the NVIDIA GTX Titan: it has 2,600+ cores but only ~300 GB/s of memory bandwidth, so each core gets only ~100 MB/s, and that is before considering that the optimum is reached only with coalesced memory access, which is almost impossible to achieve whether you use a DBG, a string graph, or a greedy approach. Another problem is that a GPU has only a limited amount of on-board memory (3-12 GB); swapping between host and GPU memory is possible but extremely slow.
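
    As a quick back-of-envelope check of those numbers (using the figures as quoted above, not official spec-sheet values):

        # Per-core bandwidth is just the aggregate divided across cores.
        cores = 2688                # GTX Titan CUDA cores ("2600+" above)
        bandwidth_gb_per_s = 300    # ~300 GB/s aggregate memory bandwidth
        per_core_mb_per_s = bandwidth_gb_per_s * 1024 / cores
        print(f"~{per_core_mb_per_s:.0f} MB/s per core")  # ~114 MB/s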

    Alignment, by contrast, is mainly a "map" problem in the MapReduce sense, which makes it well suited to GPUs and to other HPC accelerators like FPGAs and MIC. Plenty of work has been done here: SOAP3-dp (http://arxiv.org/abs/1302.5507) and CUSHAW2-GPU (http://cushaw2.sourceforge.net/homepage.htm#latest) have achieved more than 10x acceleration over CPU aligners and, most importantly, the extra computational power buys much higher sensitivity and accuracy in opening large gaps.

    BTW, frankly speaking, CPU assemblers, say SOAPdenovo2 and ALLPATHS-LG, still have a lot of room for improvement. Samanta has a very good discussion of the hash functions used in assemblers (http://homolog.us/blogs). One question: why do we have to use standard, general-purpose hash functions in an assembler? The only property an assembler needs from its hash function is evenness, so why should we care so much about avalanche tests? A sketch of that idea follows below.
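
    For example, something as cheap as 2-bit base packing followed by a single multiply already spreads k-mers evenly over a power-of-two table. This is a hypothetical sketch; the encoding and the golden-ratio constant are arbitrary choices, not taken from any assembler:

        import random

        CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
        MASK = (1 << 64) - 1

        def kmer_hash(kmer):
            # Pack 2 bits per base, then one cheap multiplicative mix:
            # no avalanche properties, just even bucket occupancy.
            h = 0
            for base in kmer:
                h = ((h << 2) | CODE[base]) & MASK
            return (h * 0x9E3779B97F4A7C15) & MASK  # golden-ratio constant

        # Rough evenness check: the top 10 bits pick one of 1024 buckets.
        buckets = [0] * 1024
        for _ in range(100_000):
            kmer = "".join(random.choice("ACGT") for _ in range(31))
            buckets[kmer_hash(kmer) >> 54] += 1
        print(min(buckets), max(buckets))  # both near 100000/1024 ~ 98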

  • lh3
    replied
    According to the tech report, 90% of the total time goes to "I/O". If I understand correctly, this "I/O" phase, unusually, includes k-mer counting and is done purely on the CPU. K-mer counting is one of the slowest and most memory-hungry steps in the construction of a de Bruijn graph. If we cannot parallelize this step on the GPU, we will not get much speed-up.
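
    For a sense of where the memory goes, here is a naive k-mer counter (a sketch, not GPU-Euler's code): it keeps one hash-table entry per distinct k-mer, and sequencing errors add spurious low-count k-mers that inflate the table further.

        from collections import Counter

        def count_kmers(reads, k):
            # One table entry per distinct k-mer across all reads.
            counts = Counter()
            for read in reads:
                for i in range(len(read) - k + 1):
                    counts[read[i:i + k]] += 1
            return counts

        print(count_kmers(["ACGTACGTAC", "GTACGTACGT"], 5).most_common(3))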

    In addition, the reported assembly speed is slower than what I would expect from velvet. I think velvet can usually produce results in a minute or so given 20X error-free data for a ~2 Mbp genome, which is on par with GPU-Euler.

    All in all, I do not think the tech report shows that a GPU-based de Bruijn assembler is much better than CPU-based ones.

  • davispeter
    replied
    Hmmm, thanks for the reply. But what about the Euler approach? In the paper I mentioned, the Euler approach is implemented in parallel. Does that mean the Euler approach is not good?

  • samanta
    replied
    I would say the GPU is a no-go for genome assembly. We looked at various options for doing genome assembly on GPUs last year and could not make the algorithms scale well. Genome assembly programs need very large memory bandwidth, and it is not possible to scale them well on GPUs, whose greatest benefit is access to many 'parallel' processors. Late last year I visited BGI's booth at the HPC conference in Salt Lake City and saw a number of GPU solutions being presented for various bioinformatics problems, but the genome assembly program did not seem to show any performance boost. At present our group is working on implementing a genome assembler on an FPGA, where we can get a performance boost.

    I will forward your question to BGI's Ruibang, who can probably shed more light on the current status.

  • davispeter
    started a topic DNA assembly on GPU

    DNA assembly on GPU

    I was looking for DNA assemblers that run on a GPU. I found only this paper: http://www.cs.gmu.edu/~tr-admin/pape...-TR-2011-1.pdf (GPU-Euler). But I was not satisfied with the concept and results presented in the paper. It works by finding the whole Euler tour, without any graph transformation or error correction, yet it reports results comparable to well-established assemblers like EULER-SR: a maximum contig length of 40,000, an N50 of 8,000, and so on. No other assembler works by finding the whole Euler tour, so how can this paper report such good results? Has anyone read or worked with this paper?
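
    For reference, the "whole Euler tour" idea the paper builds on is the textbook Hierholzer algorithm, sketched below in Python (this is not GPU-Euler's actual implementation). It spells out the genome only when the graph is error-free and every repeat is resolvable, which is why the reported numbers are surprising:

        from collections import defaultdict

        def eulerian_path(edges):
            graph = defaultdict(list)
            indeg = defaultdict(int)
            for u, v in edges:
                graph[u].append(v)
                indeg[v] += 1
            # Start at a node with one more outgoing than incoming edge,
            # if one exists; otherwise any node works (Eulerian circuit).
            start = edges[0][0]
            for node in list(graph):
                if len(graph[node]) - indeg[node] == 1:
                    start = node
            stack, path = [start], []
            while stack:
                node = stack[-1]
                if graph[node]:
                    stack.append(graph[node].pop())
                else:
                    path.append(stack.pop())
            return path[::-1]

        # (k-1)-mer edges of "ACGTACGGA" with k = 4; the path spells the
        # original sequence back out.
        edges = [("ACG", "CGT"), ("CGT", "GTA"), ("GTA", "TAC"),
                 ("TAC", "ACG"), ("ACG", "CGG"), ("CGG", "GGA")]
        path = eulerian_path(edges)
        print(path[0] + "".join(n[-1] for n in path[1:]))  # ACGTACGGA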
