  • Diegodescarpates
    replied
    Originally posted by krobison:
    In addition to digital normalization, you might try the Minia assembler, which is intended to be very memory efficient.

    Also, Amazon EC2 is quite cheap as a source of compute power. You should be able to assemble this on EC2 with Ray for <10 euros -- one Quad Extra Large High Memory instance can devour much larger datasets in an hour or so.
    Thanks for the information. I am trying Minia...

    Best regards,

    Diego
    Last edited by Diegodescarpates; 02-05-2013, 10:26 AM.



  • Diegodescarpates
    replied
    Thanks for your reply, winsettz.

    I have access to a working draft sequence (11x coverage); that's why I would prefer de novo assembly.
    I am still testing Gossamer, so I can't report its memory footprint yet, but if I remember correctly it didn't exceed 8 GB of RAM with the 36 bp reads. With the second dataset the footprint was 8 GB of RAM plus 5 GB of swap.

    Thanks for the link, I'll look into it.

    Best regards,

    Diego
    Last edited by Diegodescarpates; 02-05-2013, 09:28 AM.



  • krobison
    replied
    In addition to digital normalization, you might try the Minia assembler, which is intended to be very memory efficient.



    Also, Amazon EC2 is quite cheap as a source of compute power. You should be able to assemble this on EC2 with Ray for <10 euros -- one Quad Extra Large High Memory instance can devour much larger datasets in an hour or so.
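    As a rough sketch of that combination (file names and parameter values here are illustrative, and the option names follow recent khmer and Minia releases -- check each tool's help output):

```shell
# Sketch only: digital normalization with khmer, then assembly with Minia.
# reads.interleaved.fq is a placeholder for your interleaved paired-end data.

# Down-sample reads to ~20x median k-mer coverage (khmer's normalize-by-median).
normalize-by-median.py -k 20 -C 20 -p \
    -o reads.normalized.fq reads.interleaved.fq

# Assemble the normalized reads with Minia (31-mers, minimum k-mer abundance 3).
minia -in reads.normalized.fq -kmer-size 31 -abundance-min 3 -out assembly
```

    Normalization mostly matters for the deep 100 bp dataset; Minia itself should stay well under 8 GB for a 6.5 Mbp genome.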



  • winsettz
    replied
    What backbone would you have access to?

    8 GB is going to be tough. My typical tricks for lowering memory usage are longer k-mer words and Velvet's -create_binary option (though I work with MiSeq data), but I'm not sure that will fit within an 8 GB limit. What is your memory footprint when assembling only the 36 bp paired-end reads?

    Edit: C. Titus Brown has some workflows which may help. They are intended for very large metagenomic assemblies, but may be useful.
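    A minimal sketch of those two tricks together (the directory, file name, and k value are illustrative, and -create_binary must be supported by your Velvet build):

```shell
# Sketch only: Velvet with a longer (odd) k-mer and binary sequence storage.
# -create_binary stores reads in a binary CnyUnifiedSeq file instead of the
# plain-text Sequences file; a longer k shrinks the de Bruijn graph.
velveth asm_k31 31 -create_binary -fastq -shortPaired reads_100bp_interleaved.fq
velvetg asm_k31 -exp_cov auto -cov_cutoff auto
```

    Note that k cannot exceed the read length, so the 36 bp dataset is the limiting factor for how long a k-mer you can use.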

    Last edited by winsettz; 02-05-2013, 08:46 AM.



  • What is the best and most RAM-efficient pipeline for de novo assembly of...

    Hello everyone,

    What is the best and most RAM-efficient pipeline for de novo assembly with two datasets: about 6 million Illumina paired-end reads of 36 bp and 44 million Illumina paired-end reads of 100 bp? It's a bacterial genome of 6.5 Mbp.

    I tried Velvet, ABySS, SOAPdenovo... but it seems impossible with 8 GB of RAM.
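    For scale, the two datasets together represent extremely deep coverage, which is a large part of why de Bruijn graph assemblers run out of memory (every sequencing error adds spurious k-mers). A back-of-the-envelope check, assuming the figures above are total read counts rather than pair counts:

```shell
# Rough combined coverage from the numbers in the post
# (assumes 6M and 44M are total read counts, not pair counts).
awk 'BEGIN {
    bases = 6e6 * 36 + 44e6 * 100          # total sequenced bases
    printf "~%.0fx coverage of a 6.5 Mbp genome\n", bases / 6.5e6
}'
# prints: ~710x coverage of a 6.5 Mbp genome
```

    At hundreds-fold coverage, down-sampling (taking a subset of the 100 bp reads, or digital normalization) loses little information and cuts memory use sharply.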

    Would mapping the Illumina reads to a backbone be a solution? If so, which pipeline?
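    If a draft reference is available, a mapping-first route is possible; one hedged sketch with BWA-MEM and samtools (file names are placeholders, and this uses current samtools syntax):

```shell
# Sketch only: reference-guided mapping against a draft "backbone" sequence.
bwa index backbone.fa                        # index the draft once
bwa mem -t 4 backbone.fa r1.fq r2.fq \
    | samtools sort -o mapped.sorted.bam -   # map and coordinate-sort
samtools index mapped.sorted.bam             # ready for consensus calling
```

    The caveat is that mapping only recovers sequence present in the backbone; regions absent from the draft would still need de novo assembly.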

    Thanks in advance for your help.

    Diego
