  • martin2
    replied
    Originally posted by smg283
    This is during the "reading flowgrams" step, when it is generating an output; it is the most time-intensive step of my assemblies (these are eukaryote-sized assemblies). There is both available CPU and memory, so it should be able to go faster, but something seems to be holding it back. The CPU usage is very low (~10% for each processor). The memory always tops out at ~20-21 GB, even though that is only about a third of the available memory. With Celera Assembler you can change the memory limits when compiling the source, otherwise you are limited in your maximum memory usage; I didn't know if Newbler had similar limits.

    Try forcing it to run on only a single CPU core. Is it newbler or gsRunProcessor which writes into some of the log files that it will split the memory between the forked threads? I forgot ... And there is also a "-m" command-line switch to force in-memory computation, if my memory serves me right. ;-)
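    (For illustration only: one way to pin a process to a single core on Linux, sketched in Python. os.sched_setaffinity is Linux-specific, and the runAssembly command line below is a placeholder, not a tested Newbler invocation.)

```python
import os
import subprocess

def run_on_one_core(cmd):
    """Run `cmd` restricted to CPU core 0 (Linux only)."""
    def pin_to_core_0():
        # Runs in the child just before exec: limit its CPU affinity.
        os.sched_setaffinity(0, {0})
    return subprocess.run(cmd, preexec_fn=pin_to_core_0)

# Placeholder command line -- substitute your actual Newbler invocation.
run_on_one_core(["runAssembly", "-o", "outdir", "reads.sff"])
```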



  • nilshomer
    replied
    See Amdahl's Law.
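    (Amdahl's Law in one line, for anyone following along: if only a fraction p of the work can be parallelized, the speedup on N cores is capped at 1 / ((1 - p) + p/N). A quick Python illustration; the 8 cores come from the thread, the values of p are made up.)

```python
def amdahl_speedup(p, n):
    """Upper bound on speedup when a fraction p of the work
    is parallelizable across n cores (Amdahl's Law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical parallel fractions; only the 8-core count is from the thread.
for p in (0.50, 0.90, 0.99):
    print(f"p = {p:.2f}: at most {amdahl_speedup(p, 8):.2f}x on 8 cores")
```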



  • kmcarr
    replied
    Originally posted by smg283
    Is there some reason Newbler won't use all the memory available ...?
    Yes, and it's the same reason any computer program may not use all available memory: it does not need to. Newbler has loaded all the data it needs and created all of its required data structures, and that all totals up to (in your case) 20-22 GB.

    There is both available CPU and memory, so it should be able to go faster, but something seems to be holding it back.
    Not all algorithms are perfectly parallelizable. Embarrassingly parallel problems can be split into completely independent threads, fully utilizing all available CPUs. An example of this would be a BLAST search; each query sequence can be searched against the database independently of all other queries. Tightly coupled problems cannot be completely separated; one part of the problem may depend on the result of some other part, meaning it cannot start until that first part is finished. If there are not enough independent parts of the problem able to run concurrently, then some of your CPUs will be idle. This is just a fact of life in computer science.

    Genome assembly is a complex problem with many stages, and some of these are more easily parallelized than others. Next time you run Newbler, look at the CPU usage during the "Detangling alignments" step. This process is essentially single threaded; one CPU will be utilized at 100% while all the rest sit idle.
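    (A toy Python sketch of the distinction; the "query" work is a stand-in for something BLAST-like and has nothing to do with Newbler's internals.)

```python
from multiprocessing import Pool

def search_query(query):
    # Stand-in for one independent BLAST-style search: no shared state,
    # so every query can run on a different core at the same time.
    return sum(ord(c) for c in query) % 97

if __name__ == "__main__":
    queries = [f"seq{i}" for i in range(1000)]

    # Embarrassingly parallel stage: all cores can stay busy.
    with Pool() as pool:
        hits = pool.map(search_query, queries)

    # Tightly coupled stage: each step needs the previous result,
    # so only one core does work no matter how many are available.
    chained = 0
    for h in hits:
        chained = (chained * 31 + h) % 1_000_003
    print(chained)
```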

    None of your observations sound unusual or unexpected to me for a program like Newbler.



  • smg283
    replied
    This is during the "reading flowgrams" step, when it is generating an output; it is the most time-intensive step of my assemblies (these are eukaryote-sized assemblies). There is both available CPU and memory, so it should be able to go faster, but something seems to be holding it back. The CPU usage is very low (~10% for each processor). The memory always tops out at ~20-21 GB, even though that is only about a third of the available memory. With Celera Assembler you can change the memory limits when compiling the source, otherwise you are limited in your maximum memory usage; I didn't know if Newbler had similar limits.



  • kmcarr
    replied
    Why do you think Newbler needs more than 22 GB of RAM? Is it performing a lot of swap operations? In my experience, Newbler is fairly memory-efficient.



  • smg283
    started a topic Memory Usage in Newbler 2.3

    Memory Usage in Newbler 2.3

    I am using Newbler 2.3 on an 8-core workstation with 64 GB of memory (running Ubuntu). It seems that during my Newbler assemblies, memory usage is limited to ~20 GB. Is there some reason Newbler won't use all the memory available? The processor usage is not limiting; the CPUs could be running at 10% and memory usage still won't go over ~22 GB.
