  • martin2
    replied
    Originally posted by smg283
    This is during the "reading flowgrams" step, when it is generating output; it is the most time-intensive step of my assemblies (these are eukaryote-sized assemblies). There is both available CPU and memory, so it should be able to go faster, but something seems to be holding it back. The CPU usage is very low (~10% for each processor). The memory always tops out at ~20-21 GB, even though that is only about a third of the available memory. With the Celera assembler you can change the memory limits when compiling the source; otherwise you are limited in your maximum memory usage. I didn't know if Newbler had similar limits.

    Try forcing it to run on a single CPU core. Is it newbler or gsRunProcessor that writes into some of the log files that it will split the memory between the forked threads? I forget... And there is also a "-m" command-line switch to force in-memory computation, if my memory serves me right. ;-)
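    A sketch of the single-core suggestion: on Linux, taskset (from util-linux) pins a process to specific cores. The runAssembly launcher and the -m flag in the comment below come from this thread; verify both against your Newbler version's documentation before relying on them.

    ```shell
    # Pin a process to CPU core 0 with taskset (util-linux).
    # For Newbler you would prefix its launcher the same way, e.g.
    #   taskset -c 0 runAssembly -m -o out_dir reads.sff
    # (-m is the in-memory-computation switch mentioned above; check
    # both the launcher name and the flag for your Newbler release.)
    taskset -c 0 echo "pinned to core 0"
    ```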



  • nilshomer
    replied
    See Amdahl's Law.
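    For readers who haven't met it, Amdahl's Law bounds the speedup of a program whose parallelizable fraction is p: no matter how many cores you add, the serial fraction (1 - p) caps the speedup at 1 / (1 - p). A minimal sketch:

    ```python
    # Amdahl's Law: with fraction p of the work parallelizable across
    # n cores, the best possible speedup is 1 / ((1 - p) + p / n).
    def amdahl_speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    # Even if 90% of an assembly step parallelizes, 8 cores give well
    # under 8x, and the serial 10% caps the speedup at 10x forever.
    print(round(amdahl_speedup(0.9, 8), 2))      # 4.71
    print(round(amdahl_speedup(0.9, 10**6), 2))  # 10.0
    ```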



  • kmcarr
    replied
    Originally posted by smg283
    Is there some reason newbler won't use all the memory available ...?
    Yes, and it's the same reason any computer program may not use all available memory: it does not need to. Newbler has loaded all the data it needs and created all of its required data structures, and that totals up to (in your case) 20-22 GB.

    There is both available CPU and memory, so it should be able to go faster, but something seems to be holding it back.
    Not all algorithms are perfectly parallelizable. Embarrassingly parallel problems can be split into completely independent threads, fully utilizing all available CPUs; a BLAST search is an example, since each query sequence can be searched against the database independently of all the other queries. Tightly coupled problems cannot be completely separated: one part of the problem may depend on the result of some other part, so it cannot start until that other part is finished. If there are not enough independent parts of the problem able to run concurrently, some of your CPUs will sit idle. This is just a fact of life in computer science.

    Genome assembly is a complex problem with many stages, some of which are more easily parallelized than others. The next time you run Newbler, look at the CPU usage during the "Detangling alignments" step. That process is essentially single-threaded: one CPU will be utilized at 100% while all the rest sit idle.
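    The contrast between the two workload shapes can be sketched with a toy Python example (the task functions are illustrative stand-ins, not Newbler's internals):

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def score_query(q: int) -> int:
        # Embarrassingly parallel: each "query" is independent of the
        # others (BLAST-like), so all of them can run at once.
        return q * q

    def detangle(state: int, step: int) -> int:
        # Tightly coupled: each step needs the previous step's result,
        # so the chain is inherently single-threaded.
        return state + step

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=4) as pool:  # many cores busy
            parallel = list(pool.map(score_query, range(8)))
        serial = 0
        for step in range(8):                             # one core, in order
            serial = detangle(serial, step)
        print(parallel, serial)  # [0, 1, 4, 9, 16, 25, 36, 49] 28
    ```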

    None of your observations sound unusual or unexpected to me for a program like Newbler.



  • smg283
    replied
    This is during the "reading flowgrams" step, when it is generating output; it is the most time-intensive step of my assemblies (these are eukaryote-sized assemblies). There is both available CPU and memory, so it should be able to go faster, but something seems to be holding it back. The CPU usage is very low (~10% for each processor). The memory always tops out at ~20-21 GB, even though that is only about a third of the available memory. With the Celera assembler you can change the memory limits when compiling the source; otherwise you are limited in your maximum memory usage. I didn't know if Newbler had similar limits.



  • kmcarr
    replied
    Why do you think Newbler needs more than 22 GB of RAM? Is it performing a lot of swap operations? In my experience Newbler is fairly memory efficient.
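    On Linux, swap activity is easy to spot from the command line; a quick check with standard procps and /proc tooling (not Newbler-specific):

    ```shell
    # Watch for swap pressure while an assembly runs: a growing
    # "used" value in the Swap row of free suggests thrashing.
    free -h
    # Raw swap totals in kB, straight from the kernel:
    grep -E 'Swap(Total|Free)' /proc/meminfo
    ```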



  • smg283
    started a topic Memory Usage in Newbler 2.3

    Memory Usage in Newbler 2.3

    I am using Newbler 2.3 on an 8-core workstation with 64 GB of memory (running Ubuntu). During my Newbler assemblies, memory usage seems to be limited to ~20 GB. Is there some reason Newbler won't use all the memory available? The processors are not the limiting factor; they can be running at 10% and memory usage still won't go over ~22 GB.
