  • multithreaded jobs

    I got a question from our cluster IT that I could not answer. I am having problems running BWA, samtools, or blastx in multithreaded mode on the cluster. Cluster resources are normally reserved by core, which means a multithreaded job may be spread across multiple nodes. BWA, samtools, and blast+ run fine on my two quad-core CPUs with hyperthreading, i.e. with 16 threads. The question was: can BWA, samtools, and blast+ actually run with multiple threads when spread across several nodes? If not, that answers the question. If yes, are there any specifics/peculiarities in scheduling the resources?

  • #2
    Depending on what scheduling software your cluster uses, you can specify multiple threads along with an equivalent number of cores (-n in LSF) to go with them.

    That said, you would want to be judicious when spreading jobs across cores/nodes. Depending on what else is running on a particular server/node, you may overload it (which your admins may not like). There may also be I/O problems if the cluster does not have a high-performance storage system available and your jobs get spread across several nodes.

    In the case of LSF you can use flags (-x) to exclusively reserve a node, with multiple cores, for your use, which ensures you do not step on someone else's jobs. There are probably equivalent options in SGE.
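
    In LSF terms, that might look like the sketch below (the commands and file names are illustrative, and queue policies vary by site):

    ```shell
    # Keep all 8 requested slots on one host so the 8 threads stay together
    # (span[hosts=1] is an LSF resource-requirement string):
    bsub -n 8 -R "span[hosts=1]" "bwa aln -t 8 ref.fa reads.fq > reads.sai"

    # Or reserve a node exclusively with -x, as mentioned above:
    bsub -n 8 -x "blastx -query q.fa -db nr -num_threads 8 -out q.out"
    ```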



    • #3
      Definitely need to find out what your scheduling software is. But in general, I prefer parallelization to multithreading. It is much easier and I find that I run into fewer problems.



      • #4
        Much better than running a single multithreaded instance of blast across many nodes is to split your input (e.g. with fastasplitn) and then run multiple instances of blast, so that the maximum number of threads per instance equals the number of cores of one CPU. With SGE this is achieved by using the smp parallel environment. You can make a simple bash script, e.g.:

        Code:
        cat blastp.sh
        
        #!/bin/bash
        #$ -N blastp
        #$ -j y
        #$ -cwd
        #$ -pe smp 8
        #$ -R y
        blastp -query input.${SGE_TASK_ID} -db nr -lotsOfFlags -outfmt 6 -num_threads 8 -out ${SGE_TASK_ID}.tsv
        And call it:

        qsub -t 1-10:1 blastp.sh

        to start/queue 10 instances of blastp (each with 8 threads). Input for this would be input.1, input.2, ..., input.10, and you can just concatenate the results with cat.
        Last edited by rhinoceros; 04-29-2013, 12:36 PM.
        savetherhino.org
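
        If fastasplitn is not at hand, the split itself can be done with a few lines of awk (my own sketch, not from the thread; the toy sequences are placeholders for a real query file):

        ```shell
        # Toy stand-in for a large query file:
        printf '>s1\nMKVL\n>s2\nMLAG\n>s3\nMGGT\n>s4\nMPPA\n' > input.fasta

        # Round-robin split into n chunks named input.1 .. input.n, keeping each
        # record (header plus its sequence lines) in one piece:
        awk -v n=2 '/^>/ { f = "input." ((i++ % n) + 1) } { print > f }' input.fasta

        # Each chunk becomes one array-job task; afterwards the per-chunk outputs
        # 1.tsv .. n.tsv can simply be concatenated:
        #   cat *.tsv > all_hits.tsv
        ```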



        • #5
          Originally posted by rhinoceros View Post
          Much better than running a single multithreaded instance of blast across many nodes is to split your input (e.g. with fastasplitn) and then run multiple instances of blast, so that the maximum number of threads per instance equals the number of cores of one CPU. With SGE this is achieved by using the smp parallel environment. You can make a simple bash script, e.g.:

          Code:
          cat blastp.sh
          
          #!/bin/bash
          #$ -N blastp
          #$ -j y
          #$ -cwd
          #$ -pe smp 8
          #$ -R y
          blastp -query input.${SGE_TASK_ID} -db nr -lotsOfFlags -outfmt 6 -num_threads 8 -out ${SGE_TASK_ID}.tsv
          And call it:

          qsub -t 1-10:1 blastp.sh

          to start/queue 10 instances of blastp (each with 8 threads). Input for this would be input.1, input.2, ..., input.10, and you can just concatenate the results with cat.
          Yep, I thought about that, but I was not sure it is as efficient as (or more efficient than) multithreading. blastp is legacy blast, isn't it? If so, one has no choice, as multithreading was introduced with blast+, wasn't it? Anyway, good to know it works well, thanks!



          • #6
            Originally posted by Khen View Post
            Definitely need to find out what your scheduling software is. But in general, I prefer parallelization to multithreading. It is much easier and I find that I run into fewer problems.
            Well, that is a separate interesting question. GNU parallel is one way to go, but I have not quite figured out how to run it on a cluster yet. I am not a programmer, I mean I do not have sufficient experience yet, so how would you request resources for GNU parallel? Unless you parallelize in some other way...



            • #7
              Originally posted by yaximik View Post
              Well, that is a separate interesting question. GNU parallel is one way to go, but I have not quite figured out how to run it on a cluster yet. I am not a programmer, I mean I do not have sufficient experience yet, so how would you request resources for GNU parallel? Unless you parallelize in some other way...
              Can we stick to the original question you had posted? What scheduling software is the cluster you have access to running? On a cluster you are going to get the most benefit by using the job scheduling system.

              Do I sense some resistance on your side to give up on your dedicated workstation (where you are in control)? In the long run using a dedicated cluster would allow you to get much more done.
              Last edited by GenoMax; 04-30-2013, 03:35 AM.



              • #8
                GNU parallel is not designed for clusters. LSF and SGE/UGE are.



                • #9
                  Originally posted by GenoMax View Post
                  Can we stick to the original question you had posted? What scheduling software is the cluster you have access to running? On a cluster you are going to get the most benefit by using the job scheduling system.
                  Oh, it was not my intention to swing away; I was just not sure how the parallelization was achieved. Our grid (I incorrectly call it a cluster) uses SGE. Our documentation is very sketchy, so I have to browse a lot of docs posted for other grids, or ask silly questions if I cannot find an answer. Also, local implementations differ, so if I happen to find a seemingly usable script, it is often misunderstood by our scheduler.

                  Do I sense some resistance on your side to give up on your dedicated workstation (where you are in control)? In the long run using a dedicated cluster would allow you to get much more done.
                  I use my server to test multithreaded jobs so I won't alienate IT by crashing nodes on the grid. I certainly would prefer a dedicated cluster equipped with a bunch of GPUs, but it takes time to get funds for that; at the current 8% payline at NIH it is quite challenging. So for now I have only my server and the local SGE grid at my disposal.



                  • #10
                    Originally posted by lh3 View Post
                    GNU parallel is not designed for clusters. LSF and SGE/UGE are.
                    Looks like I may have hit a few wrong buttons in the message window; if so, I apologize. So GNU parallel can be used with SGE? I asked this question in a separate thread earlier but got no answers.



                    • #11
                      Parallel simply launches multiple jobs on one machine. In principle, you can launch parallel inside qsub/bsub. On LSF, it should be something like: bsub -n 8 'parallel -j 8 ...'. However, with LSF/SGE, this is a bad idea. You should just let LSF/SGE manage all your jobs. Parallel has little use except for constructing command lines. As GenoMax has suggested, just use SGE. You won't crash nodes by submitting multithreaded jobs in a small batch.
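
                      A sketch of that pattern (illustrative flags and file names; whether GNU parallel is even installed on the compute nodes is another site-specific question):

                      ```shell
                      # Ask LSF for 8 slots on one host, then let parallel keep 8
                      # single-threaded blastp processes running inside that allocation:
                      bsub -n 8 -R "span[hosts=1]" \
                        "parallel -j 8 'blastp -query {} -db nr -num_threads 1 -out {}.tsv' ::: chunk.*"
                      ```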

                      Common use of SGE (such as specifying #cpus) works everywhere. If your system admins set resource limits on queues, ask them. We can offer little help.

                      For most NGS analyses, investing in GPUs is a waste of money until you really understand how they work and/or what programs are available.



                      • #12
                        So we now know that you are going to use SGE.

                        Can you take a moment and clarify if you are using a true "grid" or a compute "cluster". Here is a posting that would help with the definitions.

                        If you are truly using a "grid", meaning compute resources that are connected over a non-local network, then trying to split/multi-thread jobs would be a bad idea.



                        • #13
                          Originally posted by GenoMax View Post
                          So we now know that you are going to use SGE.

                          Can you take a moment and clarify if you are using a true "grid" or a compute "cluster". Here is a posting that would help with the definitions.

                          If you are truly using a "grid", meaning compute resources that are connected over a non-local network, then trying to split/multi-thread jobs would be a bad idea.
                          By those definitions it is a grid. It is probably located somewhere on the campus, composed of nodes with at least three different architectures, and connected by an InfiniBand network. At least they officially call it a grid.

                          It looks like I have already learned the answer the hard way. My last 64- and 96-thread blastx jobs died with the following error:

                          Code:
                          A daemon (pid 19390) died unexpectedly with status 137 while attempting
                          to launch so we are aborting.
                          There may be more information reported by the environment (see above).
                          This may be because the daemon was unable to find all the needed shared
                          libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
                          location of the shared libraries on the remote nodes and this will
                          automatically be forwarded to the remote nodes.
                          
                          mpirun noticed that the job aborted, but has no info as to the process
                          that caused that situation.
                          The scheduling script is below:
                          Code:
                          #$ -S /bin/bash
                          #$ -cwd
                          #$ -N SC3blastx_64-96thr
                          #$ -pe openmpi* 64-96
                          #$ -l h_rt=24:00:00,vf=3G
                          #$ -j y
                          #$ -M [email protected]
                          #$ -m eas
                          #
                          # Load the appropriate module files
                          # Should be loaded already
                          #$ -V
                          
                          mpirun -np $NSLOTS blastx -query myquery.fasta -db nr -out query.out -evalue 0.001 -max_intron_length 100000 -outfmt 5 -num_alignments 20 -lcase_masking -num_threads $NSLOTS
                          I asked grid IT and they said they had to kill it, as the job was overloading nodes: they saw loads up to 180, instead of close to 12, on 12-core nodes. They think blastx is not an OpenMPI application, so mpirun is spawning 64-96 blastx processes, each of which then starts up to 96 worker threads. Or, if blastx can work with OpenMPI, my mpirun/blastx syntax is wrong. Is this correct?

                          I was advised earlier by someone from the OpenMPI user group to use -pe openmpi [ARG], where ARG = number_of_processes x number_of_threads, and then pass the desired number of threads as 'mpirun -np $NSLOTS --cpus-per-proc [number_of_threads]'. When I did that, I got an error that more threads were requested than there are physical cores.

                          Oh, well... Looks like splitting large data files into thousands of smaller pieces is the only option for using blastx. Any other suggestions?



                          • #14
                            As rhinoceros has suggested, you should split your input files into small batches. Blast+ is not MPI-aware. There is an mpiBLAST, but as I remember, it is not based on blast+.
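
                            Concretely, the failing script above could be recast as a plain SGE array job with no mpirun at all, along the lines of rhinoceros' earlier example (a sketch; the slot count, runtime, and memory requests are guesses to adapt to your site):

                            ```shell
                            #!/bin/bash
                            #$ -S /bin/bash
                            #$ -cwd
                            #$ -N blastx_array
                            #$ -j y
                            #$ -pe smp 12
                            #$ -R y
                            #$ -l h_rt=24:00:00,vf=3G
                            # One chunk per task; threads never exceed the cores of one node:
                            blastx -query myquery.${SGE_TASK_ID} -db nr -out ${SGE_TASK_ID}.out \
                              -evalue 0.001 -max_intron_length 100000 -outfmt 5 \
                              -num_alignments 20 -lcase_masking -num_threads 12
                            ```

                            submitted with something like qsub -t 1-100 blastx_array.sh after splitting myquery.fasta into myquery.1 .. myquery.100.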



                            • #15
                              Here are two useful pages:

                               [two links to Imperial College London pages]

