  • #16
    Can you check the output of:

    Code:
    $ ldd bison_herd
    This should list all of the linked libraries, including the path to libmpi.so.0. Make sure that path is in your LD_LIBRARY_PATH. Also check that there aren't multiple libmpi.so.0 files on the library path; if there are, you should probably keep only the one from mpich2.
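    If libmpi.so.0 turns up as "not found", a rough sketch of the fix looks like this (the mpich2 install prefix below is an assumption — substitute wherever your mpich2 actually lives):

```shell
# List the libraries bison_herd links against; a missing one is
# reported as "not found" (no-op here if the binary is absent)
ldd bison_herd 2>/dev/null | grep mpi || true

# Prepend the mpich2 lib directory (assumed path) so the runtime
# loader searches it first
MPICH2_LIB=/usr/local/mpich2/lib
export LD_LIBRARY_PATH="$MPICH2_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```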

    Comment


    • #17
      Originally posted by mvijayen:
      @dpryan: thank you for taking a look at my data.

      Now, it appears that Bison does not work with openmpi_1.4.1?? I am getting the following error:
      You're MPI implementation doesn't support MPI_THREAD_MULTIPLE, which is required for bison_herd to work.
      --------------------------------------------------------------------------
      mpiexec has exited due to process rank 0 with PID 22022 on
      node helium-login-0-2.local exiting without calling "finalize". This may
      have caused other processes in the application to be
      terminated by signals sent by mpiexec (as reported here).
      --------------------------------------------------------------------------
      Depending on the version of openmpi and how it was compiled, MPI_THREAD_MULTIPLE (support for multiple threads on a single computer/node sending or receiving data) isn't always available; it was probably a compile-time option for openmpi. You can just use "bison" instead of "bison_herd", since the former only needs MPI_THREAD_FUNNELED support, which is likely there. The command syntax for bison is effectively the same, and it'll work equally well for your dataset.

      Secondly, when I try running it after installing mpich2,this is the error I am getting:
      Command line:
      mpiexec -n 5 bison_herd -g /Users/mvijayen/bison/make_install/ref.fa -o /Users/mvijayen/bison/output/ -1 /Users/mvijayen/seq_data/sample_1.fastq -2 /Users/mvijayen/seq_data/sample_2.fastq
      Error message:
      ./bison_herd: error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory
      Unrelated to your problem, you actually need to index things first. So
      Code:
      bison_index /Users/mvijayen/bison/make_install/ref.fa
      mpiexec -n 5 bison -g /Users/mvijayen/bison/make_install/ -o /Users/mvijayen/bison/output/ -1 /Users/mvijayen/seq_data/sample_1.fastq -2 /Users/mvijayen/seq_data/sample_2.fastq
      Keep in mind that you need to compile bison with "-lmpich -lmpl" rather than "-lmpi" for it to work with mpich2 rather than openmpi (I assume this is the case for all programs using MPI). The library names differ between mpich2 and openmpi (I haven't a clue why), which I suspect is what's leading to the problem you're seeing.
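      Concretely, the swap happens on the Makefile's link line. Here's a self-contained sketch of the change using a throwaway stand-in file (Makefile.demo and the exact LIBS line are illustrative — the flags mirror the ones bison's link command uses, but match whatever your Makefile actually says):

```shell
# Stand-in for the relevant Makefile line (openmpi flavor)
printf 'LIBS = -lm -lpthread -lmpi -lbam -lz\n' > Makefile.demo

# Replace the openmpi flag with the mpich2 pair and show the result
sed 's/-lmpi /-lmpich -lmpl /' Makefile.demo
```

      In the real Makefile you'd make the same edit by hand: "-lmpi" becomes "-lmpich -lmpl", everything else stays put.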

      Comment


      • #18
        Hi! So here are some updates and errors that I am still coming across:
        $ ldd bison_herd
        gives me output in which libmpi.so.0 is not found. Since I was experiencing this error with the locally installed mpich2, I decided to use the version installed on my computing cluster.

        So as dpryan suggested, I compiled bison with "-lmpich -lmpl" to work with mpich2, and here is the error:
        module load mvapich2_gnu_1.9a
        make
        Error:

        mpicc -c -Wall -O3 -I/Users/mvijayen/samtools-0.1.19 main.c -o main.o
        mpicc -Wall -O3 aux.o fastq.o genome.o slurp.o master.o common.o MPI_packing.o worker.o main.o -o bison -L/Users/mvijayen/samtools-0.1.19 -lm -lpthread -lmpich -lmpl -lbam -lz
        aux.o: In function `quit':
        aux.c: (.text+0xcf2): undefined reference to `ompi_mpi_comm_world'
        aux.o: In function `effective_nodes':
        aux.c: (.text+0xef5): undefined reference to `ompi_mpi_comm_world'
        slurp.o: In function `slurp':
        slurp.c: (.text+0x1f9): undefined reference to `ompi_mpi_comm_world'
        slurp.c: (.text+0x1fe): undefined reference to `ompi_mpi_int'
        slurp.c: (.text+0x23d): undefined reference to `ompi_mpi_comm_world'
        slurp.c: (.text+0x24d): undefined reference to `ompi_mpi_int'
        slurp.c: (.text+0x277): undefined reference to `ompi_mpi_comm_world'
        slurp.c: (.text+0x287): undefined reference to `ompi_mpi_byte'
        slurp.c: (.text+0x347): undefined reference to `ompi_mpi_byte'
        slurp.c: (.text+0x34d): undefined reference to `ompi_mpi_comm_world'
        slurp.c: (.text+0x3a6): undefined reference to `ompi_mpi_comm_world'
        slurp.c: (.text+0x3c3): undefined reference to `ompi_mpi_byte'
        slurp.c: (.text+0x3ff): undefined reference to `ompi_mpi_byte'
        slurp.c: (.text+0x405): undefined reference to `ompi_mpi_comm_world'
        worker.o: In function `worker_node':
        worker.c: (.text+0x1c2): undefined reference to `ompi_mpi_comm_world'
        worker.c: (.text+0x1cc): undefined reference to `ompi_mpi_int'
        worker.c: (.text+0x390): undefined reference to `ompi_mpi_comm_world'
        worker.c: (.text+0x39b): undefined reference to `ompi_mpi_byte'
        worker.c: (.text+0x3dd): undefined reference to `ompi_mpi_comm_world'
        worker.c: (.text+0x3ea): undefined reference to `ompi_mpi_byte'
        worker.c: (.text+0x453): undefined reference to `ompi_mpi_comm_world'
        worker.c: (.text+0x45e): undefined reference to `ompi_mpi_byte'
        worker.c: (.text+0x5be): undefined reference to `ompi_mpi_comm_world'
        worker.c: (.text+0x5c9): undefined reference to `ompi_mpi_int'
        worker.c: (.text+0x5e3): undefined reference to `ompi_mpi_comm_world'
        worker.c: (.text+0x5ee): undefined reference to `ompi_mpi_byte'
        collect2: ld returned 1 exit status
        make: *** [align] Error 1

        From the bison seqanswers page (http://seqanswers.com/forums/archive...p/t-31314.html) it seems like this problem was solved when mpich was replaced with openmpi. But what if I would like to use mpich2?

        So I thought I would try openmpi. Although I can compile bison with openmpi without issues (after switching back to "-lmpi", of course), the version on my computing cluster does not support multiple threading (which is fine, since I can use bison instead of bison_herd). However, here is the error I am seeing when using bison:
        mpiexec -n 5 bison -g /Users/mvijayen/bison/make_install/ -o /Users/mvijayen/bison/output/ -1 /Users/mvijayen/seq_data/sample_1.fastq -2 /Users/mvijayen/seq_data/sample_2.fastq
        bison: invalid option -- 1
        Try `bison --help' for more information.
        bison: invalid option -- 1
        Try `bison --help' for more information.
        bison: invalid option -- 1
        Try `bison --help' for more information.
        bison: invalid option -- 1
        Try `bison --help' for more information.
        bison: invalid option -- 1
        Try `bison --help' for more information.
        --------------------------------------------------------------------------
        mpiexec noticed that the job aborted, but has no info as to the process
        that caused that situation.
        --------------------------------------------------------------------------

        bison --help does not list the aligner's arguments (-1 and -2 are not there). Also, I did index prior to running bison, and I am using version 2.3.

        Comment


        • #19
          It looks like you need to run "make clean" before running "make" again. From the output, I'm guessing that the object (*.o) files were still around from when you compiled with openmpi. But you're trying to use mpich2 now, so you end up mixing the two, which doesn't work.

          For what it's worth, I use mpich2 on our cluster and openmpi on my workstation, so bison should work properly with both.

          The "invalid option ..." stuff is actually generated by your MPI installation. I'm guessing that you still have a bit of mpich2 stuff sitting around on your local machine.

          Comment


          • #20
            For what it's worth, I'm installing openmpi-1.4.1 on my workstation for testing purposes. I've never tested on something that old (4 years is ancient), so I'll update things if needed.

            Comment


            • #21
              I've installed and tested things with openmpi-1.4.1 (I compiled with --enable-progress-threads to allow MPI_THREAD_FUNNELED but not MPI_THREAD_MULTIPLE) and everything works fine. I suppose I could just upload a binary for you to use, if that'll help.

              Comment


              • #22
                I think part of the confusion may have been due to mvijayen not doing a "make clean" between compiles, as you had suggested.

                mvijayen is using "modules" on the local cluster, so as long as the right module is sourced there should be no problems using either mpich2 or OpenMPI.

                A binary may not work (unless you statically link all libraries), since the cluster configuration in mvijayen's case would not be identical to yours.

                Comment


                • #23
                  Good point.

                  Comment


                  • #24
                    Update thus far, and more errors:
                    I tried "make clean" before compiling, and that did not appear to fix the problem. I think too many trials switching between mpich2 and openmpi did my make_install folder no good. Therefore, I started all over again, beginning with unpacking, and this time was careful to first unload all modules and load only mpich2, as shown here:
                    Currently Loaded Modulefiles:
                    1) modules
                    2) use.own
                    3) python_2.6.4/(python26)
                    4) intel_11.1.072/(intel)
                    5) mkl_10.2.5.035/(mkl)
                    6) git_1.7.1
                    7) python_2.7/(python27)
                    8) limic2_0.5.5
                    9) mvapich2_gnu_1.9a/(mvapich2_gnu)

                    Then the compilation worked with mpich2 (of course, the Makefile was modified to work with mpich2). Indexing worked just fine too. Since threading is not an issue for me with mpich2 (versus openmpi), I ran the command line for bison_herd:
                    mpiexec -env MV2_ENABLE_AFFINITY=0 -n 5 /Users/mvijayen/bison/make_install/bison_herd -g /Users/mvijayen/bison/make_install/ref.fa -o /Users/mvijayen/bison/output/ -1 /Users/mvijayen/seq_data/sample_1.fastq -2 /Users/mvijayen/seq_data/sample_2.fastq

                    This only worked once I added -env to disable processor affinity. I had also included the complete path to bison_herd (this wasn't strictly necessary, but I did it anyway because when running bison without the complete path, the program being picked up was the system's GNU bison parser generator, which caused the "bison: invalid option -- 1" error). However, the error that I am seeing now is:
                    [helium-login-0-2.local:mpi_rank_0][error_sighandler] Caught error: Segmentation fault (signal 11)

                    Since this error is usually due to the program accessing unassigned memory, I am not sure what to alter now. I may try scaling back the optimization level in the Makefile (-O3 to -O) and see if that helps. Any suggestions?

                    Comment


                    • #25
                      I'm glad you mentioned the GNU bison parser being picked up, since I suspect that was what caused the "invalid option..." messages earlier (I actually just ran into the same error for the first time a few minutes ago, since I needed to install GNU Bison to get mvapich2 set up).
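                      For anyone hitting this later, the name collision is plain PATH shadowing. A self-contained sketch of the mechanism (the two stand-in scripts below are hypothetical, just to demonstrate it):

```shell
# Build two throwaway "bison" executables in separate directories
demo=$(mktemp -d)
mkdir -p "$demo/gnu" "$demo/aligner"
printf '#!/bin/sh\necho "GNU parser generator"\n' > "$demo/gnu/bison"
printf '#!/bin/sh\necho "bison the aligner"\n'    > "$demo/aligner/bison"
chmod +x "$demo/gnu/bison" "$demo/aligner/bison"

# With the GNU stand-in first on PATH, a bare "bison" resolves to it
PATH="$demo/gnu:$demo/aligner:$PATH"
bison                    # prints: GNU parser generator
"$demo/aligner/bison"    # an absolute path sidesteps the collision
```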

                      After a bit of fiddling with configure options, I was able to get mvapich2 installed on my workstation. For what it's worth, both bison and bison_herd worked for me there and didn't cause any segfaults with the example data you had sent earlier.

                      I'm a bit at a loss at this point. I've previously run everything through valgrind to look for memory access errors, so those should be ironed out. Given what you've written, you seem fairly computer savvy, so perhaps recompile with the "-g" option to enable debug symbols, enable core dumps if needed, and then load the resulting core dump into GDB to find where the problem occurs. Another (possibly more painful, but who knows) option would be to try and get me an account on the system, so I could just directly debug things (if it helps convince the sysadmin, I'm actually from the midwest (Ohio)).
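                      The debugging route above, sketched in shell (the CFLAGS override and file names are assumptions — adapt them to bison's actual Makefile):

```shell
# Allow core files to be written (off by default on many systems;
# a hard limit may cap this)
ulimit -c unlimited || true
ulimit -c               # prints the current core-size limit

# Rebuild with debug symbols, rerun until it segfaults, then inspect
# the core in gdb; these lines are illustrative, not run here:
#   make clean && make CFLAGS="-Wall -g -O0"
#   mpiexec -n 5 ./bison_herd ...   # crashes, leaving a core file
#   gdb ./bison_herd core           # then "bt" shows the crash site
```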

                      Other options that come to mind would be to look into BSeeker2. I've never used it, but I saw someone (on the bioconductor email list?) recently make reference to it allowing local alignment. Alternatively, bsmooth might also allow local alignment; I can't say I've checked.

                      If all else fails, I can just make a third variant of bison that simply doesn't use MPI. This actually wouldn't be very difficult.
                      Last edited by dpryan; 01-22-2014, 02:31 PM. Reason: One of these days I'll start proofreading prior to posting.

                      Comment


                      • #26
                        After a few days of playing around and trying to get bison to work, I finally got it working, and it seems to be exactly the program (as far as output format) I was looking for to output local methylation values! Briefly, I started over with a clean build tree and made changes to the Makefile: I changed the directories to match the samtools location and make_install, but commented out both MPI lines because, according to my HPC administrator, the mpicc wrapper would take care of the libraries. I then loaded mvapich2_gnu_1.9a into my environment and used the bison_herd command line with MV2_ENABLE_AFFINITY=0 added on. IT WORKED! Also, I tried bison_methylation_extractor and that worked too! Thank you again for the help troubleshooting!

                        Comment


                        • #27
                          Glad to hear that things are working now. I'll add a bit into the README/Makefile of the next version regarding compiling with and using mvapich2 (it's unfortunate that every §/"$=ing MPI package does things differently).

                          Let me know if you run into any other issues or would like other features.

                          BTW, you might still want to quality/adapter trim your data (trim_galore works well enough). With the test reads you sent me, I got better results (100% alignment versus 66%) after trimming, even when using local alignment (if read #2 degenerates into random sequence, then local alignment ends up not helping that much, though I guess playing with the defaults might change that).

                          Comment


                          • #28
                            I will certainly try trim_galore and see how that works. Where in the file could I determine that there is 100% alignment versus 66%? What I see in the output file under alignment for the test reads that I sent you is:
                            Alignment:
                            25587 total paired-end reads analysed
                            53 paired-end reads mapped ( 0.21%).

                            Comment


                            • #29
                              You only sent me 3 reads, so that must be something different.

                              Comment


                              • #30
                                Would you by any chance know if 0.21% of reads being aligned is normal?

                                Comment
