  • combine cells with the same length?

    Hi all,

    I am new to PacBio, and recently I have been working with Iso-Seq datasets. Take MCF-7 as an example: I found there are 7 cells sequenced with the 3-5 kb size fraction. So I did the analysis for each of them separately, then combined the SAM files after mapping the high-quality cluster sequences to the reference.

    When I checked the cDNA_primer wiki page, it seems another tool was developed for chaining GTFs.

    Then I wondered: is it possible to provide all 7 cells to ConsensusTools and generate one big CCS file for them? Or maybe the better way is to feed all 28 cells to tofu_wrap and let it handle the size binning automatically.

    So I am confused about the strategy for analyzing samples with multiple cells of different size fractions. It seems that I have four options for constructing the FL cDNA:
    1. combine all cells -> tofu_wrap
    2. combine cells with same length -> ConsensusTools -> classify -> cluster -> collapse -> chain
    3. do not combine -> ConsensusTools -> classify -> cluster -> collapse -> chain
    4. do not combine -> ConsensusTools -> classify -> cluster -> merge sam -> collapse -> chain

    Which one is better?

    Thanks a lot

  • #2
    Hi,

    I assume by ConsensusTools you mean you are using it to generate the CCS (ReadsOfInsert) sequences. This step should be the first thing you do, before running Iso-Seq classify or Iso-Seq cluster.

    ConsensusTools is run independently for each movie (.bax.h5 file), so it does not really matter whether you combine or separate the sizes.
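
    (If you do pool movies, note that the downstream tools take a fofn, which is just a plain-text list of file paths, one movie per line. A minimal sketch, with a placeholder directory:)

    Code:
    # list every movie's .bax.h5 file; the directory is a placeholder
    ls /path/to/movies/*.bax.h5 > input.fofn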

    And since you mentioned tofu_wrap.py, I assume you are using the GitHub version of Iso-Seq and have successfully installed the GitHub ToFU using the instructions here? (https://github.com/PacificBioscience...ranscript-tofu)

    Assuming you have all the installations (including tofu_wrap.py) down, I recommend the following:

    (1) generate the CCS (ReadsOfInsert) sequences. You can do this using SMRTPortal (https://github.com/PacificBioscience...l-length-reads) or using the command-line ConsensusTools (https://github.com/PacificBioscience...e-command-line).

    (2) run Iso-Seq classify. You can either do this as part of SMRTPortal, which is already covered by step (1), or do it from the command line (https://github.com/PacificBioscience...ds#commandline).

    (3) run tofu_wrap.py. This will automatically split the size bins, run them separately, and combine them at the end.
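
    To make the three steps concrete, here is a minimal command-line sketch. The --parameters directory, thread count, and file names are placeholders, and the ConsensusTools flags follow the wiki linked above, so confirm them against your SMRT Analysis version:

    Code:
    # (1) CCS / ReadsOfInsert over all movies listed in input.fofn
    ConsensusTools.sh CircularConsensus --minFullPasses 0 --minPredictedAccuracy 75 \
        --parameters /opt/smrtanalysis/current/analysis/etc/algorithm_parameters/2014-09 \
        --numThreads 16 --fofn input.fofn -o roi_output

    # (2) Iso-Seq classify on the pooled ReadsOfInsert FASTA
    pbtranscript.py classify roi_output/reads_of_insert.fasta isoseq_draft.fasta \
        --flnc isoseq_flnc.fasta --nfl isoseq_nfl.fasta

    # (3) cluster + polish; tofu_wrap.py handles the size binning automatically
    tofu_wrap.py --nfl_fa isoseq_nfl.fasta --ccs_fofn reads_of_insert.fofn \
        --bas_fofn input.fofn -d clusterOut --quiver \
        isoseq_flnc.fasta final.consensus.fa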


    If you do NOT have ToFU installed and are going to run SMRTPortal (web interface) for everything, then the only difference I recommend is splitting the sizes first, because SMRTPortal's version of Iso-Seq cluster does not automatically split the sizes for you; you have to do it manually (a sketch follows below). This also means you have to manually combine the output later.
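
    For the manual split, a rough sketch (mine, not an official tool) that bins a FASTA by sequence length; it assumes one record per two lines (header + sequence), so adjust both the assumption and the boundaries to your data:

    Code:
    awk '/^>/ {h=$0; next}
         {len=length($0);
          bin=(len<3000) ? "0to3kb" : (len<5000) ? "3to5kb" : "5kbplus";
          print h "\n" $0 >> ("isoseq_flnc." bin ".fasta")}' isoseq_flnc.fasta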

    So to summarize, you have two ways:

    # ToFU version:
    combine all cells --> ConsensusTools --> classify --> tofu_wrap

    # SMRTPortal (web interface) version:
    split the sizes --> Iso-Seq classify + cluster --> manually combine output



    • #3
      Thanks a lot, Magdoll. Your answer is very clear and helpful. Now I am using the SMRTPortal version of the pipeline as you described. I will try the ToFU version later because it's more straightforward. BTW, is there any benefit to the results from automatically splitting the size bins with the latest version? And I have one more simple question: how about mapping the clustered low-quality reads to the reference to identify some non-FL isoforms?

      Thanks again.



      • #4
        (1) Automatically splitting the sizes has the advantage of speeding things up. In general, combining sizes doesn't result in better performance, because the Iso-Seq cluster algorithm only clusters full-length transcripts that are the same isoform of the same length.

        tofu_wrap.py also includes several extra post-processing bells and whistles, like automatically combining the split sizes, mapping back to the genome (if there is a genome), removing badly aligned transcripts, filtering, etc.

        But in terms of the core cluster algorithm, there is no difference. tofu_wrap.py is simply a "wrapper" that combines a lot of the simple processing that people may be doing manually or writing their own scripts for.


        (2) The outputs from Iso-Seq, whether high-quality (HQ) or low-quality (LQ), are all by definition "full-length". The reason we do not recommend using LQ is that the consensus accuracy for LQ is significantly worse, likely because they are transcript junk, artifacts, or bad sequencing reads.

        To get more isoforms, I would try sequencing more or making more libraries. Since it looks like you are just using the public MCF-7 dataset, I think there is more than enough data to play with!

        Also, in case you are not already aware, the public MCF-7 dataset contains both the raw data *and* the polished data from running the Iso-Seq pipeline. If you are using the dataset to learn the ropes, you can play with the raw data. If you are interested in the biology or downstream analysis, I recommend starting directly from the polished output.

        The polished output files have the prefix "IsoSeq_MCF7_2015edition_polished.XXX" in http://datasets.pacb.com.s3.amazonaw...tome/list.html



        • #5
          Magdoll's advice is great, thanks a lot!
          By the way, I have skimmed through the instructions for tofu_wrap.py, and it seems that I can use the parameter --bin_size_kb or --bin_manual to separate my data. The default is to separate the FASTA at every 1 kb, so if my sequence lengths range from 0-10 kb, it will be separated into 10 bins. Do more bins make the program run faster? Is there any advice on how to split the data?
          Thanks a lot.



          • #6
            I ran my project the ToFU way [combine all cells --> ConsensusTools --> classify --> tofu_wrap].
            But when I run the tofu_wrap.py program using the following command:
            Code:
            tofu_wrap.py --nfl_fa /share/nas3/tengh/SMRT/test_MCF7/Result/Classify/isoseq_nfl.fasta \
                --ccs_fofn /share/nas3/tengh/SMRT/test_MCF7/Result/ReadsOfInsert/reads_of_insert.fofn \
                --bas_fofn /share/nas3/tengh/SMRT/test_MCF7/input.fofn \
                -d /share/nas3/tengh/SMRT/test_MCF7/Result/cluster/cluster \
                --quiver --gmap_db /share/nas3/tengh/database/gmapdb --gmap_name hg19_gmap \
                --blasr_nproc 24 --quiver_nproc 8 --output_seqid_prefix tissue1 \
                /share/nas3/tengh/SMRT/test_MCF7/Result/Classify/isoseq_flnc.fasta \
                /share/nas3/tengh/SMRT/test_MCF7/Result/cluster/final.consensus.fa

            the following error occurs:
            split input /share/nas3/tengh/SMRT/test_MCF7/Result/Classify/isoseq_flnc.fasta into 18 bins
            Making fasta_fofn now
            fasta_fofn /share/nas3/tengh/SMRT/test_MCF7/Result/cluster/cluster/fasta_fofn_files/input.fasta.fofn
            nfl_dir /share/nas3/tengh/SMRT/test_MCF7/Result/cluster/cluster/fasta_fofn_files
            running ICE/Quiver on /share/nas3/tengh/SMRT/test_MCF7/Result/cluster/cluster/0to1kb_part0
            daligner: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by daligner)
            Traceback (most recent call last):
            File "/share/nas1/tengh/software/SMRT/python_virtualenv/bin/tofu_wrap.py", line 5, in <module>
            pkg_resources.run_script('pbtools.pbtranscript==2.2.3', 'tofu_wrap.py')
            File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pkg_resources.py", line 534, in run_script
            self.require(requires)[0].run_script(script_name, ns)
            File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pkg_resources.py", line 1434, in run_script
            execfile(script_filename, namespace, namespace)
            File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/EGG-INFO/scripts/tofu_wrap.py", line 369, in <module>
            tofu_wrap_main()
            File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/EGG-INFO/scripts/tofu_wrap.py", line 347, in tofu_wrap_main
            obj.run()
            File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/pbtools/pbtranscript/Cluster.py", line 271, in run
            use_ccs_qv=self.ice_opts.use_finer_qv)
            File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/pbtools/pbtranscript/ice/IceIterative.py", line 107, in __init__
            sanity_check_daligner(self.script_dir)
            File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/pbtools/pbtranscript/ice/IceUtils.py", line 57, in sanity_check_daligner
            runner.runHPC(min_match_len=300, output_dir=testDir, sensitive_mode=False)
            File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/pbtools/pbtranscript/icedalign/IceDalignUtils.py", line 222, in runHPC
            local_job_runner(cmds_daligner, num_processes=max(1, min(self.cpus/4, 4))) # max 4 at a time to avoid running out of mem..
            File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/pbtools/pbtranscript/icedalign/IceDalignUtils.py", line 243, in local_job_runner
            rets = pool.map(run_cmd_in_shell, cmds_list)
            File "/share/nas2/genome/biosoft/smrtanalysis/smrtanalysis/current/redist/python2.7/lib/python2.7/multiprocessing/pool.py", line 227, in map
            return self.map_async(func, iterable, chunksize).get()
            File "/share/nas2/genome/biosoft/smrtanalysis/smrtanalysis/current/redist/python2.7/lib/python2.7/multiprocessing/pool.py", line 528, in get
            raise self._value
            subprocess.CalledProcessError: Command 'timeout 600 daligner -h35 -k16 -e.80 -l300 -s100 -t10 /share/nas3/tengh/SMRT/test_MCF7/Result/cluster/cluster/0to1kb_part0/scripts/daligner_test_dir/gcon_in.dazz.fasta.1 /share/nas3/tengh/SMRT/test_MCF7/Result/cluster/cluster/0to1kb_part0/scripts/daligner_test_dir/gcon_in.dazz.fasta.1' returned non-zero exit status 1
            I ran the command on our SGE cluster; is the error related to my SGE setup?
            What should I do about it?



            • #7
              daligner: /lib64/libc.so.6: version 'GLIBC_2.14' not found (required by daligner)
              It seems that the node running daligner has shared libraries incompatible with those on the node where it was compiled (probably not all of your nodes run the same OS version).

              The easiest solution, in my opinion, would be to re-compile daligner with static linking.
              You can do so as follows:

              Code:
              cd pbtranscript-tofu/external_daligner/DALIGNER-d4aa4871122b35ac92e2cc13d9b1b1e9c5b5dc5c-ICEmod
              # edit the 1st line of Makefile from
              CFLAGS = -O3 -Wall -Wextra -fno-strict-aliasing
              # to
              CFLAGS = -O3 -Wall -Wextra -fno-strict-aliasing -static
              # then
              make clean
              make all
              cp HPCdaligner HPCmapper LA4Ice LAcat LAcheck LAmerge \
              LAshow LAsort LAsplit daligner $VENV_TOFU/bin
              You should probably do the same for DAZZ_DB.
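
              (A quick check to add, not from the original post: a statically linked binary should report "not a dynamic executable" under ldd.)

              Code:
              ldd $VENV_TOFU/bin/daligner   # expect: "not a dynamic executable"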



              • #8
                Hi,

                Yes, the more bins, the smaller each individual bin will be, and theoretically the faster it will run. However, it also means sequences from different bins won't be clustered together. The default 1 kb works decently in practice, but if you don't have a large dataset, stretching it to 2 kb or so will also be fine.
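
                To make that concrete, a sketch of the two binning flags (values illustrative; the other required arguments are elided as "...", and the exact --bin_manual syntax should be confirmed with tofu_wrap.py --help):

                Code:
                # widen the uniform bins from the default 1 kb to 2 kb
                tofu_wrap.py --bin_size_kb 2 ... isoseq_flnc.fasta final.consensus.fa
                # or choose the boundaries by hand, e.g. 0-2, 2-4, 4-6, 6-10 kb
                tofu_wrap.py --bin_manual "(0,2,4,6,10)" ... isoseq_flnc.fasta final.consensus.fa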



                • #9
                  @bowhan
                  Thanks a lot!
                  It works well now!



                  • #10
                    Now I can run daligner, but another problem has come up. When I run tofu_wrap.py, the following error occurs:
                    2015-10-28 22:00:13,935 INFO [main] Multi-threaded. Input: /share/nas3/tengh/SMRT/test_MCF7/classify_seperate/test/Result/cluster/1to2kb_part0/tmp/0/c471/g_consensus.saln, Threads: 4
                    running ICE/Quiver on /share/nas3/tengh/SMRT/test_MCF7/classify_seperate/test/Result/cluster/2to3kb_part0
                    running ICE/Quiver on /share/nas3/tengh/SMRT/test_MCF7/classify_seperate/test/Result/cluster/3to4kb_part0
                    running ICE/Quiver on /share/nas3/tengh/SMRT/test_MCF7/classify_seperate/test/Result/cluster/4to5kb_part0
                    *** glibc detected *** daligner: free(): invalid next size (fast): 0x00000000021b6ae0 ***
                    ======= Backtrace: =========
                    [0x443172]
                    [0x445bdf]
                    [0x405685]
                    [0x40191c]
                    [0x4324eb]
                    [0x400429]
                    ======= Memory map: ========
                    00400000-004e8000 r-xp 00000000 00:1c 471076577 /share/nas1/tengh/software/SMRT/python_virtualenv/bin/daligner
                    006e7000-006e9000 rw-p 000e7000 00:1c 471076577 /share/nas1/tengh/software/SMRT/python_virtualenv/bin/daligner
                    006e9000-006f3000 rw-p 00000000 00:00 0
                    021b4000-0221e000 rw-p 00000000 00:00 0 [heap]
                    7fb18ba57000-7fb18ba58000 ---p 00000000 00:00 0
                    7fb18ba58000-7fb18c458000 rw-p 00000000 00:00 0
                    7fb18ce59000-7fb18ce5a000 ---p 00000000 00:00 0
                    7fb18ce5a000-7fb18d85a000 rw-p 00000000 00:00 0
                    7fb18d85a000-7fb18d85b000 ---p 00000000 00:00 0
                    7fb18d85b000-7fb18e25b000 rw-p 00000000 00:00 0
                    7fb18ec5b000-7fb18ec7e000 rw-p 00000000 00:00 0
                    7fb18eca0000-7fb18ecc1000 rw-p 00000000 00:00 0
                    7fff6e73f000-7fff6e7b3000 rw-p 00000000 00:00 0 [stack]
                    7fff6e7ca000-7fff6e7cb000 r-xp 00000000 00:00 0 [vdso]
                    ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
                    *** glibc detected *** daligner: free(): invalid next size (fast): 0x00000000006f8af0 ***
                    ======= Backtrace: =========
                    [0x443172]
                    [0x445bdf]
                    [0x405685]
                    [0x40191c]
                    [0x4324eb]
                    [0x400429]
                    ======= Memory map: ========
                    00400000-004e8000 r-xp 00000000 00:1c 471076577 /share/nas1/tengh/software/SMRT/python_virtualenv/bin/daligner
                    006e7000-006e9000 rw-p 000e7000 00:1c 471076577 /share/nas1/tengh/software/SMRT/python_virtualenv/bin/daligner
                    006e9000-006f3000 rw-p 00000000 00:00 0
                    006f6000-00760000 rw-p 00000000 00:00 0 [heap]
                    7f014aeb0000-7f014aeb1000 ---p 00000000 00:00 0
                    7f014aeb1000-7f014b8b1000 rw-p 00000000 00:00 0
                    7f014c2b2000-7f014c2b3000 ---p 00000000 00:00 0
                    7f014c2b3000-7f014ccb3000 rw-p 00000000 00:00 0
                    7f014ccb3000-7f014ccb4000 ---p 00000000 00:00 0
                    7f014ccb4000-7f014d6b4000 rw-p 00000000 00:00 0
                    7f014e0b4000-7f014e0d7000 rw-p 00000000 00:00 0
                    7f014e0f9000-7f014e11a000 rw-p 00000000 00:00 0
                    7fff0f73f000-7fff0f7b2000 rw-p 00000000 00:00 0 [stack]
                    7fff0f7b3000-7fff0f7b4000 r-xp 00000000 00:00 0 [vdso]
                    ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
                    Traceback (most recent call last):
                    File "/share/nas1/tengh/software/SMRT/python_virtualenv/bin/tofu_wrap.py", line 5, in <module>
                    pkg_resources.run_script('pbtools.pbtranscript==2.2.3', 'tofu_wrap.py')
                    File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pkg_resources.py", line 534, in run_script
                    self.require(requires)[0].run_script(script_name, ns)
                    File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pkg_resources.py", line 1434, in run_script
                    execfile(script_filename, namespace, namespace)
                    File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/EGG-INFO/scripts/tofu_wrap.py", line 369, in <module>
                    tofu_wrap_main()
                    File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/EGG-INFO/scripts/tofu_wrap.py", line 347, in tofu_wrap_main
                    obj.run()
                    File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/pbtools/pbtranscript/Cluster.py", line 273, in run
                    self.icec.run()
                    File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/pbtools/pbtranscript/ice/IceIterative.py", line 1550, in run
                    use_blasr=False)
                    File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/pbtools/pbtranscript/ice/IceIterative.py", line 1379, in run_post_ICE_merging
                    for r in iters:
                    File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/pbtools/pbtranscript/ice/IceIterative.py", line 1407, in find_mergeable_consensus
                    las_filenames, las_out_filenames = runner.runHPC(min_match_len=self.minLength, output_dir=output_dir, sensitive_mode=self.daligner_sensitive_mode)
                    File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/pbtools/pbtranscript/icedalign/IceDalignUtils.py", line 222, in runHPC
                    local_job_runner(cmds_daligner, num_processes=max(1, min(self.cpus/4, 4))) # max 4 at a time to avoid running out of mem..
                    File "/share/nas1/tengh/software/SMRT/python_virtualenv/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/pbtools/pbtranscript/icedalign/IceDalignUtils.py", line 243, in local_job_runner
                    rets = pool.map(run_cmd_in_shell, cmds_list)
                    File "/share/nas2/genome/biosoft/smrtanalysis/smrtanalysis/current/redist/python2.7/lib/python2.7/multiprocessing/pool.py", line 227, in map
                    return self.map_async(func, iterable, chunksize).get()
                    File "/share/nas2/genome/biosoft/smrtanalysis/smrtanalysis/current/redist/python2.7/lib/python2.7/multiprocessing/pool.py", line 528, in get
                    raise self._value
                    subprocess.CalledProcessError: Command 'timeout 600 daligner -h35 -k16 -e.80 -l2500 -s100 -t10 /share/nas3/tengh/SMRT/test_MCF7/classify_seperate/test/Result/cluster/4to5kb_part0/output/tmp.consensus.dazz.fasta.1 /share/nas3/tengh/SMRT/test_MCF7/classify_seperate/test/Result/cluster/4to5kb_part0/output/tmp.consensus.dazz.fasta.1' returned non-zero exit status 134.


                    How can I fix the problem?
                    Thanks for any advice O(∩_∩)O



                    • #11
                      Hi all, I'm new to bioinformatics. Recently I came across a problem when running tofu_wrap.py with the following command:

                      Code:
                      tofu_wrap.py --nfl_fa isoseq_nfl.fasta --ccs_fofn reads_of_insert.fofn \
                          --bas_fofn input.fofn -d clusterOut --quiver --use_sge --max_sge_jobs 120 \
                          --gmap_db /zs32/data-analysis/liucy_group/llhuang/Reflib/gmapdb \
                          --gmap_name gmapdb_h19 --output_seqid_prefix tissue1 \
                          isoseq_flnc.fasta final.consensus.fa


                      split input isoseq_flnc.fasta into 16 bins
                      Making fasta_fofn now
                      fasta_fofn /zs32/data-analysis/liucy_group/SMRTanalysis_llhuang/clusterOut/fasta_fofn_files/input.fasta.fofn
                      nfl_dir /zs32/data-analysis/liucy_group/SMRTanalysis_llhuang/clusterOut/fasta_fofn_files
                      running ICE/Quiver on /zs32/data-analysis/liucy_group/SMRTanalysis_llhuang/clusterOut/0to1kb_part0
                      Traceback (most recent call last):
                      File "/opt/VENV_TOFU/bin/tofu_wrap.py", line 4, in <module>
                      __import__('pkg_resources').run_script('pbtools.pbtranscript==2.2.3', 'tofu_wrap.py')
                      File "/opt/VENV_TOFU/lib/python2.7/site-packages/pkg_resources/__init__.py", line 719, in run_script
                      self.require(requires)[0].run_script(script_name, ns)
                      File "/opt/VENV_TOFU/lib/python2.7/site-packages/pkg_resources/__init__.py", line 1504, in run_script
                      exec(code, namespace, namespace)
                      File "/opt/VENV_TOFU/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/EGG-INFO/scripts/tofu_wrap.py", line 369, in <module>
                      tofu_wrap_main()
                      File "/opt/VENV_TOFU/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/EGG-INFO/scripts/tofu_wrap.py", line 347, in tofu_wrap_main
                      obj.run()
                      File "/opt/VENV_TOFU/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/pbtools/pbtranscript/Cluster.py", line 271, in run
                      use_ccs_qv=self.ice_opts.use_finer_qv)
                      File "/opt/VENV_TOFU/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/pbtools/pbtranscript/ice/IceIterative.py", line 111, in __init__
                      sanity_check_sge(sge_opts, self.script_dir)
                      File "/opt/VENV_TOFU/lib/python2.7/site-packages/pbtools.pbtranscript-2.2.3-py2.7-linux-x86_64.egg/pbtools/pbtranscript/ice/IceUtils.py", line 112, in sanity_check_sge
                      if not filecmp.cmp(consensusFa, GCON_OUT_FA):
                      File "/opt/smrtanalysis/current/redist/python2.7/lib/python2.7/filecmp.py", line 42, in cmp
                      s1 = _sig(os.stat(f1))
                      OSError: [Errno 2] No such file or directory: '/zs32/data-analysis/liucy_group/SMRTanalysis_llhuang/clusterOut/0to1kb_part0/scripts/gcon_test_dir/g_consensus.fasta'



                      I don't know why my clusterOut directory has no "g_consensus.fasta" file, only a "gcon_in.fa" file. Are they the same thing? Any advice would be appreciated!



                      • #12
                        "g_consensus.fasta" is a output generated by a test script. Its absence suggests the failure of the test job, mostly due to SGE settings.

                        Were you running ToFU under `smrtshell`? If so, it is likely that your environment settings for SGE have been wiped out. To test this, you can simply run `qstat`, or submit a dummy job with `qsub`, under `smrtshell` to see if SGE is available. If not, you can probably fix it by setting the SGE_ROOT environment variable.
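
                        A minimal sketch of that check (the SGE_ROOT value is an assumption; substitute your cluster's actual SGE installation directory):

                        Code:
                        smrtshell                      # enter the SMRT Analysis shell
                        qstat                          # should list queues/jobs if SGE is visible
                        echo "$SGE_ROOT"               # often empty inside smrtshell
                        export SGE_ROOT=/opt/sge       # assumed location; adjust for your site
                        echo "sleep 1" | qsub -cwd     # dummy job to confirm submission works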
                        Please find more discussion on this topic here:



                        • #13
                          @bowhan Thank you so much! I get it.
