  • abeggs
    replied
    I have solved it!

    I recompiled from source instead of using the binaries and it worked fine.

    We have a Scientific Linux HPC, and it seems something about that environment was causing problems when running the precompiled binaries.
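    For reference, a minimal sketch of the build-from-source route that fixed this (the version matches the log below; the exact SourceForge tarball name is an assumption):

        # download the -source package from http://subread.sourceforge.net/, not the precompiled binaries
        tar xzf subread-1.5.0-p1-source.tar.gz
        cd subread-1.5.0-p1-source/src
        make -f Makefile.Linux            # build natively on the Scientific Linux node
        export PATH=$(pwd)/../bin:$PATH   # subjunc and the other binaries land in ../bin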



  • shi
    replied
    Thanks for providing the info. Could you also send us the FASTQ files so that we can reproduce the problem and find out what went wrong?
    Best,
    Wei



  • abeggs
    replied
    Hi,

    The command I use for subjunc is:
    subjunc -i $TMPDIR/hg19 -r $TMPDIR/$2_R1.fastq.gz -R $TMPDIR/$2_R2.fastq.gz -o $1/$2.bam -T 8 --gzFASTQinput --allJunctions
    And the output is:

    [Subread ASCII-art banner, garbled in the original post]
    v1.5.0-p1

    //============================= subjunc setting ==============================\\
    || ||
    || Function : Read alignment + Junction/Fusion detection (RNA-Seq) ||
    || Threads : 8 ||
    || Input file 1 : /scratch/beggsa_909907.bb2torque.bb2.cluster/s018 ... ||
    || Input file 2 : /scratch/beggsa_909907.bb2torque.bb2.cluster/s018 ... ||
    || Output file : /gpfs/projects/s-beggsa01/P141115-N-DW-28-2673271 ... ||
    || Index name : /scratch/beggsa_909907.bb2torque.bb2.cluster/hg19 ||
    || Phred offset : 33 ||
    || ||
    || All subreads : 14 ||
    || Min read1 votes : 1 ||
    || Min read2 votes : 1 ||
    || Max fragment size : 600 ||
    || Min fragment size : 50 ||
    || ||
    || Allowed mismatch : 3 bases ||
    || Max indels : 5 ||
    || # of Best mapping : 1 ||
    || Unique mapping : no ||
    || Hamming distance : no ||
    || Quality scores : no ||
    || ||
    \\===================== http://subread.sourceforge.net/ ======================//

    //====================== Running (31-Mar-2016 15:30:00) ======================\\
    || ||
    || The input file contains base space reads. ||
    || The range of Phred scores observed in the data is [2,36] ||
    || Load the 1-th index block... ||
    || Map fragments... ||
    || 0% completed, 0.3 mins elapsed, rate=3.7k fragments per second ||
    || Finish the 3,495,253 fragments... ||
    || 5% completed, 11 mins elapsed, rate=3.7k fragments per second ||
    || 5% completed, 11 mins elapsed, rate=3.8k fragments per second ||
    || 6% completed, 11 mins elapsed, rate=3.9k fragments per second ||
    || 6% completed, 12 mins elapsed, rate=4.0k fragments per second ||
    || 6% completed, 12 mins elapsed, rate=4.1k fragments per second ||
    || 7% completed, 12 mins elapsed, rate=4.2k fragments per second ||
    || 7% completed, 13 mins elapsed, rate=4.3k fragments per second ||
    || Map fragments... ||
    || 7% completed, 13 mins elapsed, rate=4.4k fragments per second ||
    || Finish the 3,495,253 fragments... ||
    || 13% completed, 24 mins elapsed, rate=4.1k fragments per second ||
    || 13% completed, 24 mins elapsed, rate=4.1k fragments per second ||
    || 13% completed, 25 mins elapsed, rate=4.1k fragments per second ||
    || 14% completed, 25 mins elapsed, rate=4.2k fragments per second ||
    || 14% completed, 26 mins elapsed, rate=4.2k fragments per second ||
    || 14% completed, 26 mins elapsed, rate=4.2k fragments per second ||
    || 15% completed, 26 mins elapsed, rate=4.3k fragments per second ||
    || Map fragments... ||
    || 15% completed, 27 mins elapsed, rate=4.3k fragments per second ||
    || Finish the 3,495,253 fragments... ||
    || 20% completed, 38 mins elapsed, rate=4.1k fragments per second ||
    || 21% completed, 38 mins elapsed, rate=4.1k fragments per second ||
    || 21% completed, 39 mins elapsed, rate=4.2k fragments per second ||
    || 21% completed, 39 mins elapsed, rate=4.2k fragments per second ||
    || 22% completed, 39 mins elapsed, rate=4.2k fragments per second ||
    || 22% completed, 40 mins elapsed, rate=4.2k fragments per second ||
    || 22% completed, 40 mins elapsed, rate=4.2k fragments per second ||
    || 23% completed, 41 mins elapsed, rate=4.2k fragments per second ||
    || Map fragments... ||
    || 23% completed, 41 mins elapsed, rate=4.2k fragments per second ||
    || Finish the 3,495,253 fragments... ||
    ./SubReadRNAPipeline-highmem: line 43: 19380 Segmentation fault subjunc -i $TMPDIR/hg19 -r $TMPDIR/$2_R1.fastq.gz -R $TMPDIR/$2_R2.fastq.gz -o $1/$2.bam -T 8 --gzFASTQinput --allJunctions
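    For context, $1 and $2 are positional arguments to the wrapper script named in the segfault message: judging by how they are used, $1 is the output directory and $2 the sample prefix. A hypothetical reconstruction of the surrounding script (the staging steps and $INDEXDIR are assumptions):

        #!/bin/bash
        # SubReadRNAPipeline-highmem <output_dir> <sample_prefix>
        cp $INDEXDIR/hg19.* $TMPDIR/                      # stage the Subread index to node-local scratch
        cp $1/$2_R1.fastq.gz $1/$2_R2.fastq.gz $TMPDIR/   # stage the paired FASTQ files
        subjunc -i $TMPDIR/hg19 -r $TMPDIR/$2_R1.fastq.gz -R $TMPDIR/$2_R2.fastq.gz \
            -o $1/$2.bam -T 8 --gzFASTQinput --allJunctions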



  • shi
    replied
    Originally posted by abeggs:
    Hi

    I am running Subread on Scientific Linux on an HPC cluster with a GPFS file system. With big FASTQ files (>35M reads), Subread falls over halfway through with a segmentation fault.

    It seems to be memory related, as smaller files work okay. Increasing the memory available to the process to 31 GB makes no difference.

    Has anyone seen this before?
    Could you please provide the screen output and also your commands? Subread has no problem processing more than 35 million reads.



  • GenoMax
    replied
    Dr. Shi (the author of Subread) participates here, so we may hear something enlightening from him. But I would have thought that once the genome index is read into memory, the memory requirement should be more or less satisfied, unless Subread works differently from other aligners (I don't use Subread).



  • abeggs
    replied
    Unfortunately not: the disk space assigned to the node is 10 TB, the walltime is 5 days (when the job typically takes 2-3 hours), and the temp space is "unlimited", although in practice it is about 4 TB.



  • GenoMax
    replied
    Memory does not sound like the culprit here. Are you running into some other limit, say storage (quota), tmp space, or the time assigned to the job?
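    A quick sketch of how one could check those limits from inside a running job (generic Linux commands, independent of the scheduler):

        ulimit -a        # per-process limits: stack size, max memory, core size, open files
        df -h $TMPDIR    # free space in the node-local temp area
        quota -s         # storage quotas on shared filesystems, where enforced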



  • abeggs
    started a topic Subread segmentation faults on Scientific Linux

    Subread segmentation faults on Scientific Linux

    Hi

    I am running Subread on Scientific Linux on an HPC cluster with a GPFS file system. With big FASTQ files (>35M reads), Subread falls over halfway through with a segmentation fault.

    It seems to be memory related, as smaller files work okay. Increasing the memory available to the process to 31 GB makes no difference.

    Has anyone seen this before?
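    In case it helps others hitting this, a minimal sketch of how one might get more information out of a segfault like this (generic debugging steps, assuming gdb is available on the cluster):

        ulimit -c unlimited          # allow a core dump to be written
        subjunc ...                  # rerun the failing command with its usual arguments
        gdb $(which subjunc) core*   # load the dump, then type 'bt' for a backtrace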
