I'm currently trying to run Pindel (version 0.2.4w) on a couple of BAM files generated by TopHat2 from an RNA-Seq experiment. One BAM has ~30 million reads, the other ~200 million. Both seem to stall at the same step (I killed the jobs after two days). Here is the structure of the command I used:
/path/to/./pindel -f /path/to/hg19.fa -i /path/to/config.txt -c ALL -T 8 -x 3 -o /path/to/output
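For reference, the config file I pass with -i follows Pindel's three-column format (BAM path, expected insert size, sample label); the values below are placeholders, not my actual paths:

```text
/path/to/sample1.bam	300	sample1
/path/to/sample2.bam	300	sample2
```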
Here is the tail of my error log:
adding 1 1389428 1390055 - 627 1 1454030 1454671 + 641 to breakdancer events. 64602 Support: 5
adding 1 1389428 1390055 - 627 1 1415976 1416617 + 641 to breakdancer events. 26548 Support: 5
modify and summarize interchr RP.
Reads_RP.size(): 1003557
sorting read-pair
sorting InterChr read-pair finished.
Again, jobs from both BAMs stall at the same spot. Using qstat -j, I can see that each job still appears to be running, or at least it is consuming CPU time and memory. Here are my qsub options related to memory and processing on an SGE cluster:
-pe smp 8
-l h_vmem=4g,virtual_free=3g
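For context, the full submission looks roughly like the sketch below (script contents and paths are placeholders, not my exact script):

```shell
#!/bin/bash
# SGE directives matching the options above
#$ -pe smp 8                          # 8 slots in the smp parallel environment
#$ -l h_vmem=4g,virtual_free=3g       # per-slot memory limits on most SGE setups
#$ -cwd
#$ -j y

# Run Pindel with 8 threads across all chromosomes
/path/to/pindel -f /path/to/hg19.fa -i /path/to/config.txt -c ALL -T 8 -x 3 -o /path/to/output
```

Note that on many SGE configurations h_vmem is enforced per slot, so 8 slots at 4g would give the job 32g in total, but I understand this depends on how the cluster defines the complex.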
I was able to run the demo without any issues. What am I doing wrong?