Hi
I am running Subread on Scientific Linux on an HPC cluster with a GPFS file system. With big FASTQ files (>35M reads), Subread crashes about halfway through with a segmentation fault.
It seems to be memory-related, since smaller files complete fine. However, increasing the memory available to the process to 31 GB makes no difference.
Has anyone seen this before?
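In case it helps anyone debugging the same thing: one way to narrow this down is to bisect the input, splitting the FASTQ into chunks of increasing size to find the read count at which the crash first appears. This is a generic sketch assuming GNU coreutils `split` is available on the cluster (the filenames `demo.fastq` and `chunk_` are placeholders, not Subread options):

```shell
# Build a tiny 2-read FASTQ as a stand-in for the real input.
printf '@r1\nACGT\n+\nIIII\n@r2\nTTTT\n+\nIIII\n' > demo.fastq

# Split into chunks; -l must be a multiple of 4 so no
# 4-line FASTQ record is cut in half across chunk boundaries.
split -l 8 -d demo.fastq chunk_

# Each chunk holds only complete records and can be aligned
# separately to see which read count triggers the segfault.
wc -l chunk_00
```

Running Subread on each chunk in turn would at least show whether the crash tracks input size or a specific malformed record partway through the file.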