Hi,
Perhaps somebody can solve this little mystery for me.
Running bowtie --chunkmbs 512 worked without any errors/warnings.
As the mapping statistics were not what I expected, I tried a few things, including increasing the value of --chunkmbs to 4096. However, on the same dataset, with all other parameters untouched, bowtie now reports "Exhausted best-first chunk memory for read..." millions of times.
At this point, the server was using only 10% of its overall memory.
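For reference, the two invocations looked roughly like this (the index name and read file are just placeholders here, not my actual paths; all other options were identical between the two runs):

    # first run: completed with no warnings
    bowtie --chunkmbs 512  my_index reads.fastq hits_512.txt

    # second run: same data, only --chunkmbs changed, floods stderr with
    # "Exhausted best-first chunk memory for read..."
    bowtie --chunkmbs 4096 my_index reads.fastq hits_4096.txt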
Could somebody explain this to me please?
Chris