All,
I am having difficulty deciphering my results.
Briefly, when I use -n (FLOAT) during bwa aln, my expectation is that this is analogous to the number of mismatches per seed on a sliding scale, rather than a hard integer number of mismatches.
I am starting to suspect that this is incorrect.
When I run a range of -n (FLOAT) values from 0.02 to 0.1, I get more reads mapping to my reference with 0.02 than with 0.1.
Can someone explain exactly what -n FLOAT is doing?
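For context, the bwa manual describes -n FLOAT as "the fraction of missing alignments given 2% uniform base error rate," and bwa's source (`bwa_cal_maxdiff`) converts that fraction into an integer edit-distance cap for the whole read. Here is a hedged Python sketch of that conversion (the read length of 100 bp and the 2% error rate are illustrative assumptions, not values from this post):

```python
import math

def max_diff(read_len, err=0.02, thres=0.04):
    """Smallest k such that P(#errors > k) < thres, modeling sequencing
    errors as Poisson(read_len * err). This mirrors the calculation in
    bwa's bwa_cal_maxdiff(); bwa aln's default for -n FLOAT is 0.04."""
    lam = read_len * err
    term = math.exp(-lam)   # P(0 errors)
    cum = term
    for k in range(1, 1000):
        term *= lam / k     # Poisson term P(k errors)
        cum += term
        if 1.0 - cum < thres:
            return k
    return 2

# A smaller FLOAT demands that fewer alignments be missed, so bwa
# raises the mismatch cap -- which is why 0.02 maps MORE reads than 0.1:
print(max_diff(100, thres=0.02))  # → 5 mismatches allowed
print(max_diff(100, thres=0.1))   # → 4 mismatches allowed
```

If this model is right, the FLOAT is not a per-base mismatch rate: lowering it makes bwa more permissive about edit distance, not less, which would account for the result you describe.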
Thank you.