Hi all,
This may be a stupid question, but I am genuinely curious. Is there any reason, apart from reducing computation time, to process reads into clustered peaks and then annotate those peaks? Suppose you have a good machine and plenty of compute time. Why wouldn't you just annotate the primary reads and compute all the statistics on them directly, without losing potentially valuable information in the intermediate peak-calling step?
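To make the comparison concrete, here is a toy sketch of the two routes I mean (made-up intervals, no real aligner or peak caller; `max_gap` and the feature list are just illustrative):

```python
# Toy sketch contrasting the two routes: annotating each read directly
# vs. first merging reads into "peaks" and annotating those regions.

def annotate_reads(reads, features):
    """Per-read route: count how many reads overlap each annotated feature."""
    counts = {name: 0 for name, _, _ in features}
    for start, end in reads:
        for name, fstart, fend in features:
            if start < fend and end > fstart:  # half-open interval overlap
                counts[name] += 1
    return counts

def call_peaks(reads, max_gap=50):
    """Peak route: merge reads that lie within max_gap bp into clusters."""
    peaks = []
    for start, end in sorted(reads):
        if peaks and start - peaks[-1][1] <= max_gap:
            peaks[-1][1] = max(peaks[-1][1], end)  # extend current cluster
        else:
            peaks.append([start, end])            # start a new cluster
    return [tuple(p) for p in peaks]

reads = [(100, 136), (120, 156), (130, 166), (900, 936)]
features = [("geneA", 90, 200), ("geneB", 850, 1000)]

print(annotate_reads(reads, features))  # {'geneA': 3, 'geneB': 1}
print(call_peaks(reads))                # [(100, 166), (900, 936)]
```

The per-read route keeps every read's contribution to the statistics, whereas the peak route collapses them into regions before annotation, which is where I worry information gets lost.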
Thanks!