Hi all,
I am new to bioinformatics. I have ChIP-Seq data for 3 different proteins and 2 chromatin modification marks from two cancer cell lines. One library was prepared per sample and run on two different lanes (HiSeq V4), so I have two files per sample (one from each lane). The sequencing facility delivers aligned data as CRAM files. I converted the CRAMs to BAM and merged them using several different tools (Picard, samtools, bamtools) to check whether the choice of tool makes any difference; apparently it doesn't. After merging, each sample has at least 80 million reads.

Next, I filter with 'samtools view -b -f 2 -F 4 -q 1' to remove reads that are not properly paired and to select uniquely mapped reads. This filtering step removes roughly 10 million reads per sample. I then check the filtered reads with 'bamtools stats', which reports a high level of PCR duplication in almost all samples: the duplication rate ranges from 43% to over 80%. The input samples (genomic DNA with no IP) have very little duplication (below 1%).

I assume something has gone wrong, either during library preparation or at another step. Is this data too bad to work with? Can anyone suggest where the problem might be? If I deduplicate the data with Picard, I am left with as few as 3 million reads per sample. Can I call peaks on so few reads?
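For reference, this is roughly the pipeline I am running (a sketch; the file names are placeholders, and ref.fa stands for the reference FASTA the CRAMs were aligned against):

```shell
# Decode each lane's CRAM to BAM (CRAM decoding needs the original reference)
samtools view -b -T ref.fa sample_lane1.cram -o sample_lane1.bam
samtools view -b -T ref.fa sample_lane2.cram -o sample_lane2.bam

# Merge the two lanes into one BAM per sample
samtools merge sample_merged.bam sample_lane1.bam sample_lane2.bam

# Keep properly paired, mapped reads with MAPQ >= 1
samtools view -b -f 2 -F 4 -q 1 sample_merged.bam -o sample_filtered.bam

# Inspect duplication and mapping stats
bamtools stats -in sample_filtered.bam

# Mark and remove PCR duplicates with Picard
picard MarkDuplicates I=sample_filtered.bam O=sample_dedup.bam \
    M=sample_dup_metrics.txt REMOVE_DUPLICATES=true
```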
Any input on this will be highly appreciated!