  • Bueller_007
    replied
    Originally posted by Chipper
    Wouldn't removing all identical reads result in enrichment of reads with errors? Perhaps filtering on the first part and allowing some duplicates would work better.
    Probably true. That's why it's better to remove duplicates after alignment/assembly. Unfortunately, I'm feeding the end-product to CLC Genomics Workbench and they don't have duplicate removal yet. The dupes are messing up my SNP discovery pretty badly.

    I'd turn on a maximum coverage limit, but since it's a transcriptome, the coverage varies with expression level, so I'm hesitant to omit highly covered regions. I've tried exporting to BAM, removing dupes with Picard and importing back in, but the reimport didn't work for whatever reason.
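
    For anyone attempting the same round trip, the Picard step looks roughly like the sketch below (file names are placeholders, and the exact jar name and argument style depend on your Picard release, so check its documentation):

    # Sketch: drop PCR duplicates from an exported, coordinate-sorted BAM with
    # Picard MarkDuplicates. Assumes Java and Picard are installed; "picard.jar"
    # and the file names are placeholders (older releases ship MarkDuplicates.jar).
    import subprocess

    subprocess.check_call([
        "java", "-Xmx4g", "-jar", "picard.jar", "MarkDuplicates",
        "INPUT=exported.bam",
        "OUTPUT=dedup.bam",
        "METRICS_FILE=dup_metrics.txt",
        "REMOVE_DUPLICATES=true",   # remove duplicates rather than just flagging them
    ])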


  • Chipper
    replied
    Wouldn't removing all identical reads result in enrichment of reads with errors? Perhaps filtering on the first part and allowing some duplicates would work better.


  • Bueller_007
    replied
    Thanks. I didn't get email notifications that people had replied to my post, so I didn't find these until just now.

    For what it's worth, I believe that FASTX_collapser ( http://hannonlab.cshl.edu/fastx_toolkit/ ) can also do this, with the caveat that your .csfasta and _QV.qual have to be merged into a .fastq first (with the .csfasta double-encoded) if you also want to remove the duplicates from your _QV.qual file.
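
    In case it's useful, here is a rough sketch of that merge step in Python. It assumes the usual SOLiD layout (a leading primer base followed by colour calls 0-3, with one quality value per colour) and Sanger/Phred+33 output; whether you also drop the first colour is tool-dependent, and this is not necessarily what FASTX does internally. File names are placeholders.

    # Sketch: merge a .csfasta and its _QV.qual into a double-encoded FASTQ.
    # Double-encoding maps colour calls 0/1/2/3 to A/C/G/T so colour-space reads
    # can be pushed through base-space tools.

    COLOUR_TO_BASE = {"0": "A", "1": "C", "2": "G", "3": "T", ".": "N"}

    def records(path):
        """Yield (name, payload) pairs from a .csfasta or .qual file, skipping '#' comments."""
        name, chunks = None, []
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                if line.startswith(">"):
                    if name is not None:
                        yield name, " ".join(chunks)
                    name, chunks = line[1:], []
                else:
                    chunks.append(line)
        if name is not None:
            yield name, " ".join(chunks)

    with open("merged.fastq", "w") as out:
        for (name, colours), (qname, quals) in zip(records("reads.csfasta"),
                                                   records("reads_QV.qual")):
            assert name == qname, "csfasta and qual records out of sync"
            # Drop the leading primer base, then map each colour call to a base.
            seq = "".join(COLOUR_TO_BASE[c] for c in colours.replace(" ", "")[1:])
            # Numeric qualities (one per colour) -> Phred+33 characters; clamp negatives.
            qual = "".join(chr(max(0, int(q)) + 33) for q in quals.split())
            out.write("@%s\n%s\n+\n%s\n" % (name, seq, qual))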


  • drio
    replied
    Originally posted by nilshomer
    A few lines of your favorite programming language should be able to do it. Lexicographically sort by sequence and remove duplicates.
    Something like this: http://github.com/drio/dups.fasta.qual


  • nilshomer
    replied
    Originally posted by Bueller_007
    I don't think I need alignments, as I'm talking about identical ~reads~. Removing these duplicates can be performed by Corona prior to data output using the --noduplicates option. However, I can't find an equivalent for data that has already been outputted by the SOLiD system.

    There are multiple programs available for filtering out low-quality reads. That's not what I need.
    A few lines of your favorite programming language should be able to do it. Lexicographically sort by sequence and remove duplicates.
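
    A minimal sketch of that idea in Python, keeping the .csfasta and _QV.qual in step. It assumes one sequence/quality line per record (typical for short SOLiD reads), that both files are in the same order, and that the distinct sequences fit in memory; a hash set here stands in for the lexicographic sort, with the same end result. File names are placeholders.

    # Sketch: keep only the first read for each distinct colour-space sequence,
    # writing matched .csfasta and _QV.qual outputs.

    def two_line_records(path):
        """Yield (header, payload) pairs, assuming one payload line per record."""
        header = None
        with open(path) as fh:
            for line in fh:
                line = line.rstrip("\n")
                if not line or line.startswith("#"):
                    continue
                if line.startswith(">"):
                    header = line
                else:
                    yield header, line

    seen = set()
    with open("dedup.csfasta", "w") as out_csf, open("dedup_QV.qual", "w") as out_qual:
        for (h_seq, seq), (h_qual, qual) in zip(two_line_records("reads.csfasta"),
                                                two_line_records("reads_QV.qual")):
            if seq in seen:
                continue                  # exact duplicate read: skip it
            seen.add(seq)
            out_csf.write("%s\n%s\n" % (h_seq, seq))
            out_qual.write("%s\n%s\n" % (h_qual, qual))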


  • Bueller_007
    replied
    Originally posted by drio
    I don't think there is anything like that out there. You need alignments to detect duplicates.
    About the SOLiD instrument filtering, perhaps you are talking about dropping reads with low quality?
    I don't think I need alignments, as I'm talking about identical ~reads~. Removing these duplicates can be performed by Corona prior to data output using the --noduplicates option. However, I can't find an equivalent for data that has already been outputted by the SOLiD system.

    There are multiple programs available for filtering out low-quality reads. That's not what I need.


  • drio
    replied
    I don't think there is anything like that out there. You need alignments to detect duplicates.
    About the SOLiD instrument filtering, perhaps you are talking about dropping reads with low quality?


  • Removing duplicate reads from multigig .csfasta

    Hi all.

    I'm trying to do a de novo transcriptome assembly using ABI SOLiD data. I'm trying to use Velvet/Oases at the moment, and I've found that PCR duplicates seem to be a serious problem during the postprocessing step when the double-encoded contigs are converted back into colour-space reads prior to the final assembly. This step takes at least 72 hours, which is an order of magnitude greater than the time required by the Velvet/Oases assemblers themselves. The postprocessing output file just keeps swelling in size because there are so many PCR duplicates.

    So the question is: is there an efficient program out there I can use to remove duplicate reads from my .csfasta (and preferably the corresponding _QV.qual) file prior to assembly? I know there's an option to do this filtering on the SOLiD machine itself, but the person who did the sequencing didn't enable it.
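
    (As a rough measure of how bad the duplication is, a sketch like the following counts total versus distinct reads; it assumes one sequence line per .csfasta record, and the file name is a placeholder.)

    # Sketch: count total and distinct colour-space reads to gauge the level of
    # PCR duplication. Assumes the distinct sequences fit in memory.
    total, distinct = 0, set()
    with open("reads.csfasta") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith(("#", ">")):
                continue
            total += 1
            distinct.add(line)
    print("%d reads, %d distinct (%.1f%% duplicates)"
          % (total, len(distinct), 100.0 * (total - len(distinct)) / max(total, 1)))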

    Thanks.
