  • Bueller_007
    replied
    Originally posted by Chipper View Post
    Wouldn't removing all identical reads result in enrichment of reads with errors? Perhaps filtering on the first part and allowing some duplicates would work better.
    Probably true. That's why it's better to remove duplicates after alignment/assembly. Unfortunately, I'm feeding the end-product to CLC Genomics Workbench and they don't have duplicate removal yet. The dupes are messing up my SNP discovery pretty badly.

    I'd turn on a maximum coverage limit, but since it's a transcriptome, the coverage varies with expression level, so I'm hesitant to omit highly covered regions. I've tried exporting to BAM, removing dupes with Picard and importing back in, but the reimport didn't work for whatever reason.
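
    For reference, a rough sketch of that post-alignment route in Python, assuming the BAM has already been run through Picard MarkDuplicates (which sets the duplicate flag) and that pysam is available; file names here are placeholders:

    # Drop reads that Picard MarkDuplicates flagged as duplicates (SAM flag 0x400)
    # and write a clean BAM for re-import. File names are placeholders.
    import pysam

    with pysam.AlignmentFile("marked.bam", "rb") as bam_in, \
         pysam.AlignmentFile("dedup.bam", "wb", template=bam_in) as bam_out:
        for read in bam_in:
            if not read.is_duplicate:   # skip anything carrying the duplicate flag
                bam_out.write(read)

    (Picard can also drop the duplicates directly if MarkDuplicates is run with REMOVE_DUPLICATES=true.)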



  • Chipper
    replied
    Wouldn't removing all identical reads result in enrichment of reads with errors? Perhaps filtering on the first part and allowing some duplicates would work better.



  • Bueller_007
    replied
    Thanks. I didn't get email notifications that people had replied to my post, so I didn't find these until just now.

    For what it's worth, I believe that FASTX_collapser ( http://hannonlab.cshl.edu/fastx_toolkit/ ) can also do this, with the caveat that your .csfasta and _QV.qual have to be merged into a .fastq first (with the .csfasta double-encoded) if you also want to remove the duplicates from your _QV.qual file.
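
    For anyone going that route, here is a rough illustration of the double-encoding step (not FASTX_collapser itself): the colour calls 0/1/2/3 are rewritten as A/C/G/T so that base-space tools will accept the reads. Whether the leading primer base and the first colour should be dropped depends on the downstream tool, so treat this purely as a sketch:

    # Sketch of double-encoding a colour-space read such as "T320010212".
    # The primer base is stripped here; some pipelines also drop the first colour,
    # and missing calls ('.') would need separate handling.
    COLOUR_TO_BASE = str.maketrans("0123", "ACGT")

    def double_encode(cs_read: str) -> str:
        colours = cs_read[1:]                    # drop the leading primer base (T or G)
        return colours.translate(COLOUR_TO_BASE)

    print(double_encode("T320010212"))           # -> TGAACAGCG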



  • drio
    replied
    Originally posted by nilshomer View Post
    A few lines of your favorite programming language should be able to do it. Lexicographically sort by sequence and remove duplicates.
    Something like this: http://github.com/drio/dups.fasta.qual



  • nilshomer
    replied
    Originally posted by Bueller_007 View Post
    I don't think I need alignments, as I'm talking about identical ~reads~. Removing these duplicates can be performed by Corona prior to data output using the --noduplicates option. However, I can't find an equivalent for data that has already been outputted by the SOLiD system.

    There are multiple programs available for filtering out low-quality reads. That's not what I need.
    A few lines of your favorite programming language should be able to do it. Lexicographically sort by sequence and remove duplicates.
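
    As a rough illustration of that idea (not drio's linked script), the sketch below keeps the first occurrence of each colour-space sequence using a hash set in a single pass, which gives the same deduplication as sorting lexicographically and collapsing neighbours. File names are placeholders, and it assumes the .csfasta and _QV.qual use the same read IDs:

    # Keep the first occurrence of each colour-space sequence in a .csfasta.
    # For multi-gigabyte files, hashing each sequence keeps memory bounded.
    # File names are placeholders; the paired _QV.qual is filtered with kept_ids.
    import hashlib

    def records(path):
        """Yield (header, sequence) pairs from a FASTA-style file, skipping '#' comments."""
        header, seq = None, []
        with open(path) as fh:
            for line in fh:
                line = line.rstrip()
                if line.startswith("#"):
                    continue
                if line.startswith(">"):
                    if header is not None:
                        yield header, "".join(seq)
                    header, seq = line, []
                else:
                    seq.append(line)
            if header is not None:
                yield header, "".join(seq)

    seen, kept_ids = set(), set()
    with open("reads.dedup.csfasta", "w") as out:
        for header, seq in records("reads.csfasta"):
            digest = hashlib.md5(seq.encode()).digest()   # fingerprint instead of full sequence
            if digest not in seen:
                seen.add(digest)
                kept_ids.add(header)
                out.write(f"{header}\n{seq}\n")

    # Second pass over the quality file, keeping only entries whose header survived.
    with open("reads_QV.qual") as qual_in, open("reads.dedup_QV.qual", "w") as qual_out:
        keep = False
        for line in qual_in:
            if line.startswith(">"):
                keep = line.rstrip() in kept_ids
            if keep:
                qual_out.write(line)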



  • Bueller_007
    replied
    Originally posted by drio View Post
    I don't think there is anything like that out there. You need alignments to detect duplicates.
    About the SOLiD instrument filtering, perhaps you are talking about dropping reads with low quality?
    I don't think I need alignments, as I'm talking about identical ~reads~. Removing these duplicates can be performed by Corona prior to data output using the --noduplicates option. However, I can't find an equivalent for data that has already been outputted by the SOLiD system.

    There are multiple programs available for filtering out low-quality reads. That's not what I need.



  • drio
    replied
    I don't think there is anything like that out there. You need alignments to detect duplicates.
    About the SOLiD instrument filtering, perhaps you are talking about dropping reads with low quality?



  • Removing duplicate reads from multigig .csfasta

    Hi all.

    I'm trying to do a de novo transcriptome assembly using ABI SOLiD data. I'm trying to use Velvet/Oases at the moment, and I've found that PCR duplicates seem to be a serious problem during the postprocessing step when the double-encoded contigs are converted back into colour-space reads prior to the final assembly. This step takes at least 72 hours, which is an order of magnitude greater than the time required by the Velvet/Oases assemblers themselves. The postprocessing output file just keeps swelling in size because there are so many PCR duplicates.

    So the question is: is there an efficient program out there I can use to remove duplicate reads from my .csfasta (and preferably the corresponding _QV.qual) file prior to assembly? I know there's an option to do this filtering on the SOLiD machine itself, but the person who did the sequencing didn't enable it.

    Thanks.
