  • Sol
    replied
    Please


    I did a SOLiD transcriptome run, but the reads did not fully align; I'm losing 16 million reads of the data. What happened? Where can I find them? Which software should I use? I used Bioscope. Can I use a Phred filter??
    thanks



  • Lee Sam
    replied
    Originally posted by fkrueger View Post
    I was talking about the -e ceiling (not the -n ceiling which applies for the seed length, which can be anything between 0 and 3), which is defined as follows:

    -e/--maqerr <int>
    Maximum permitted total of quality values at all mismatched read positions throughout the entire alignment, not just in the "seed". The default is 70. Like Maq, bowtie rounds quality values to the nearest 10 and saturates at 30; rounding can be disabled with --nomaqround.

    This means that each mismatch in a FastA file counts as 30, i.e. 3 mismatches (even if they are quite close to the 3' end of the sequence) will exceed the default value of 70 and cause the sequence to be removed (and scored as not aligned).
    I see the manual changed a little since 0.10.0 (last version where I regularly read the manual), which read:

    -e/--maqerr <int> The maximum permitted total of quality values at
    mismatched read positions. This total is also
    called the "quality-weighted hamming distance" or
    "Q-distance." This is analogous to the -e option
    for "maq map". The default is 70. Note that,
    like Maq, Bowtie rounds quality values to the
    nearest 10 and saturates at 30.
    I was confused about how the -e parameter works, then. I had been under the impression that the totals only applied within the seed region. I stand corrected.



  • fkrueger
    replied
    I wasn't talking about an error rate during the bowtie alignment, but the error rate of the sequencing run. We have seen quite a few runs where the error rate increased drastically towards the end of long runs (such as 75 bp). If actual errors from the sequencing run get carried into FastA files, a Phred score of 40, and therefore a very reliable (but wrong) base call, is assumed for that base.

    I was talking about the -e ceiling (not the -n ceiling which applies for the seed length, which can be anything between 0 and 3), which is defined as follows:

    -e/--maqerr <int>
    Maximum permitted total of quality values at all mismatched read positions throughout the entire alignment, not just in the "seed". The default is 70. Like Maq, bowtie rounds quality values to the nearest 10 and saturates at 30; rounding can be disabled with --nomaqround.

    This means that each mismatch in a FastA file counts as 30, i.e. 3 mismatches (even if they are quite close to the 3' end of the sequence) will exceed the default value of 70 and cause the sequence to be removed (and scored as not aligned).
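    The accounting described above can be sketched in shell (a toy illustration only; `quality_sum` is a made-up helper for this post, not part of bowtie):

```shell
# Toy re-implementation of bowtie's Maq-style -e accounting: round each
# mismatch quality to the nearest 10, saturate at 30, then sum the total.
quality_sum() {
  total=0
  for q in "$@"; do
    r=$(( (q + 5) / 10 * 10 ))   # round to the nearest 10
    [ "$r" -gt 30 ] && r=30      # saturate at 30
    total=$(( total + r ))
  done
  echo "$total"
}

# FastA input implies Q40 everywhere, so each mismatch contributes 30:
quality_sum 40 40      # 2 mismatches -> 60, still under the default -e 70
quality_sum 40 40 40   # 3 mismatches -> 90, exceeds 70: read rejected
```

    With real quality scores a low-quality mismatch would contribute less than 30, which is why the default ceiling tolerates more mismatches in poor-quality tails than in FastA input.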



  • IrisZhu
    replied
    Originally posted by Lee Sam View Post
    A high error rate doing bowtie alignments shouldn't make a difference using the parameters she specifies since a seed region from the 5' end of the read is used to do the alignment (if memory serves me correctly).

    The bowtie manual (http://bowtie-bio.sourceforge.net/manual.shtml) says:


    If she was using the end-to-end alignment mode (which at least used to be an option) errors in the reads may be an issue causing this, but the default is the seed-and-extend method. It shouldn't be that hard to find unique 28-mers...
    Oh I see, I got it wrong. The 2 mismatches apply to the seed (28 bp from the high-quality end). But does that mean lots of mismatches are allowed at the other end?
    Thank you so much guys. I learned a lot!



  • Lee Sam
    replied
    Originally posted by fkrueger View Post
    Hi Iris,

    There is another potential issue I can think of which might cause such a low mapping percentage, and that is quite long reads paired with a high error rate.

    As you are apparently using fasta sequences a Phred score of 40 is assumed for each base (making it impossible to look for base call error rates). If the sequencing run had a high error rate towards the end of the run you will potentially accumulate too many high quality mismatches which might result in the rejection of alignments. If this is the case you could try and raise the ceiling of cumulative mismatch scores (default 70) to 150 or so (-e 150). Alternatively you could trim the read length down to a value which should not have a high error rate, e.g. trim 100bp reads down to 50 or 60 bases which should be plenty to allow unique mapping.

    Good luck!
    A high error rate doing bowtie alignments shouldn't make a difference using the parameters she specifies since a seed region from the 5' end of the read is used to do the alignment (if memory serves me correctly).

    The bowtie manual (http://bowtie-bio.sourceforge.net/manual.shtml) says:
    -l/--seedlen <int>
    The "seed length"; i.e., the number of bases on the high-quality end of the read to which the -n ceiling applies. The lowest permitted setting is 5 and the default is 28. bowtie is faster for larger values of -l.
    If she was using the end-to-end alignment mode (which at least used to be an option in the past) errors in the reads may be an issue causing this, but the default is the seed-and-extend method. It shouldn't be that hard to find unique 28-mers...
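    The seed parameters she is implicitly using can be spelled out (a sketch: these are the documented defaults, so adding them should not change behaviour; the index and file names echo her earlier command):

```shell
# -l 28: seed = first 28 high-quality bases; -n 2: up to 2 seed mismatches;
# -e 70: ceiling on summed qualities at mismatched positions across the read.
bowtie -n 2 -l 28 -e 70 -f hg19 myfile.fa
```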
    Last edited by Lee Sam; 08-11-2010, 12:28 PM.



  • IrisZhu
    replied
    Originally posted by fkrueger View Post
    Hi Iris,

    There is another potential issue I can think of which might cause such a low mapping percentage, and that is quite long reads paired with a high error rate.

    As you are apparently using fasta sequences a Phred score of 40 is assumed for each base (making it impossible to look for base call error rates). If the sequencing run had a high error rate towards the end of the run you will potentially accumulate too many high quality mismatches which might result in the rejection of alignments. If this is the case you could try and raise the ceiling of cumulative mismatch scores (default 70) to 150 or so (-e 150). Alternatively you could trim the read length down to a value which should not have a high error rate, e.g. trim 100bp reads down to 50 or 60 bases which should be plenty to allow unique mapping.

    Good luck!
    Thanks for your reply.
    Is what you suggested (raising the mismatch-score ceiling) equivalent to increasing the number of mismatches? The default is 2 (allowing 2 mismatched bases), so could I increase it to, say, 5? But either way will compromise accuracy, right? More poorly matching reads will get mapped.



  • fkrueger
    replied
    Hi Iris,

    There is another potential issue I can think of which might cause such a low mapping percentage, and that is quite long reads paired with a high error rate.

    As you are apparently using fasta sequences a Phred score of 40 is assumed for each base (making it impossible to look for base call error rates). If the sequencing run had a high error rate towards the end of the run you will potentially accumulate too many high quality mismatches which might result in the rejection of alignments. If this is the case you could try and raise the ceiling of cumulative mismatch scores (default 70) to 150 or so (-e 150). Alternatively you could trim the read length down to a value which should not have a high error rate, e.g. trim 100bp reads down to 50 or 60 bases which should be plenty to allow unique mapping.
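    The trimming suggestion can be sketched with awk (assuming unwrapped FastA, i.e. header and sequence each on one line; `reads.fa` and the toy 60-base read are placeholders):

```shell
# Toy input: one 60-base read in unwrapped FastA format.
printf '>r1\n%s\n' "$(printf 'ACGT%.0s' 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15)" > reads.fa

# Keep headers as-is; truncate every sequence line to its first 50 bases.
awk '/^>/ { print; next } { print substr($0, 1, 50) }' reads.fa > reads_50bp.fa
```

    Wrapped FastA records would need their sequence lines joined first before a per-line truncation like this is safe.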

    Good luck!



  • IrisZhu
    replied
    Originally posted by Zigster View Post
    No, but that's easy enough to infer:

    #get defline, strip >
    grep '>' mate1.fa | sed -e 's/>//' > in.seqs

    #get first col with seqnames
    cut -f1 output.bowtie > out.seqs

    #get symmetric difference
    sort in.seqs out.seqs | uniq -u
    Oh yes we can do it this way .... Thank you!



  • IrisZhu
    replied
    Originally posted by babati View Post
    IrisZhu,

    to make bowtie report the unmapped reads, use the command line option:
    --un <output_file.fa>

    Do you know the protocol used to generate the RNAseq data? For example, the totalRNA protocol yields a lot of ribosomal and non-coding RNAs, most of which are not in RefSeq, while polyA-selected protocol should yield more coding RNAs.

    Also, what is the source of your data? Some highly-expressed transcripts in diseased tissues may not be in RefSeq.

    In either case, you should try Lee Sam's suggestion and map directly to the genome and see how many more reads map that didn't map to RefSeq. Let us know what you find!
    My sample is supposed to be mRNA. I mapped the reads to the whole genome and ~30% mapped. If I used bowtie properly, then it must be a problem with the sequencing data itself.
    And I just tried mapping another batch of data to the human RefSeqs; the results are much more reasonable:
    # reads processed: 76269225
    # reads with at least one reported alignment: 46230181 (60.61%)
    # reads that failed to align: 29912348 (39.22%)
    # reads with alignments suppressed due to -m: 126696 (0.17%)


    BTW, is my command "bowtie -p 10 -m 10 -f hg19 myfile.fa (for unpaired reads)" right?
    Thanks!



  • Zigster
    replied
    Originally posted by babati View Post
    IrisZhu,
    to make bowtie report the unmapped reads, use the command line option:
    --unmapped <output_file.fa>
    Can you point me to where that feature is described? It is not working for me.



  • babati
    replied
    IrisZhu,

    to make bowtie report the unmapped reads, use the command line option:
    --un <output_file.fa>

    Do you know the protocol used to generate the RNAseq data? For example, the totalRNA protocol yields a lot of ribosomal and non-coding RNAs, most of which are not in RefSeq, while polyA-selected protocol should yield more coding RNAs.

    Also, what is the source of your data? Some highly-expressed transcripts in diseased tissues may not be in RefSeq.

    In either case, you should try Lee Sam's suggestion and map directly to the genome and see how many more reads map that didn't map to RefSeq. Let us know what you find!
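    Put together with IrisZhu's earlier command line, the flag might be used like this (a sketch; `unmapped.fa` and `aligned.bowtie` are placeholder file names):

```shell
# Reads that fail to align are written to unmapped.fa
# alongside the normal alignment output.
bowtie -p 10 -m 10 -f --un unmapped.fa hg19 myfile.fa aligned.bowtie
```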
    Last edited by babati; 08-10-2010, 01:15 PM.



  • Zigster
    replied
    Originally posted by IrisZhu View Post
    Does anyone know how to make bowtie report unmapped reads?
    What's the option parameter?
    Thanks!
    Iris
    No, but that's easy enough to infer:

    #get defline, strip >
    grep '>' mate1.fa | sed -e 's/>//' > in.seqs

    #get first col with seqnames
    cut -f1 output.bowtie > out.seqs

    #get symmetric difference
    sort in.seqs out.seqs | uniq -u
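    A tiny self-contained demo of the trick above (hypothetical read names; this assumes each read appears at most once in each file, which holds when bowtie reports one alignment per read):

```shell
# Three reads went in...
printf 'read1\nread2\nread3\n' > in.seqs
# ...two came out aligned; read2 did not.
printf 'read1\nread3\n' > out.seqs

# uniq -u keeps lines occurring exactly once in the merged sort,
# i.e. the reads present in only one of the two files.
sort in.seqs out.seqs | uniq -u   # prints: read2
```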



  • IrisZhu
    replied
    Originally posted by Lee Sam View Post
    that's why I'm a bit mystified by Ben's comment about insert sizes because that shouldn't be an issue if the fragment you're sequencing is basically represented by the RefSeq transcript. Off the top of my head (i.e. I'm not thinking about this very hard), my initial suspicion might be that you're sequencing some strange artifacts (maybe some contamination?). Maybe try aligning a subset of your reads to the set of UCSC transcripts to see if the number improves substantially (as you might expect with more references?). Also, I noticed you aren't supplying Bowtie with arguments related to the insert size? Are your insert sizes in line with the bowtie defaults?



    Looks like you beat me to posting. I guess the insert size issue is refuted by the low individual mapping rate... Might as well align to hg18/19 to see if you've got genomic contamination, right?
    I mapped the two mates separately, i.e. in an unpaired manner. So there's no insert size here.



  • IrisZhu
    replied
    Does anyone know how to make bowtie report unmapped reads?
    What's the option parameter?
    Thanks!
    Iris



  • Lee Sam
    replied
    Originally posted by IrisZhu View Post
    I am supposed to. That's why I don't use tophat.
    that's why I'm a bit mystified by Ben's comment about insert sizes because that shouldn't be an issue if the fragment you're sequencing is basically represented by the RefSeq transcript. Off the top of my head (i.e. I'm not thinking about this very hard), my initial suspicion might be that you're sequencing some strange artifacts (maybe some contamination?). Maybe try aligning a subset of your reads to the set of UCSC transcripts to see if the number improves substantially (as you might expect with more references?). Also, I noticed you aren't supplying Bowtie with arguments related to the insert size? Are your insert sizes in line with the bowtie defaults?

    Originally posted by IrisZhu View Post
    Following Ben's suggestion, I mapped the two mates separately. But still only a small percentage of reads got mapped.

    ===mate1=====
    # reads processed: 130653415
    # reads with at least one reported alignment: 47875358 (36.64%)
    # reads that failed to align: 82659307 (63.27%)
    # reads with alignments suppressed due to -m: 118750 (0.09%)
    Reported 47875358 alignments to 1 output stream(s)

    ====mate2=====
    # reads processed: 130653415
    # reads with at least one reported alignment: 35773578 (27.38%)
    # reads that failed to align: 94798249 (72.56%)
    # reads with alignments suppressed due to -m: 81588 (0.06%)
    Reported 35773578 alignments to 1 output stream(s)

    If bowtie is working well, then there are two explanations: either the human RefSeqs are far from complete, or the sample is contaminated with a lot of pre-mRNA/genomic DNA.
    Any comments are welcome

    Iris
    Looks like you beat me to posting. I guess the insert size issue is refuted by the low individual mapping rate... Might as well align to hg18/19 to see if you've got genomic contamination, right?
    Last edited by Lee Sam; 08-10-2010, 12:06 PM.

