  • #16
    @lh3 (Heng Li): you wrote above, "I never do simulation with error free reads." Yet you wrote on your webpage that you "simulate error free reads from the diploid genome." That is why I pointed out that you used error-free reads: you said so yourself.

    @genericforms: you assert without proof that BWA "clearly wins out" once false positives are accounted for. Our results contradict this. In our experiments we simulated both sequencing error (using the ART simulator v1.1.5) and variation between individuals, using 3 million paired-end reads. Bowtie2 assigned more reads to their true point of origin than BWA did.

    We have submitted our results in a paper which is in the peer review process right now. I encourage both of you to do the same. Un-refereed claims on this forum are little more than anecdotes (which is true of my comments too, of course, so I won't be posting any more).

    Meanwhile I encourage everyone to try Bowtie2, which in our experiments has demonstrated unparalleled speed, sensitivity, and accuracy.



    • #17
      Steven, the sentence following "error free" explains it: "Although reads are error free, many reads cannot be perfectly mapped to the reference genome due to the presence of variations." (This sentence was on the very first version of that webpage.)

      Perhaps you are still conflating overall sensitivity with sensitivity to unique hits and with specificity. That is probably my fault for not explaining it clearly. As many others are also reading this thread, I will try to do better. I will only compare bwa-sw and bwa-short, to avoid sensitive issues.

      I have known for a long time that on single-end 100bp real data, bwa-sw almost always correctly maps more reads than bwa-short. However, as bwa-sw does not have sufficient power to distinguish a good hit from a bad one, it has to assign low mapping quality to a lot of perfectly "unique" hits to avoid giving too many high-quality false alignments. The effect is that if we run a SNP caller, we sometimes call more correct SNPs from the bwa-short alignment than from bwa-sw, although bwa-sw maps many more reads. To this end, sensitivity is only meaningful to real applications when the mapper can disambiguate good and bad hits. Bwa-sw is much more sensitive than bwa-short overall, but not always more sensitive for real applications (EDIT: bwa-sw may have better specificity for 100bp SE data, though).

      For variant calling, sensitivity is actually not the major concern. We have already dropped several percent of reads in repetitive regions and filtered tens of percent of reads with the Illumina pipeline, so a marginally higher false negative rate does not hurt too much. Sensitivity is even less of a concern given deep sequencing, because the coverage compensates for the alignments missed due to excessive sequencing errors.

      In contrast, specificity is much more important, especially since mapping errors tend to be recurrent: if we wrongly map one read, we are likely to wrongly map other reads in the same region affected by the same true variants. Sequencing coverage alone may not help greatly to correct wrong variant calls caused by mapping errors. To me it is critical to evaluate specificity, which you have not talked about much in your posts.

      Note that to evaluate specificity, we have to count the fraction of mapped reads that are misplaced. The overall number of correctly mapped reads has little to do with specificity: if a mapper maps more correct reads but also many more wrong reads, it is still a mapper with low specificity. Take bwa-sw and bwa-short as an example again. If reads have low-quality tails, bwa-sw can even map more reads than bwa-short given paired-end data, but I know for sure that bwa-short will greatly outperform bwa-sw in terms of specificity, because bwa-sw does not use the pairing information to correct wrong alignments while bwa-short does.

      Again, having revisited the whole thread, I think we are just focusing on different measurements. We are both correct on the measurements we are interested in. Genericforms actually confirms both of us.

      IMHO, being peer-reviewed does not always mean being more correct. If I really wanted to write a paper on this evaluation, I am sure that with my track record I could get it published, but that would not make me more correct than you or others. Looking back, my previous evaluations of maq/bwa/bwa-sw were all flawed (I thought they were the best possible at the time of writing the manuscripts, but I was wrong), yet they were all accepted. My review of alignment algorithms uses a similar ROC plot, and it was peer reviewed and published, too.

      Actually, 1000g followed a similar procedure to evaluate read mappers about two years ago. I was not involved except for suggesting the measurements (the simulation, evaluation and program running were all done by others). In some ways this is better than peer review, in that the measurement has been reviewed by many more people. Also, in my benchmark the whole procedure is open source and every command line is given. Everyone can rerun it themselves to check whether I am biased, wrong or lying. Many published papers are not reproducible at this level.

      Given that I have always thought you are correct on the measurements you are using, I will stop posting, too. This discussion has been very helpful to me. Thank you.
      Last edited by lh3; 11-07-2011, 02:17 PM. Reason: Correct grammatical errors; mention illumina pipeline



      • #18
        Originally posted by salzberg:
        We have submitted our results in a paper which is in the peer review process right now. I encourage both of you to do the same. Un-refereed claims on this forum are little more than anecdotes (which is true of my comments too, of course, so I won't be posting any more).
        You don't see this kind of online discussion as part of the future of peer review then?



        • #19
          Hi Heng,
          I appreciate your clarifications, which are helpful.

          I do want to mention that you are using "specificity" where I am pretty sure you mean "precision". (This is a widespread problem in the field, but I'm trying to correct it where I can.) E.g., you wrote: "If a mapper maps more correct reads but also much more wrong reads, it is still a mapper with low specificity." The definition of specificity is:
          specificity = TN/(TN+FP), i.e. true negatives over (true negatives + false positives)
          A "true negative" in the short-read alignment world is not very well defined, but we could define it as not aligning a read that doesn't belong to the genome at all. In any case, that's not what you mean.

          Precision is defined as TP/(TP+FP). So I think you mean "precision" in what you are describing.
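To make the distinction concrete, here is a minimal Python sketch with toy counts (the numbers and the `metrics` helper are invented for illustration, not taken from any benchmark in this thread):

```python
# Toy confusion counts for a hypothetical mapper (invented numbers):
# 900 reads placed correctly (TP), 100 misplaced (FP),
# 50 mappable reads left unmapped (FN), 10 contaminant reads rejected (TN).

def metrics(tp, fp, tn, fn):
    """Return (sensitivity, specificity, precision) from confusion counts."""
    sensitivity = tp / (tp + fn)  # fraction of true placements recovered
    specificity = tn / (tn + fp)  # needs true negatives, ill-defined for mappers
    precision = tp / (tp + fp)    # fraction of reported placements that are correct
    return sensitivity, specificity, precision

sens, spec, prec = metrics(tp=900, fp=100, tn=10, fn=50)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} precision={prec:.3f}")
```

With only a handful of true negatives available, specificity is dominated by the false positives and says little, while precision directly captures the "fraction of mapped reads that are misplaced" notion discussed earlier in the thread.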

          We know that Bowtie2 is not perfect - far from it! But we think it is a substantial improvement over Bowtie1. Ben Langmead has already made some changes (just this past week) to improve Bowtie2's accuracy. We'll keep at it.



          • #20
            Yes, it was my fault for using the wrong term. Sorry for the confusion. To clarify, I mean we want to achieve a low false positive rate (this should be the right term).

            Bowtie2 is definitely a substantial improvement over Bowtie1 in almost every aspect, and I can really see the encouraging improvement in terms of low FPR between beta2 and beta3; it is all in the right direction. If you also focus your development on a low FPR, you will probably gain further improvement. This will be good for everyone.



            • #21
              So an algorithm that has a high sensitivity is likely to have a low specificity? I don't think these terms mean much outside of a hospital-type test. What we want is accuracy.



              • #22
                This is not at all what Heng, Steve, or I suggested.



                • #23
                  For a single mapper, it is true that the more it maps, the higher its FPR. But when you compare two mappers, it is possible for one mapper to both map more reads and have a lower FPR. That one is the better mapper.
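A toy numeric illustration of that dominance (counts invented; `error_rate` is a made-up helper, using the fraction of mapped reads that are misplaced as the error measure):

```python
# Two hypothetical mappers run on the same simulated read set (invented counts).
def error_rate(mapped, wrong):
    """Fraction of mapped reads that are misplaced."""
    return wrong / mapped

mapper_a = {"mapped": 180_000, "wrong": 900}  # maps fewer reads, higher error rate
mapper_b = {"mapped": 190_000, "wrong": 600}  # maps more reads, lower error rate

# B dominates A: more reads mapped AND a smaller fraction of them misplaced.
assert mapper_b["mapped"] > mapper_a["mapped"]
assert error_rate(**mapper_b) < error_rate(**mapper_a)
print("mapper B dominates mapper A")
```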



                  • #24
                    I don't particularly wish to get drawn into a mapper war, and I'll say here that I haven't benchmarked these tools against each other. However, thinking more downstream, I think averaged sensitivity and specificity metrics aren't sufficient to tell the whole story.

                    I agree with Heng that the quality of the mapping score is very important for some forms of analysis. Furthermore, I'd go so far as to say the variance of depth is important too. E.g. imagine we have two aligners, one mapping 95% of the data and the other 90%. The one mapping 95% maps well to 95% of the genome and atrociously to the other 5%, while the one mapping 90% maps across the entire genome in a relatively uniform manner; I think most would feel happier with the 90% sensitivity aligner.

                    So say we have 100mers of a simulated genome with X% of SNPs. We can algorithmically produce 100x depth by starting a new 100mer at every position in the genome, and then give the reads realistic-looking quality profiles with error rates taken from real data, etc. (So as real as can be, but with a perfectly uniform distribution and known mapping locations.)

                    Then we can plot the depth distribution. How many sites are there where a particular combination of SNPs or errors has caused a dip in coverage? Given that we're almost always looking at very specific locations, often around discrepancies, this is perhaps a key metric in analysis.
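The tiling procedure above can be sketched in Python under simplifying assumptions (toy genome, no simulated errors, every read mapped perfectly; `tile_reads` and `depth_profile` are names invented here):

```python
from collections import Counter

def tile_reads(genome, k):
    """Yield (start, read) with one k-mer starting at every position."""
    for i in range(len(genome) - k + 1):
        yield i, genome[i:i + k]

def depth_profile(mapped_starts, genome_len, k):
    """Per-base depth implied by the start positions a mapper reported."""
    depth = [0] * genome_len
    for s in mapped_starts:
        for j in range(s, min(s + k, genome_len)):
            depth[j] += 1
    return depth

genome = "ACGTACGTACGTACGTACGT"  # toy 20 bp genome; real use would be 100-mers
k = 5
starts = [s for s, _ in tile_reads(genome, k)]  # here: perfect mapping of every read
depth = depth_profile(starts, len(genome), k)
print(Counter(depth))  # interior bases sit at depth k; the ends ramp down
```

With a real mapper, `starts` would come from its SAM output instead, and any interior site where depth falls below k marks reads lost to SNP clusters or errors, which is the dip-counting metric suggested above.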



                    • #25
                      Originally posted by jkbonfield:
                      I think most would feel happier with the 90% sensitivity aligner.
                      Sensitivity in this context is a liability, since a high-sensitivity aligner is likely to produce many erroneous alignments and base calls, on the order of thousands or millions, with no subsequent higher-cost procedure to resolve them, and manual curation is prohibitive. Furthermore, the errors are likely to be both precise and biased: given several identical reads, the aligner will make the same errors in the same way. Using a sensitive aligner for scaffolding, for example, would be a very large problem.



                      • #26
                        Originally posted by jkbonfield:
                        I don't particularly wish to get drawn into a mapper war, and I'll say here that I haven't benchmarked these tools to compare. However thinking more downstream I think averaged sensitivity and specificity metrics aren't sufficient to show the whole story.
                        Knowing the average FNR/FPR is not enough. This is where the ROC curve shows its power. It gives the full spectrum of the accuracy.

                        Originally posted by jkbonfield:
                        So say we have 100mers of a simulated genome with X% of SNPs. We can algorithmically produce 100x depth by starting a new 100mer on every position in the genome, and then give them appropriate real looking quality profiles with error rates from real data, etc. (So as real as can be, but perfectly uniform distribution with known mapping locations.)

                        Then we can plot the depth distribution. How many sites are there were a particular combination of SNPs or errors has caused a dip in coverage? Given we're almost always looking for very specific locations, often around discrepancies, this is perhaps a key metric in analysis.
                        But you are right that even the ROC for mappers is not informative enough for real applications. Gerton shares your view, too. What is more informative is to know how the mapper reacts to variants, especially clustered variants or variants in semi-repetitive regions. The ROC for variants should be more indicative. It is just more difficult to do such an analysis because we have to simulate and map many more reads to get a good picture. Most, if not all, read simulators do not get the SNP distribution right, either.



                        • #27
                          Originally posted by lh3:
                          Knowing the average FNR/FPR is not enough. This is where the ROC curve shows its power. It gives the full spectrum of the accuracy.
                          Heng - I like the look of the ROC curve, but I cannot work out exactly how it is derived from reading your web page. For example I don't understand why some mappers have many data points, but Bowtie, Soap2 and Gsnap have only one. Could you give a brief explanation how you get from the (single file?) SAM output of a specific aligner to the plot?

                          Sorry if this is a dumb question!



                          • #28
                            nickloman, I believe the thing that varies in the figures for the other mappers is the mapping quality. GSNAP, Bowtie and (apparently) SOAP2 do not calculate mapping quality, so there is nothing to vary to get a line.



                            • #29
                              Hi Brent - that would make sense: varying minimum mapping quality thresholds and seeing the result. It would be nice if those values were also plotted on the graph somehow.



                              • #30
                                @nickloman

                                The output of the wgsim_eval.pl program looks a bit like the data below; Bowtie 1 always gives a mapping score of 255 (column 1). I'm guessing that Bowtie 2 has many FPs at a mapping score of 1 (column 3 where column 1 == 1), but cumulatively finds more TPs over all mapping scores (column 2 where column 1 == 1). But I was also wondering about the exact meaning of the wgsim_eval.pl output.

                                % tail *.roc
                                ==> bowtie2.roc <==
                                14 172922 11
                                13 172925 12
                                12 177943 27
                                11 177945 28
                                10 179990 37
                                9 179995 40
                                4 180250 40
                                3 187273 578
                                2 187324 580
                                1 199331 5877

                                ==> bowtie.roc <==
                                255 86206 1740

                                ==> bwa.roc <==
                                10 192354 72
                                9 192560 107
                                8 192595 107
                                7 192628 110
                                6 192652 115
                                5 192669 116
                                4 192681 117
                                3 192731 117
                                2 192741 118
                                1 192762 119
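Reading the columns as above (column 1 = mapQ threshold, column 2 = cumulative reads mapped at or above that mapQ, column 3 = cumulative reads mapped wrongly; this reading is an assumption, not confirmed documentation), each row becomes one ROC point. A small Python sketch, with `roc_points` a made-up helper and three sample rows copied from the bwa.roc listing above:

```python
# Three rows from the bwa.roc listing: mapQ threshold, cum. mapped, cum. wrong.
bwa_roc = """\
10 192354 72
9 192560 107
1 192762 119"""

def roc_points(text):
    """Turn cumulative .roc rows into (mapQ threshold, error rate, reads mapped)."""
    points = []
    for line in text.splitlines():
        q, mapped, wrong = (int(x) for x in line.split())
        points.append((q, wrong / mapped, mapped))
    return points

for q, err, mapped in roc_points(bwa_roc):
    print(f"mapQ>={q}: {mapped} mapped, fraction wrong {err:.2e}")
```

Lowering the mapQ threshold moves along the curve toward more reads mapped but a larger fraction of them wrong, which is why a mapper emitting a single fixed score (Bowtie 1's 255) contributes only one point.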

                                Chris

