
  • #46
If duplications/deletions are rare enough, then median coverage should be fine. The median is robust to the spikes and troughs that make the mean a misleading descriptive statistic.
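As a minimal illustration of that point (the depth values here are invented), a duplicated region's coverage spike drags the mean upward while the median stays near the typical depth:

```python
from statistics import mean, median

# Per-base depths across a window; the run of ~60s mimics a
# duplicated region at roughly double the background coverage.
depths = [30, 31, 29, 30, 32, 60, 62, 61, 59, 30, 28, 31]

print(mean(depths))    # pulled upward by the spike
print(median(depths))  # stays near the typical depth
```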


    • #47
      Originally posted by mrood View Post
      It seems to me that if you are mapping to a reference genome and there are regions that have more than twice the average coverage that it is probably the result of a duplication or something in the genome of the sequenced organism.
Natural variation in sequencing coverage can easily produce 2-fold differences in coverage, or more.

      Likewise, if it has very poor coverage the organism likely does not have that region in its genome and it is likely the result of improper mapping.
      Or, the region is there, but so divergent from your reference that reads are mapping poorly, or the region could be GC rich or something, causing few reads to be generated there.


      • #48
I am trying to calculate the average coverage for a given region, e.g. 200 bp, where my reads are aligned. Is there any software that can do that without my having to write any commands? Please note that I have no bioinformatics background and don't have access to a Linux or similar operating system. The best solution I have so far is to use the Savant genome browser and convert the .bam files into .bam.cov.tdf files, which shows me the maximum coverage.


        • #49
          Is there any software that can do that without actually having to write any commands
          Er, you want a program to run that means you don't have to run a program? That's a difficult request.

          I suppose you could try using Galaxy, which hides all that pesky "running commands" stuff from you. That has a feature coverage tool, but requires input files to be in BED format, and presumably there are other tools closer to what you desire. From this email:

          To calculate coverage, please see the tool "Regional Variation ->
          Feature coverage". Query and target must both be in Interval/BED format.
          Query data in Interval/BED format is possible in most of the dataflow
          paths through the tools and from external sources. The reference genome
          file will likely need to be imported and formatted.
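For what it's worth, "average coverage over a region" is just the total number of aligned bases overlapping the region divided by the region length. A minimal pure-Python sketch of that arithmetic (the read coordinates below are invented; in practice a tool such as samtools, Galaxy, or a genome browser extracts them from the BAM file):

```python
# Reads as (start, end) half-open intervals on the reference;
# these example coordinates are invented for illustration.
reads = [(100, 150), (120, 170), (140, 190), (210, 260)]

def average_coverage(reads, region_start, region_end):
    """Mean per-base depth over the region [region_start, region_end)."""
    total_overlap = 0
    for start, end in reads:
        # Number of bases of this read falling inside the region
        overlap = min(end, region_end) - max(start, region_start)
        if overlap > 0:
            total_overlap += overlap
    return total_overlap / (region_end - region_start)

# Four 50-base reads over a 200 bp region -> 200 aligned bases / 200 bp
print(average_coverage(reads, 100, 300))
```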


          • #50
            calculating coverage depth

            Originally posted by westerman View Post
            From my understanding yes they are different and what you are calculating is the 'X' coverage. I.e., given the number of raw bases sequenced how many times (or X) does the sequencing potentially cover the genome.

            % coverage is how well the genome is actually covered after all mapping and assembly is done.

            As an example let's say we have 300M reads of 50 bases or 1.5 Gbase total. Our genome is 150M bases. After mapping (or assembly) we have a bunch of non-overlapping contigs that have 100M bases total.

            So our 'X coverage' is 10X (1.5 Gbases / 150 Mbases)
            Our '% coverage' is 66.6% (100 Mbases / 150 Mbases)

One way to think about this is that percentages generally range from 0% to 100%, and so having a percentage greater than 100 can be confusing.

            I use the haploid genome size or more specifically the C-value times 965Mbases/pg.

I went through this post and understood how we express coverage depth, but I need a small clarification.

1. Does this coverage depth account for mutations in the reads [I mean non-matching positions with respect to the reference sequence], since it only uses the number of bases in the sample and the number of bases in the reference sequence?

2. If a read matches at more than one location, won't the coverage depth be inflated? Is there a way to reduce that error?
            Sr. Application Scientist, Apsara Innovations, Bangalore


            • #51
1. The SNPs/indels are usually not a big part of the genome; I doubt they would throw off the calculation by even a percent.

2. Count each read only once; i.e., choose the best match, or if there are multiple equally good matches, pick one of them at random.

Really, unless you are working with a well-characterized organism (e.g., human), the numbers are going to be 'squishy' in any case. They are mainly there to give you an idea of how good your sequencing is. In other words, if you calculate that you had 50x coverage (a nice de novo assembly target) but only get 10% coverage against a closely related organism, then that tells you something.
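Point 2 above could be sketched like so: keep one alignment per read name (the best score, breaking ties at random), so each read contributes to depth exactly once. The alignment records here are hypothetical tuples, not a real BAM parser:

```python
import random

# Hypothetical alignments: (read_name, alignment_score, position)
alignments = [
    ("read1", 60, 1000),
    ("read1", 60, 5000),   # equally good second hit
    ("read2", 55, 2000),
    ("read2", 40, 7000),   # worse secondary hit
]

def pick_one_per_read(alignments, seed=0):
    """Return one (score, position) per read: best score, ties broken at random."""
    rng = random.Random(seed)
    by_read = {}
    for name, score, pos in alignments:
        by_read.setdefault(name, []).append((score, pos))
    chosen = {}
    for name, hits in by_read.items():
        best = max(score for score, _ in hits)
        ties = [hit for hit in hits if hit[0] == best]
        chosen[name] = rng.choice(ties)  # pick randomly among equal-best hits
    return chosen

print(pick_one_per_read(alignments))
```

With this, each read adds its length to the depth calculation exactly once, regardless of how many places it maps.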


              • #52
                Hi everyone;

I have just read your posts, but I still have a doubt in mind.

I'm working with the Ion PGM to generate whole-genome sequences of some RNA viruses. Then I want to build a phylogenetic tree with the consensus sequences of each virus I can identify. So the question is: how much coverage (reads per base) do I need in order to make good consensus sequences for my phylogenetic analysis? I don't want to see or analyze the variants or quasispecies, so I just need the minimum necessary.

                thanks for your time
                my best



                • #53
                  50-100x coverage per genome is good. Much more than that and you will start getting misassemblies.

For viruses I suggest using Mira. It is a good small-genome assembler that can handle a lot of potential misassemblies.


                  • #54
                    thanks westerman

