  • Large discrepancy between de novo assembly versus actual biological genome size

    Hello everyone,

    I’m in the midst of assembling a eukaryotic genome for the first time, working in a non-model plant species, and I could use some insight. My data consist of reads from a full lane of Illumina HiSeq V4 2x125 bp sequencing with an insert size of ~350 bp. Before starting my assembly, I used flow cytometry to estimate nuclear 2C genome content, which returned 2C = 0.82 pg of DNA, or about 800 Mb, for a haploid genome size of about 400 Mb. However, k-mer-counting programs such as Jellyfish predict an assembly size of less than half that, about 190 Mb, and sure enough, when I run the assemblies, the sum of scaffold lengths is always in the range of 170-215 Mb.
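    For reference, the pg-to-Mb conversion behind those numbers can be sketched as follows (a minimal sketch; the 1 pg ≈ 978 Mb constant is the standard flow-cytometry conversion factor, and the 2C value is the one from the post):

    ```python
    # Convert a flow-cytometry 2C value (picograms) to a haploid genome size (Mb).
    # 1 pg of double-stranded DNA corresponds to ~978 Mb.
    PG_TO_MB = 978

    def haploid_size_mb(two_c_pg: float) -> float:
        """2C content in pg -> 1C (haploid) genome size in megabases."""
        return two_c_pg * PG_TO_MB / 2

    size = haploid_size_mb(0.82)  # 2C = 0.82 pg, as measured above
    print(f"haploid genome size ~{size:.0f} Mb")  # ~401 Mb, i.e. "about 400 Mb"
    ```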

    Does anyone have any idea why the nuclear genome size is so much larger than what I’ve been able to assemble? My first hypothesis is heavy repeat content, but I need a way to show that my reads support this hypothesis, and I’m brand new to analyzing repeats. I’m sure there is a sizeable set of repeats in my organism’s genome, but is there a way to estimate the approximate repeat density as a percentage of the total genome, given that I’m confident in my nuclear genome size?

    Any related thoughts/comments would be much appreciated!

  • #2
    Originally posted by NYGen View Post
    My guess would be your flow cytometry result was wrong. Could be endo-reduplication or bad size standards throwing you off.

    Since a 200-300 Mb genome is probably about 10X easier to assemble than an 800 Mb genome, count your blessings.

    I hear you about repeats -- I would like to see a transposable element-aware assembler that tackled the repetitive fraction of the genome first.

    --
    Phillip



    • #3
      I do not know how the flow cytometry measurement works, but 800 = 4 x 200; are you sure your plant is not tetraploid?



      • #4
        @pmiguel - I doubt that the FCM analysis is off, as we ran 3 replicates and they were consistent around the value above. I hear you, though, about the possibility of the standards being off, so I'm also having nuclear genome content estimated for two sister species that frequently hybridize with my species of interest. Do you think I should also send more samples of my species of interest? If it is a standard-based error, then I should definitely send them again; I was going to estimate the sister taxa anyway. Perhaps if the FCM results for the sister taxa diverge from my species of interest or from each other, I'll plan to send more samples of the species whose genome I'm assembling.

        @Chipper - good catch. That's been on my mind for a while now. My species of interest is part of a clade in which each member has a diploid chromosome count of 2m, where m is the 2n chromosome number of every species in the outgroup, so my species is probably an ancient polyploid along with the rest of its clade. However, I'm unconvinced that I can treat this genome as coming from a polyploid, because a recently published congeneric genome estimates repeat content at >50%. So, if I assume that my HiSeq reads are unable to span the majority of repeat elements, do you think there's a basis for suspecting that I'm only assembling half of the ultimate haploid genome size as a result of the repeat structures?
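        A quick back-of-envelope check on the polyploid-collapse idea (the numbers are the ones from this thread; the assumption that a de Bruijn assembler merges the two subgenomes into one is mine):

        ```python
        # If the ~400 Mb haploid genome is an ancient tetraploid whose two
        # subgenomes are similar enough for a de Bruijn assembler to merge
        # homeologous regions, the assembly would approach half the 1C size.
        haploid_mb = 400              # from flow cytometry (2C = 0.82 pg)
        collapsed_mb = haploid_mb / 2
        observed_range = (170, 215)   # scaffold-length sums reported above
        in_range = observed_range[0] <= collapsed_mb <= observed_range[1]
        print(collapsed_mb, in_range)
        # 200.0 True -- consistent with, though not proof of, homeolog collapse
        ```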

        Thanks for your thoughts!



        • #5
          Dear NYGen,
          I have the same problem with my plant genome.
          Did you ever reach a conclusion?



          • #6
            Hey GAFA, I would look into estimating repeat content, which you can do with RepeatExplorer (at my last check there was a Galaxy server specifically for running this analysis quickly in a GUI). My conclusion for my original problem was that the discrepancy arose from a combination of: 1) ancient tetraploidy, and, more interestingly, 2) high-density repeat content that confounds the de Bruijn graph-based de novo assembly approach.
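            As a rough, k-mer-based complement to RepeatExplorer, a Jellyfish histogram (from `jellyfish count -m 21 -C` followed by `jellyfish histo`) can give a first-pass repeat fraction. A minimal sketch, assuming you've parsed the histogram into (coverage, count) pairs; the error cutoff and the "2x peak" repeat threshold are my assumptions, not fixed rules:

            ```python
            def size_and_repeat_fraction(histo, min_cov=3):
                """Estimate genome size and repetitive fraction from a k-mer histogram.

                histo: list of (coverage, count) pairs, e.g. from `jellyfish histo`.
                min_cov: depths below this are treated as sequencing-error k-mers.
                """
                clean = [(c, n) for c, n in histo if c >= min_cov]
                # Homozygous single-copy peak: the most abundant k-mer depth.
                peak_cov = max(clean, key=lambda cn: cn[1])[0]
                total = sum(c * n for c, n in clean)
                genome_size = total / peak_cov
                # K-mers far above the peak occur multiple times in the genome: repeats.
                repeats = sum(c * n for c, n in clean if c > 2 * peak_cov)
                return genome_size, repeats / total

            # Toy histogram: error k-mers at 1-2x, single-copy peak at 50x, repeats at 150x.
            histo = [(1, 1000), (2, 500), (50, 1000), (150, 100)]
            size, frac = size_and_repeat_fraction(histo)
            print(size, round(frac, 2))  # 1300.0 0.23
            ```

            On real data you'd want a tool like GenomeScope for this, but the sketch shows why a repeat-heavy genome inflates the high-coverage tail of the histogram rather than the assembly itself.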

            The first order of business is probably looking for similar analyses already done in related taxa, if you're lucky enough to have a popular study system with at least one established, post-draft genome. Happy to help further; let me know.

