Get coverage of each site on contig generated by Velvet

  • #1

    Hello everyone,
    I am now working with a Velvet de novo assembly of my short-read data. The result directory contains the following files (contigs.fa Graph2 LastGraph Log PreGraph Roadmaps Sequences stats.txt velvet_asm.afg). I want to get the depth of coverage at each site on the contig sequences contained in "contigs.fa". Is the coverage information contained in the "velvet_asm.afg" file?
    Any suggestions will be greatly appreciated!

  • #2
    that should be in the deflines
    >NODE_1_length_563_cov_201.866791

    201.866791 is the kmer coverage Ck (read kmers per contig kmers)

    if you know your average read length you can convert to get x-coverage (read bp per contig bp)

    Cx=Ck*L/(L-k+1)

    where k is your kmer setting and L is your read length
    so if I used a kmer of 37 and an average read length of 50
    Cx=202*50/(50-37+1)=721X

    if you want depth on a base pair granularity you are probably best off realigning and using Samtools
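    The Ck-to-Cx conversion above can be sketched in a few lines of Python, using the example numbers from this post (k=37, read length 50; the function name is illustrative):

    ```python
    def kmer_to_x_coverage(ck, read_len, k):
        """Convert Velvet k-mer coverage Ck to nucleotide coverage Cx.

        Cx = Ck * L / (L - k + 1), where L is the average read length
        and k is the k-mer (hash) length used for the assembly.
        """
        return ck * read_len / (read_len - k + 1)

    # Coverage value taken from the defline >NODE_1_length_563_cov_201.866791
    print(round(kmer_to_x_coverage(201.866791, 50, 37)))  # -> 721
    ```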
    --
    Jeremy Leipzig
    Bioinformatics Programmer
    --
    My blog
    Twitter



    • #3
      Originally posted by genelab View Post
      I am now working with a Velvet de novo assembly of my short-read data. The result directory contains the following files (contigs.fa Graph2 LastGraph Log PreGraph Roadmaps Sequences stats.txt velvet_asm.afg). I want to get the depth of coverage at each site on the contig sequences contained in "contigs.fa". Is the coverage information contained in the "velvet_asm.afg" file?
      If I understand correctly, you want the depth of read coverage at each nucleotide position in the genome?

      Velvet does not produce this. However, the .afg file contains, for each read, where it is in a contig. You could use this to build a "coverage" report. I suspect AMOS might be able to do this for you. The software "Tablet" can load the .afg file and will draw the coverage for you too.
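      The idea of building a coverage report from read placements can be sketched as follows: given each read's start/end offsets on a contig (0-based, half-open — the kind of placement information the .afg records supply), accumulate per-base depth. Parsing of the AMOS .afg format itself is omitted here; the placements are hypothetical.

      ```python
      def per_base_depth(contig_len, placements):
          """Per-base read depth for one contig.

          placements: iterable of (start, end) offsets, 0-based half-open,
          as would be extracted from read-placement records in an .afg file.
          """
          depth = [0] * contig_len
          for start, end in placements:
              for i in range(start, min(end, contig_len)):
                  depth[i] += 1
          return depth

      # Two overlapping reads on a 6 bp contig
      print(per_base_depth(6, [(0, 4), (2, 6)]))  # -> [1, 1, 2, 2, 1, 1]
      ```

      For real data, realigning the reads and running samtools (as suggested in #2) will be far faster than a pure-Python loop like this.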



      • #4
        Originally posted by Torst View Post
        If I understand correctly, you want the depth of read coverage at each nucleotide position in the genome?

        Velvet does not produce this. However, the .afg file contains, for each read, where it is in a contig. You could use this to build a "coverage" report. I suspect AMOS might be able to do this for you. The software "Tablet" can load the .afg file and will draw the coverage for you too.
        Yes, my purpose is to get the depth of read coverage at each nucleotide position in the contig generated by Velvet. Thanks for your great suggestion, I will try it.



        • #5
          Originally posted by Zigster View Post
          that should be in the deflines
          >NODE_1_length_563_cov_201.866791

          201.866791 is the kmer coverage Ck (read kmers per contig kmers)

          if you know your average read length you can convert to get x-coverage (read bp per contig bp)

          Cx=Ck*L/(L-k+1)

          where k is your kmer setting and L is your read length
          so if I used a kmer of 37 and an average read length of 50
          Cx=202*50/(50-37+1)=721X

          if you want depth on a base pair granularity you are probably best off realigning and using Samtools

          Zigster,
          thanks for your great help!

          genelab



          • #6
            Hi, all
            I am new to de novo genome assembly. I have fastq sequence data that I have to assemble using Velvet. I used the VelvetOptimiser script with hash lengths from 27 to 41, and it predicted the best to be 37. The output file contigs.fa contains 260 contigs, whereas the log file reports 283 nodes — where have the rest gone? Is the length given in contigs.fa in k-mers? How do I calculate its actual nucleotide length in bp? How do I tell whether the assembly is good or bad? Final stats given after running the script:
            Final graph has 283 nodes and n50 of 347, max 2336, total 68614, using 19064/50000 reads
            Why is the number of used reads so low?
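            For judging an assembly, the N50 figure Velvet reports is the standard starting point. It can be computed from the contig lengths like this (the lengths below are illustrative, not this poster's actual assembly):

            ```python
            def n50(lengths):
                """N50: the length of the contig at which the sorted,
                cumulative contig length first reaches half the total."""
                lengths = sorted(lengths, reverse=True)
                half = sum(lengths) / 2
                running = 0
                for length in lengths:
                    running += length
                    if running >= half:
                        return length

            print(n50([80, 70, 50, 40, 30, 20]))  # -> 70
            ```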
