Hi all, I've recently acquired a large and fairly complicated genome dataset. No one in my department has done anything of this magnitude, so I was hoping for some direction. The genome is eukaryotic and estimated to be approximately human-sized (~3 Gb). We had it sequenced on an Illumina HiSeq 2000 with the following:
Lane 1: Individual A, 300-1000bp insert, 100bp paired-end reads.
Lane 2: Individual B, 300-1000bp insert, 100bp paired-end reads. Also, two mate-pair libraries were sequenced from this individual, same read length as the others: one with a 5kb-8kb insert and one with an 8kb-15kb insert.
Lane 3: 50/50 mix of individual A and individual B, 300-1000bp insert, 100bp paired-end reads.
In total, about 1.6 billion reads passed the Illumina filter. The two individuals refer to two separate DNA preps from two separate organisms of the same species, and these individuals are likely diploid. The estimated coverage is ~50x.
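For what it's worth, that ~50x figure checks out as a back-of-the-envelope calculation (a minimal sketch using the numbers above; the genome size is still just an estimate):

```python
# Rough sanity check of the coverage estimate: total bases / genome size.
reads = 1.6e9          # reads passing the Illumina filter
read_len = 100         # bp per read (paired-end, 2x100)
genome_size = 3.0e9    # bp, estimated (~human-sized)

total_bases = reads * read_len
coverage = total_bases / genome_size
print(f"~{total_bases/1e9:.0f} Gb of sequence, ~{coverage:.0f}x coverage")
# -> ~160 Gb of sequence, ~53x coverage, consistent with the ~50x estimate
```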
So, as one would probably imagine, I'm having trouble finding the best way to assemble this genome de novo such that the assembly (1) retains polymorphisms (ambiguities) at regions of heterozygosity and (2) allows for some degree of scaffolding across highly repetitive regions.
It's my understanding that I can't use ALLPATHS-LG (which would retain those ambiguities) because I don't have a short, overlapping fragment library. Also, we will likely be upgrading our server to at least 500 GB of RAM soon in order to handle this data.
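To spell out why I think I'm stuck: as I understand it, ALLPATHS-LG wants a fragment library whose reads overlap, i.e. insert size less than twice the read length (their docs suggest ~180bp inserts for 100bp reads). A quick sketch of that check against my libraries, using the insert sizes listed above:

```python
def pairs_overlap(insert_bp: int, read_len_bp: int = 100) -> bool:
    """Paired reads can overlap only if the insert is shorter than
    twice the read length (the ALLPATHS-LG fragment-library case)."""
    return insert_bp < 2 * read_len_bp

# My shortest insert is ~300bp, so none of my libraries qualify:
for insert in (300, 1000, 5000, 15000):
    print(insert, pairs_overlap(insert))   # prints False for all four
```

If I've got that requirement wrong, corrections welcome.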
Any thoughts on how to tackle this? Thanks in advance.