I'm testing Velvet-Oases on a small data set of about a million reads (average length 200 bp). We'll soon be getting a much larger data set, but the memory requirement for even this small assembly is already crazy.
When I turn on read tracking so that Oases can run, a job with 15 GB of memory maxes out and gets killed at about 60-70% of nodes visited. How would this ever work with 10-100 million reads, when the largest-memory instance I can create has about 70 GB of memory?
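For reference, the pipeline is basically the standard velveth / velvetg / oases sequence with read tracking on (the k-mer size and input file name below are just placeholders for my actual settings):

    velveth assembly_dir 31 -short -fastq reads.fastq   # build the k-mer hash (k=31 is a placeholder)
    velvetg assembly_dir -read_trkg yes                  # build the graph; read tracking is required by Oases
    oases assembly_dir                                   # transcript assembly on the Velvet output

It's the velvetg step with -read_trkg yes that pushes memory over the limit.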