Hello all,
Most bioinformatics researchers get stuck on the question of how to buy a computer with enough RAM to process their NGS data, because RAM is very expensive. It is not easy to get approval from managers to buy a $100K computer when they think everything can be done with a $1K laptop and Microsoft Excel.
Internet companies like Google developed algorithms to process terabytes and petabytes of data very rapidly and return search results to users. They use clusters of commodity computers with inexpensive disks (hard drives are cheap) and an approach called MapReduce. An open-source implementation of MapReduce is available for free as part of the Hadoop framework distributed by the Apache Software Foundation.
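To give a feel for how MapReduce works, here is a minimal sketch of a mapper and reducer written for Hadoop Streaming, which lets you use plain scripts that read from stdin and write tab-separated key/value pairs to stdout. This is only an illustration of the pattern (counting k-mers in reads), not anything taken from Contrail itself; the file names, the k-mer length, and the assumption of one read per input line are all my own choices for the example.

```python
#!/usr/bin/env python
# mapper.py -- illustrative Hadoop Streaming mapper (not part of Contrail).
# Assumes the input has one DNA read per line; emits each k-mer with count 1.
import sys

K = 21  # k-mer length, chosen arbitrarily for this example

for line in sys.stdin:
    read = line.strip().upper()
    for i in range(len(read) - K + 1):
        print(read[i:i + K] + "\t1")


#!/usr/bin/env python
# reducer.py -- sums the counts for each k-mer. Hadoop Streaming delivers
# lines sorted by key, so all occurrences of a k-mer arrive consecutively.
import sys

current_kmer, current_count = None, 0

for line in sys.stdin:
    kmer, count = line.rstrip("\n").split("\t")
    if kmer == current_kmer:
        current_count += int(count)
    else:
        if current_kmer is not None:
            print(current_kmer + "\t" + str(current_count))
        current_kmer, current_count = kmer, int(count)

if current_kmer is not None:
    print(current_kmer + "\t" + str(current_count))
```

With Hadoop installed, a job like this is typically launched with the streaming jar, roughly along the lines of `hadoop jar hadoop-streaming.jar -input reads.txt -output kmer_counts -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py` (paths and file names here are placeholders). Hadoop splits the input across the cluster, runs the mapper on each split, sorts and groups the intermediate pairs by key, and feeds them to the reducers, which is the same division of labor Contrail relies on at a much larger scale.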
A few months back, I came across a genome assembly program called 'Contrail' that uses Hadoop to assemble large quantities of NGS data, and it is scalable. When I talk to bioinformaticians about trying Hadoop instead of buying a large, expensive high-RAM machine, I usually hit a hard wall, because words like Hadoop, MapReduce, etc. are foreign to them. So, today I wrote a post explaining how to set up and run Contrail on your own machine using Hadoop. It is written in such a way that even if you have never used Hadoop before, you can mechanically execute the steps and assemble the reads in the test library in a short time on your own Windows or Unix box. I am hoping that once researchers start to feel that the Hadoop approach is easy and scalable for large data sets, they will develop their own programs and the whole community will benefit.
This post discusses how to use the Contrail assembler -
This post discusses how to set up and run Hadoop for a simple sequence analysis example -
Please note that I am not associated with the researchers who wrote Contrail, and I have never spoken to or met them. It is simply the only Hadoop-based de Bruijn assembler I found, so I decided to try it out.