  • Bioinformatics cores in a world of cloud computing and genome factories

    I'm a director of a new bioinformatics core and I just wanted to start a general discussion here to hear others' thoughts.

    What's the role of a bioinformatics core in a world of cloud computing and ultra-economy-of-scale genome factories (e.g. BGI, Complete Genomics)?

    Cloud Computing

    A recent article in GenomeWeb discusses Ion Torrent's (Life Technologies) plans to enter the commercial bioinformatics arena by launching a cloud-based variant analysis software package. (The article is paywalled, but free if you register with a .edu address.)

    Montana-based Golden Helix and Durham-based Expression Analysis have teamed up to offer a service-based cloud computing solution for next-generation sequencing bioinformatics analysis. (BusinessWire, GenomeWeb, Golden Helix webcast).

    There are surely others competing in the commercial bioinformatics arena that I'm leaving out.

    Genome factories

    I had meetings with reps from both BGI and Complete Genomics last week.

    CG offers human whole-genome sequencing bundled with a very sophisticated bioinformatics pipeline that returns annotated variants, mapped reads with local de novo assembly around variants, summary statistics, and summaries of the evidence supporting called variants, indels, etc. Their price, and the value added by the bundled bioinformatics, will be difficult to beat.

    The BGI rep spoke of the cost reduction and throughput they achieve with their massive economy-of-scale operation. Their sequencing services (DNA, RNA, epigenetics) all include bundled bioinformatics services.

    What does a bioinformatics core need to do to thrive?

    What does a core need to do to thrive when competing with vendors that bundle bioinformatics with cheap sequencing? It may be too early to tell whether cloud computing services like Ion Reporter or GoldenHelix/ExpressionAnalysis will add significant value to a sequencing service provider's offering, but they may also become a significant challenge in the near future.

  • #2
    I don't see these developments as anything different to technologies or analyses which have matured in the past. New technologies start off unreliable, and everyone experiments with them, trying to figure out how to handle them. This stage requires a lot of input from informatics people, and often everyone is running their own hardware and handling the data and processing in-house.

    Somewhere down the line things mature and the analysis becomes standardised. As the technology matures there's less risk in passing your samples out to a service provider, and the small in-house facilities tend to diminish. This happened in the past with Sanger sequencing and microarrays. Many array services included bioinformatics analysis as part of the package, but the core facilities didn't all shut down.

    From the core bioinformatics facility point of view I don't see this as a problem or a challenge to us. If other people want to take on some of the uncontentious analysis for us (sequencing, base calling and mapping, for example), then that just means we can devote more time to the experiment-specific analysis which inevitably follows. The close connection of having an on-site bioinformatician working on your project is a big advantage, and when you're developing specific analyses for an experiment the detachment of using a commercial service can be a big hindrance.

    Also, whilst some technologies are maturing there's always something new coming in to replace them. You could fill a warehouse with HiSeqs and start a commercial sequencing service (which is pretty much what BGI did), but then you get things like PacBio and possibly Oxford Nanopore, which will start the whole cycle off again. Even with existing platforms there's always some new type of experiment coming along for which there isn't a pre-built pipeline (does anyone offer a commercial Hi-C analysis pipeline yet?), which will give the in-house informatics groups plenty to work on.

    The point of a core facility is to be responsive to the current needs of its users. To justify their existence they need to stay one step ahead of the big commercial offerings which, by their nature, take some time to produce robust pipelines they can run mostly unattended. Since science mostly happens on the leading edge of any technology, core facilities should always be able to find their place.



    • #3

      Things change rapidly. Hang on. Don't miss the beacon of light.



      • #4
        I actually don't think most bioinformatics is that outsourceable. Sure, some low-level stuff is. If you envision a core facility as a place that hands out lists of differentially expressed genes, peaks, QTLs, etc. by simply running them through well-documented data analysis pipelines, then yes, core facilities like this will become useless. On the other hand, if a core facility stays at the cutting edge and has people who can communicate well with wet-lab biologists and understand both the wet lab and the biological questions, then it will only become more useful as time passes. It's science. If you keep doing what you are doing today, tomorrow you'll be a relic of the past.
        --------------
        Ethan



        • #5
          In reply to simonandrews (#2):
          This is an interesting perspective. At Golden Helix, we differentiate our analytic services as being significantly faster, more collaborative, and more dedicated to valid scientific results than other bioinformatics core services. If we don't show this in both our dedication to the process and in the results, we go out of business and everyone goes home.

          From my point of view, a successful bioinformatics core needs the capacity and expertise both to handle advanced analysis projects and to advise scientists on how to take the reins of their own projects where appropriate. Standardized analysis should be passed on to those most familiar with the work, because they are likely to glean greater insight from the process than your everyday bioinformatician. This could be done on a platform that gives scientists a simple way to perform the analysis and lets the core members audit their work if things start to wander off the beaten path (which, as we know, happens frequently). Non-standard analysis should be owned by a statistical or bioinformatics consulting group, and projects should be approached with an effort to understand the biology as much as anything else, given that the statistical/bioinformatics expertise is already in place within the consulting group. The more interactive the collaboration, the richer the experience and results.

          Lastly, this should be done quickly. It takes a very keen industrial engineer to properly design and manage the process of analysis services. And I can tell you, mapping a load distribution that incorporates end users onto a value-stream map is a very difficult thing, but the results are inspiring.

