

Short Read Archive Canned




  • Short Read Archive Canned

    More details here:

    Where will you submit your data now?

  • #2

    Taking on the ad hoc centralization of such an important shared database resource at the forefront of an important developing scientific field, and then just dropping it from open-access sight, is the pits! NCBI, I am pointing fingers at you. At the very least, local institutional science libraries and infrastructure should have been primed to develop their own SRA capacities (and then staffed by their own employees, of course), as your SRA curation was clearly not founded upon reliable scientific funding commitments.


    • #3
      A great idea for a community undertaking...


      • #4
        Wow, didn't realize we're reposting an anonymous comment on a blog...


        • #5
          I did wonder about that myself, but then decided: who has the time to fake up an official NCBI communication?

          But anyhow, I've had independent confirmation from several sources that it is true.


          • #6
            According to the email it will be around for some months yet ...

            Not clear as yet what will happen to the already submitted data.


            • #7
              If you think about it rationally, there's no way you can have a centralised single resource for sequence data volumes which are doubling every year or so.


              • #8
                Why not? I can't easily find how much data is in the SRA as of now...

                It might be expensive to do from scratch, but it's the type of effort that, with the right pitch, someone like Google could be persuaded to host. For humanitarian reasons and the tax write-off.


                • #9
                  OK, it's *possible*. But it's going to be very expensive.

                  There's the networking costs / limits to think about as well as storage.

                  Amazon might be a good choice to step in! A great way of attracting people to their cloud computing services.

                  Of course there needs to be some degree of replication so we are not dependent on a single organisation.


                  • #10
                    Right. If all the data is IN Amazon, the worldwide bandwidth requirements are much lower if you're using Amazon's tools.


                    • #11
                      Another possibility to consider would be to share only certain variation files. But that depends on how variants are defined and characterized, and it's sort of confined to DNA topics. For expression-level data, perhaps a standardized format could come along as well.
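                      A tiny sketch of what a shared per-variant record could look like, borrowing the tab-delimited column layout of the (real) VCF format; the field values here are made up for illustration:

                      ```python
                      # Minimal VCF-style variant record (illustrative values only).
                      def variant_line(chrom, pos, ref, alt, qual):
                          """Format one variant as a tab-delimited, VCF-style record:
                          chrom, pos, id, ref allele, alt allele, quality."""
                          return "\t".join([chrom, str(pos), ".", ref, alt, str(qual)])

                      print(variant_line("chr1", 12345, "A", "G", 50))
                      ```

                      Expression-level data would need an analogous agreed-upon schema before it could be shared this way.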


                      • #12
                        PacBio should fold it into their mega New Biology thingy.


                        • #13
                          Originally posted by nickloman View Post
                          If you think about it rationally, there's no way you can have a centralised single resource for sequence data volumes which are doubling every year or so.
                          Doubling would be okay. That is close enough to Moore's law that investments of the same amount of money per year in storage would suffice. The problem is that next gen sequencing is expanding at hyper-Moore's law rates. See:


                          (Figure 1)

                          Around 2005-2006, you see an inflection point. Before that point, Moore's law roughly kept pace with sequence cost. But since then (at least at the Broad) the semi-log slope tips downward for sequencing. That means you need to exponentially increase your expenditures on sequence storage if you plan to spend the same amount on sequencing. Alternatively you can come up with specialized storage solutions, etc.

                          But, ultimately one of two things happens:

                          (1) Front-end computational cost de facto limits the drop in sequencing costs -- at which point sequencing costs lock at Moore's law rates.

                          (2) "Sequencing" reaches fruition -- reading DNA sequences costs no more than storing them. Congratulations your new storage medium is DNA.
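                          The storage-spend argument above can be sketched numerically; the doubling rates below are illustrative assumptions, not measured figures:

                          ```python
                          # Sketch: relative storage spend for a fixed annual sequencing
                          # budget, comparing Moore-rate vs hyper-Moore sequencing cost
                          # declines (illustrative rates, not measured data).
                          def storage_spend(years, seq_doublings_per_year, storage_doublings_per_year=1.0):
                              """Relative storage cost after `years`, normalized to year 0.

                              Data produced per dollar grows 2**seq_doublings_per_year each
                              year; storage bought per dollar grows 2**storage_doublings_per_year.
                              """
                              return 2.0 ** (years * (seq_doublings_per_year - storage_doublings_per_year))

                          print(storage_spend(5, 1.0))  # Moore-rate sequencing: 1.0, spend stays flat
                          print(storage_spend(5, 2.0))  # hyper-Moore: 32.0, spend doubles every year
                          ```

                          When the two doubling rates match, the exponents cancel and spend is flat; any gap compounds exponentially.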

                          Last edited by pmiguel; 02-14-2011, 11:32 AM. Reason: typo


                          • #14
                            Big Bams and Bit Torrents

                             Perhaps using a subset of the BitTorrent protocol might be an answer. I guess there would have to be a "you must have served up half as much as you've downloaded" rule or something to prevent getting but not giving.
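                             That ratio rule might look something like this hypothetical policy check (not part of any real BitTorrent client):

                             ```python
                             # Sketch of the "serve at least half of what you download" rule
                             # (hypothetical policy, invented names for illustration).
                             def may_download(uploaded_bytes, downloaded_bytes):
                                 """Allow further downloading only if the peer has uploaded
                                 at least half of what it has already pulled down."""
                                 return uploaded_bytes >= downloaded_bytes / 2

                             print(may_download(600, 1000))  # True: 600 >= 500
                             print(may_download(100, 1000))  # False: 100 < 500
                             ```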

                            Security's a beach.
                            Last edited by Richard Finney; 02-14-2011, 12:26 PM.


                            • #15
                              It's kind of funny how the Science articles about the data deluge basically precipitated this announcement. There's been a lot of blog-o-sphere buzz about the data deluge, and more than a couple of posts mention the SRA and its attempt to handle it in passing.

                              So far this is a rumor. It happens to be a very believable rumor given the funding issue and ever-increasing need for storage, but let's not say it's canned before we're sure.

                              I think while the intent of SRA was good, the execution was not. Anyone who's dealt with it can tell you how much extra work getting data into their formats and uploading it was, not to mention the effort involved in retrieving data from it.

                              It's also just not a very sustainable thing for the government to sponsor this way. Transferring giant data sets through the net is time and bandwidth consuming, not to mention the upkeep of an ever-expanding storage space.

                              All that said, I don't like the whole "cloud" solution very much either. The major reason is the lack of control over privacy. At the very least, SRA did a good job protecting privacy (although their mechanism for doing so was quite clunky). Storing personal genetic data on a computer system owned by a third party simply does not sit well with me. It's kind of a funny idea to be "sharing" personal genetic data anyway, but at the very least, attempts to protect privacy need to be made and it's hard to envision how that's accomplished when the data itself is on a third party computer.

                              Perhaps a Biotorrent type solution is the best way to share this type of data. Something that can be reasonably secure while not consuming massive bandwidth on both ends.

                              I'm also not convinced about simply sharing variants. While it's true that it would save a lot of storage space, variants are not inherently comparable. Sequencing platform plays a role, but even more significant are the improvements in alignment and variant detection over the past few years. Realign and re-call variants on the Watson genome and I bet you'll end up with vastly different numbers from what were reported, for example. But if you just have the variants, you can't realign and re-call, and therefore you can't really use that data for a true comparison.

                              I proposed in a recent blog article that someone should try to create a project where all the world's public sequence data is kept continually updated to modern standards. Would it be expensive? You betcha. But it would also be a very powerful resource while also avoiding the whole shoe-horning problem that SRA ran into with its formatting issues.
                              Mendelian Disorder: A blogshare of random useful information for general public consumption. [Blog]
                              Breakway: A Program to Identify Structural Variations in Genomic Data [Website] [Forum Post]
                              Projects: U87MG whole genome sequence [Website] [Paper]

