
  • srasdk
    replied
    1. The observation is general and not precise; it was made while working with the recent 1000 Genomes data. Sorry, the statement was too broad. Homo sapiens is a very inbred species, and any personal germline genome is very close to the reference. Somatic changes (cancer) are more unpredictable. If you are interested in biological diversity beyond humans, then you were right to confront the statement, but currently NGS data production is heavily tilted toward human samples.

    2. Before NGS, phylogenetic studies were submitted to GenBank as final, annotated sequences. Once the community becomes confident that it can extract most of the information from the raw data, there will be no point in archiving intermediate results.



  • Joann
    replied
    seq more information

    Originally posted by srasdk
    In resequencing, ~90% of raw sequences perfectly match the reference.
    1. Will you supply a bibliographic citation for this observation?

    Originally posted by srasdk
    When this cost becomes low enough to make archival economically senseless, archiving will stop.
    2. What will we do for comparative biological/phylogenetic studies?

    Thanks.



  • srasdk
    replied
    Many torrents (especially "Pirate Bay"-style) do prioritize distributors over downloaders. This is not a technical problem.
    The total cost of sequencing needs to include not only reagents but also sample collection and preparation, lab technician time, reproducibility of the sample, time to repeat the experiment, etc. When this cost becomes low enough to make archival economically senseless, archiving will stop. Compare it to blood work: if a physician doesn't trust a red cell count, it is much cheaper to redo the test than to figure out what went wrong.
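
    As a rough sketch of that break-even (every number below is a made-up assumption, not a real cost), the comparison looks something like this:

    # Hypothetical break-even: keep the raw data online, or just redo the experiment later?
    # All figures here are illustrative assumptions.

    def archival_cost(size_tb, usd_per_tb_year, years):
        """Cost of keeping a raw dataset online for a given number of years."""
        return size_tb * usd_per_tb_year * years

    def resequencing_cost(reagents, sample_prep, technician_time, repeat_risk=1.0):
        """Cost of redoing the experiment, including wet-lab overheads."""
        return (reagents + sample_prep + technician_time) * repeat_risk

    store = archival_cost(size_tb=0.5, usd_per_tb_year=100, years=10)              # ~$500
    redo = resequencing_cost(reagents=800, sample_prep=300, technician_time=400)   # ~$1500
    print("archive" if store < redo else "re-sequence")

    Once the "redo" side of that comparison drops below the "store" side, archiving the raw data no longer pays.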



  • aleferna
    replied
    It's just a matter of implementing a system that gives higher priority to people who help distribute the files; that should be enough to get a wide base of nodes. Regarding compression, it will only get you so far. Think of a world in which sequencing a chromatin mark or a TF costs $100: beware, big data is coming...



  • srasdk
    replied
    As previously mentioned, torrents are only as good as the number of sites willing to seed. The NGS crowd is not that big yet, but if you use seqanswers as a measure of how fast it grows, torrents may become a feasible option in the future. Add to that the universities, hospitals, and federal agencies other than NIH that are beginning to wake up to archiving NGS data; they do have enough resources to seed it.

    The FedEx solution, while a good joke, is impractical. That argument is second only to "use a fridge instead of a NetApp".

    And about compression: look at it not as a generic "gzip the FASTQ", but as a question of what information you really need to store. In resequencing, ~90% of raw sequences perfectly match the reference. In functional sequencing you frequently end up with highly repetitive reads. Do you really care about the pixel coordinates of every read? Do you really trust the machine quality scores, or is a recalibration approach more than sufficient? Etc.
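
    To make that concrete, here is a minimal sketch of reference-based encoding (purely illustrative, not the SRA's or CRAM's actual format): a read that matches the reference is stored as just a position and a length, and only a mismatching read carries its differences.

    # Minimal reference-based encoding sketch (illustrative only).
    def encode_read(read, ref, pos):
        """Return a compact record for a read aligned at `pos` on `ref`."""
        window = ref[pos:pos + len(read)]
        if read == window:
            return ("match", pos, len(read))                   # ~90% of reads in resequencing
        diffs = [(i, b) for i, (a, b) in enumerate(zip(window, read)) if a != b]
        return ("diff", pos, len(read), diffs)                 # store only the differing bases

    ref = "ACGTACGTACGT"
    print(encode_read("ACGT", ref, 0))   # ('match', 0, 4)
    print(encode_read("ACTT", ref, 0))   # ('diff', 0, 4, [(2, 'T')])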
    Last edited by srasdk; 09-27-2011, 08:08 PM. Reason: thumb-down icon was selected by accident



  • Richard Finney
    replied
    There's always the "petabyte sneakernet": http://www.codinghorror.com/blog/200...bandwidth.html

    Maybe the Netflix model of a few years ago is the answer: instead of DVDs in the mail, computer cases with 8 bays of 3 TB drives on "high bandwidth" FedEx trucks.



  • aleferna
    replied
    It's also easy to "encourage" people to help with the process: if you want to download from the main repository, you have to work for it. This will encourage universities to set up a server with a mirror. The NCBI can keep track of who is helping a lot and who is just leeching the system; leechers get put on low priority. You could even spur companies that charge for passwords to their mirrors...
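
    Something like a share-ratio policy would do it. A toy sketch (the tiers and thresholds are just my assumptions, not anything NCBI actually does):

    # Illustrative share-ratio priority: sites that upload more than they download get served first.
    def download_priority(uploaded_tb, downloaded_tb):
        ratio = uploaded_tb / max(downloaded_tb, 0.001)
        if ratio >= 1.0:
            return "high"      # active mirror: seeds more than it takes
        if ratio >= 0.1:
            return "normal"
        return "low"           # leecher: queued behind everyone else

    print(download_priority(uploaded_tb=40, downloaded_tb=10))   # high
    print(download_priority(uploaded_tb=0, downloaded_tb=25))    # low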



  • aleferna
    replied
    Well, I'm just not sure a centralized system is the way to go. You need a centralized indexer, but the storage should be decentralized. Storing 100 PB of data is easy and "cheap"; distributing 100 PB of data from a single location is extremely difficult. In particular, I'm guessing that data is not randomly accessed: data from the last six months is probably downloaded much more often than data that is ten years old, so if you can distribute the "hottest, most downloaded" data, you can relieve the main repository of most of the download load quite easily.
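
    For example, an age-based rule like the toy sketch below would decide what gets pushed out to mirrors and what stays only in the central archive (the six-month cutoff is just my guess):

    # Hypothetical age-based tiering, following the guess that recent datasets are the hot ones.
    from datetime import date, timedelta

    HOT_WINDOW = timedelta(days=183)   # ~6 months, an assumed cutoff

    def placement(release_date, today=None):
        today = today or date.today()
        if today - release_date <= HOT_WINDOW:
            return "replicate to mirrors"   # absorb the publication-time download peak
        return "central archive only"       # rarely accessed, serve from one site

    print(placement(date(2011, 6, 1), today=date(2011, 9, 1)))   # replicate to mirrors
    print(placement(date(2005, 1, 1), today=date(2011, 9, 1)))   # central archive only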



  • Richard Finney
    replied
    I still like the concept of NCBI being "pirate bay". RRRrrr, mateys! Given how locked down the really interesting data is ... might as well treat it like "illegal music".



  • aleferna
    replied
    Originally posted by laura
    Setting up torrents for projects as they are released: think about the IT overhead on that one.

    Torrents don't work for this sort of data because there are very few people willing and able to seed them, unlike legal software torrents or illegal film/TV/music torrents.

    Compression solutions are being actively looked into and are likely to be the best idea for long-term sustainability.
    Well, first, I think that can be made automatic quite easily, so I really don't see the IT overhead. Second, this has never been tested, so you don't know for sure that it doesn't work. And as for compression: sequencer output is growing exponentially, while last I checked improvements in compression efficiency are barely linear. I still don't think compression is the way to go.
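
    A toy projection shows the problem (all numbers assumed): even with a generous fixed compression ratio, the stored volume still doubles every year along with the raw output.

    raw_pb = 1.0            # assumed archive intake this year, in petabytes
    growth = 2.0            # assumed yearly doubling of sequencer output
    compression = 4.0       # assumed constant compression ratio

    for year in range(6):
        stored = raw_pb * growth**year / compression
        print(f"year {year}: raw {raw_pb * growth**year:6.1f} PB -> stored {stored:6.1f} PB")
    # Even at 4x compression, storage demand keeps doubling with the raw data.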



  • laura
    replied
    Setting up torrents for projects as they are released: think about the IT overhead on that one.

    Torrents don't work for this sort of data because there are very few people willing and able to seed them, unlike legal software torrents or illegal film/TV/music torrents.

    Compression solutions are being actively looked into and are likely to be the best idea for long-term sustainability.



  • aleferna
    replied
    Torrents don't work..

    Well, torrents don't work for SRA? I'm not so sure. Maybe somebody from SRA's IT could answer this, but I think that when a paper comes out, people want to download the data and look at it, creating a massive download peak. If that is the case, then a torrent would smooth out the overload: you could have a single server with a not-so-great download speed for random access to the archive, and a torrent community to spread the data around and remove the peaks. The question is whether you really do get peaks of downloads of the same datasets.

    You could also have mirrors deployed at different institutions. A 50 TB server is less than $50k; mount a torrent mirror at 100 universities and you've got yourself a nice redundant service (rough numbers below). I think concentrating all the data in one spot becomes exponentially expensive, whereas you can get a much more scalable distributed system for a fraction of the cost.

    MiSeq and Ion Torrent are coming next year, and we need a way to distribute this stuff. Look at an NGS paper and you will often find amazing statistical creativity in the handling of the data; we need the SRA to keep these "creative people" honest.
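
    The back-of-the-envelope for that mirror idea (the $50k and 50 TB figures are the rough numbers above; the 3x replication factor is an assumption):

    servers = 100             # universities hosting a mirror
    capacity_tb = 50          # per server
    cost_per_server = 50_000  # USD, "less than 50k" per server
    replication = 3           # assumed copies of each dataset for redundancy

    total_raw_tb = servers * capacity_tb
    usable_pb = total_raw_tb / replication / 1000
    print(f"total raw: {total_raw_tb} TB, usable at {replication}x replication: {usable_pb:.1f} PB")
    print(f"hardware cost: ${servers * cost_per_server:,}")
    # -> 5000 TB raw, ~1.7 PB usable, $5,000,000 in hardware spread across 100 sites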



  • Joann
    replied
    Interim reprieve for SRA

    NCBI is announcing that the SRA will stay open for new data until October. Will post info links ASAP.

    Last edited by Joann; 07-14-2011, 07:03 AM. Reason: link to news article dated 7/14/11



  • laura
    replied
    Torrents don't work when, most of the time, there are fewer than 50 people who want the data sets.

