  • Simone78
    replied
    Originally posted by luc View Post
    Hello Simone,
    thanks for pointing me to your new transposase paper and for the interesting details in your post. Obviously the transposase will allow library prep to start from even lower RNA amounts and thus should be the method of choice for ultra-low-input applications. Nevertheless, the biases of the enzyme (e.g. http://omicfrontiers.com/2013/07/04/...biased-genome/ [please note the "PCR-free" condition in this comparison, which is not realistic for low-input RNA-seq]) and the usually wide insert size range of the libraries lead me to avoid it whenever possible. Thus, refinements to the template-switching protocols remain of great interest to me.
    Thanks for the link, very interesting read! Unfortunately, the bias introduced by the transposase is not the only one. We used KAPA HiFi DNA Pol in both the Smart-seq2 and the transposase papers to reduce PCR bias (based on what was reported by Quail et al., Nat Methods 2012), but when working with single cells we need two rounds of PCR, which means the bias just gets bigger.
    One thing I wanted to point out is that the post you linked says, "The flaw in our analysis (of course) is that we should have used the Illumina Nextera XT kit for these samples as these are optimized for smaller genomes to minimise excessive numbers of smaller fragments". I believe, as I have already said elsewhere on SEQanswers, that the enzyme is exactly the same in the standard and XT kits (as a joke I always say that Illumina just bought Epicentre, repackaged their enzyme and started selling it for a few thousand USD, without any effort to improve or change it). What differs is just the buffer, as we also showed in our paper. Therefore, you need a two-buffer system to work with inputs spanning from sub-pg to tens of ng of DNA. Besides, varying the amount of PEG and the amount of enzyme enables better control of fragment size (thus avoiding over-fragmentation). If you believe in coincidences (I don't), the "long-fragments" and "short-fragments" buffers that were used in the old Epicentre kit are the same ones now used in Nextera and Nextera XT, respectively.



  • luc
    replied
    Hello Simone,
    thanks for pointing me to your new transposase paper and for the interesting details in your post. Obviously the transposase will allow library prep to start from even lower RNA amounts and thus should be the method of choice for ultra-low-input applications. Nevertheless, the biases of the enzyme (e.g. http://omicfrontiers.com/2013/07/04/...biased-genome/ [please note the "PCR-free" condition in this comparison, which is not realistic for low-input RNA-seq]) and the usually wide insert size range of the libraries lead me to avoid it whenever possible. Thus, refinements to the template-switching protocols remain of great interest to me.



  • Simone78
    replied
    Even though there are some original and interesting ideas, I want to make several comments on things that are either not convincing or plainly wrong. Sorry, there is no particular order; I just wrote the points down while reading the paper.

    1 - You mention that SuperScript II, SMARTScribe and SMART RT are the only enzymes giving detectable amounts of cDNA. Well, SuperScript II and SMARTScribe are probably the same enzyme, as we already showed in the Smart-seq2 paper (Picelli et al., Nat Methods 2013, Suppl Info).
    2 - When using SuperScript III, 4 more pre-amplification cycles were needed. This too is well documented in the literature (and also described in the Smart-seq2 paper). The mutations in SuperScript II and III are different, and III has negligible strand-switch activity. Nothing new here.
    3 - The method is not of any use for short circulating RNA, given the huge amount of unmappable reads reported in the "Results" section.
    4 - Interesting observation about the bias in template-switching efficiency. You speculate that a TSO with a different or modified 3' terminus could solve the problem. We already tested LNA-based TSOs with "N" at 1, 2 or all 3 of the 3'-terminal positions (again, see Suppl Info). It turned out that 3 LNA-G bases are the best solution. If you don't use LNA bases, then 3 rGs are best. Clontech had obviously already thought about it!
    5 - Throughout the protocol it is repeatedly stated that one should cut the agarose and extract the samples ready for sequencing. This is a terribly (terribly!) troublesome, time-consuming, inefficient and non-scalable way of doing library prep. If you really want to do size selection, why not use E-Gels? Or a Pippin Prep, if available?
    6 - Even though this method claims that only picograms of material are needed, you need hundreds of ng to a few µg of total RNA to start with, due to losses in extraction, column purification, fragmentation, and further purification… And yes, it can sequence all RNA species, but if I simply did a Ribo-Zero treatment (or a "home-brewed" version of it; there are some around) on as little as 10 ng, followed by SMARTer (or Smart-seq2), I would achieve the same result with the same or less effort!
    7 - You repeatedly compare your method to others on the market (for single-cell sequencing), saying that they are expensive, that they rely on an inefficient adaptor-ligation step, and so on (page 824). There are several inaccuracies in this statement. First, you can't compare your method to those because yours is not designed for single cells. Second, it is absolutely not true that all of those methods rely on (classic, ligase-based) adaptor ligation. Nextera (Illumina) and our recent method (Picelli et al., Genome Res 2014) don't, and they are efficient with ng (Nextera) or sub-pg (ours, but also Adey et al., 2010) amounts of input DNA. And in the "Discussion" at the end of page 825 you refer to Ramsköld et al. (Nat Biotech 2012, ref #30), which is NOT based on any ligase but on the first Nextera kit from Illumina! Besides, the cost of Smart-seq2 plus home-made Tn5 is 10-15 euros, comparable to the cost of your library prep.
    8 - In the discussion on page 825, a single sentence contains several inaccuracies (a record!). It says that "(Nextera) requires at least 50 ng of DNA and, apparently, is restricted to the long DNA molecules. The full capacity of the tagmentation technique for DNA library prep is yet to be tested and compared with other methods". First: the Nextera XT DNA kit is specifically designed to start from 1 ng of input DNA (the old Nextera kit used 50 ng). We have also shown in Picelli et al., Genome Res 2014 that you need as little as 0.1 pg of DNA, but you should have just read Adey et al. 2010. Second: tagmentation is NOT restricted to long molecules. In fact, Adey and Shendure (the original Genome Biol 2010 paper where they describe the method) report that molecules as short as ca. 35 bp can be tagmented. Third, in the same paper they also compare the method to other standard fragmentation methods, showing that Tn5 has only a weak preference for cutting DNA at specific sites. Additionally, there are other papers on bisulfite-converted DNA prepped with Tn5 (Adey et al., Genome Res 2012)… and even one from your own institution (!!!), Wang et al. (Nat Protoc 2013). So the Tn5-based approach is a viable option for exactly everything you claim it is not good for.
    9 - You also state (claim 2 at the end of page 824) that there are no reports of a "strand-specific mRNA transcriptome from 1 ng of polyA enriched RNA", which is obviously inaccurate. Clontech has a protocol for FFPE samples that couples Ribo-Zero to a stranded SMARTer protocol and starts from as little as 10 ng of degraded TOTAL RNA, with no column purification, fragmentation or other preparation steps needed. And 10 ng of total RNA is in the same order of magnitude as the 1 ng of polyA RNA you use in the paper.
    10 - Regarding circulating DNA in the "Discussion" section: it is stated that the ThruPLEX kit (Rubicon Genomics) is not capable of generating libraries from lower quantities of DNA. In principle this is correct, but you forgot to mention that PicoPLEX (same company) allows the sequencing of even single CTCs.
    11 - You also claim that your method is better for sequencing circulating DNA than TAm-Seq because you sequence the whole genome and not only selected loci. However, the story is not that simple; I quote from the original TAm-Seq paper (Forshew et al., Sci Transl Med, 2012): "This generates a large amount of data on genomic regions that do not, at present, inform clinical decisions. Moreover, the depth of coverage for clinically significant loci is not sufficient to detect changes that occur at low frequency (<5%). Such approaches have recently been complemented by methods for examination of individual amplicons at great depth". If simply sequencing the whole genome were that informative and cheap, don't you think it would already have been done? (A quick back-of-the-envelope calculation after this list illustrates the coverage point.)
    12 - A previous post here says that "Under conditions of the described protocol (e.g. no Mn2+ ions), the terminal transferase activity of the RT enzyme is very limited. Most likely, it only adds 1 nucleotide to the 3'-terminus of the first cDNA strand before template switch occurs". Sorry again, but we have reported (Nat Methods 2013, Suppl Info) that manganese chloride is NOT necessary for the template-switch reaction to occur. Besides, in the original SMARTer paper (Zhu et al., BioTechniques 2001) manganese chloride was not even mentioned.
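    A quick back-of-the-envelope sketch (plain Python) of the coverage argument quoted in point 11; the depths used here are illustrative and not taken from either paper:

    # At typical whole-genome depths, a variant present at 5% allele
    # fraction is supported by very few reads, which is why deep
    # amplicon sequencing of selected loci is used instead.
    for depth in (30, 100, 1000):
        expected_variant_reads = 0.05 * depth
        print(f"{depth}x coverage -> ~{expected_variant_reads:.1f} reads carrying a 5% variant")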



  • Asaf
    replied
    Some insights from our test NextSeq run.
    We took some RNA (8 RNA-seq libraries) and prepared the libraries after Ribo-Zero treatment, fragmentation and size selection. The process was indeed shorter (1.5 days) and required less RNA.
    The NextSeq results were disappointing: the multiplexing barcodes (i7) weren't read well for some reason, and most of the barcodes were just poly-A. In addition, most of the reads contained poly-A that at some point in the read turned into poly-G. I think these issues are related to the fact that the NextSeq uses a two-color system: A is both colors and G is neither, and with some sort of normalization the software has to decide which signal is too low to be an A and call it a G instead. Another issue that might arise is the bias towards G at the first base of the read; this means the NextSeq will have difficulty identifying clusters (again: no color, and little variability), which might lead to a small number of clusters (we got 100M reads instead of 400M; maybe it's this, but maybe it's an error in quantification).
    We are planning on doing another round. This time we will put the barcodes on the adapter before the GGG, use barcodes of different lengths so that the whole flowcell won't read a G at the same cycle, and use standard Illumina primers.
    Any more suggestions? Has anyone else tried this method with the NextSeq?
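    For screening out the artifact reads described above, a minimal sketch in plain Python (the file names and the 80% G threshold are arbitrary placeholders, not part of any kit or protocol):

    import gzip

    def looks_like_poly_g(seq, tail_len=20, min_frac=0.8):
        # Flag a read whose last `tail_len` bases are mostly G
        # (no signal in the two-color chemistry is called as G).
        tail = seq[-tail_len:]
        return len(tail) > 0 and tail.count("G") / len(tail) >= min_frac

    kept = dropped = 0
    with gzip.open("sample_R1.fastq.gz", "rt") as fin, \
            open("sample_R1.filtered.fastq", "w") as fout:
        while True:
            header = fin.readline()
            if not header:
                break
            seq = fin.readline().rstrip("\n")
            plus = fin.readline()
            qual = fin.readline()
            if looks_like_poly_g(seq):
                dropped += 1
            else:
                fout.write(f"{header}{seq}\n{plus}{qual}")
                kept += 1

    print(f"kept {kept} reads, dropped {dropped} suspected poly-G artifacts")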



  • HeidelbergScience
    replied
    An important update has been added to the protocol.
    Incubating with the TSO for 2 hours (instead of 15 min) increases the library yield 5-10 fold.



  • HeidelbergScience
    replied
    Originally posted by sequencingfan View Post
    I also have a question: what about non-templated nucleotides added by the reverse transcriptase? Is there a possibility that the RT adds not 3 Cs but, for example, CCG? Is there a possibility to use a degenerate TSO?
    Under conditions of the described protocol (e.g. no Mn2+ ions), the terminal transferase activity of the RT enzyme is very limited. Most likely, it only adds 1 nucleotide to the 3'-terminus of the first cDNA strand before template switch occurs. Enhancing the terminal transferase activity of the RT (e.g. by adding Mn2+) produces mostly "empty" cDNA libraries, because the RT starts to tail the poly(dT) primer as well.

    We are planning to test degenerate TSOs, but they are likely to be inefficient, since the RT adds predominantly Cs and, with much lower probability, Gs, As and Ts.



  • Asaf
    replied
    Originally posted by HeidelbergScience View Post
    The bias towards RNA templates starting with G is likely due to the fact that template switching is facilitated when the RT product has a terminal C. However, DNA templates have no 5'-end bias, and therefore there could be other explanations (e.g. Mg2+ fragmentation and RNase digestion producing more 5'-G… RNA fragments and fewer 5'-A… fragments).

    The number of Gs is always 3 (evident from Sanger sequencing of the libraries from the synthetic control RNA in the publication), and it is dictated by the 3x rG at the 3' end of the TSO. When the TSO has 4x rG, the products have 4 Gs (we did not put that in the paper, though). Therefore, the protocol gives exact information about where the RNA template starts.
    Thanks! We'll use this protocol for our next RNA library construction.



  • HeidelbergScience
    replied
    Originally posted by Asaf View Post
    This is a very nice protocol, thanks for sharing.
    I have two questions about the 5' ends of RNA libraries:
    1. Why is there a bias at the 5' end towards G (and not A)?
    2. Are there always 3 Gs at the 5' end, or might there be more, which would add additional Gs to the 5' end of the read? (This could also answer Q1.) If I need to know exactly where the read (or RNA fragment) begins, is this protocol precise enough?
    Thanks
    The bias towards RNA templates starting with G is likely due to the fact that template switching is facilitated when the RT product has a terminal C. However, DNA templates have no 5'-end bias, and therefore there could be other explanations (e.g. Mg2+ fragmentation and RNase digestion producing more 5'-G… RNA fragments and fewer 5'-A… fragments).

    The number of Gs is always 3 (evident from Sanger sequencing of the libraries from the synthetic control RNA in the publication), and it is dictated by the 3x rG at the 3' end of the TSO. When the TSO has 4x rG, the products have 4 Gs (we did not put that in the paper, though). Therefore, the protocol gives exact information about where the RNA template starts.
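    To make that concrete, a minimal sketch (plain Python; the function name and example read are illustrative, not from the paper) of clipping the fixed number of template-switch Gs so that the first retained base is the exact 5' end of the RNA fragment, assuming adaptor sequences have already been removed:

    def clip_tso_gs(seq, qual, n_g=3):
        # With a 3x rG TSO the first n_g bases of the read are
        # non-templated Gs added during template switching; removing
        # them leaves the true 5' end of the RNA fragment at position 0.
        # If the expected Gs are absent, return the read unchanged
        # rather than mis-trimming it.
        if seq[:n_g] == "G" * n_g:
            return seq[n_g:], qual[n_g:]
        return seq, qual

    # Example: the read starts with the three TSO-derived Gs.
    seq, qual = clip_tso_gs("GGGACGTAGCTA", "IIIIIIIIIIII")
    assert seq == "ACGTAGCTA" and qual == "IIIIIIIII"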



  • sequencingfan
    replied
    I also have a question: what about non-templated nucleotides added by the reverse transcriptase? Is there a possibility that the RT adds not 3 Cs but, for example, CCG? Is there a possibility to use a degenerate TSO?



  • Asaf
    replied
    This is a very nice protocol, thanks for sharing.
    I have two questions about the 5' ends of RNA libraries:
    1. Why is there a bias at the 5' end towards G (and not A)?
    2. Are there always 3 Gs at the 5' end, or might there be more, which would add additional Gs to the 5' end of the read? (This could also answer Q1.) If I need to know exactly where the read (or RNA fragment) begins, is this protocol precise enough?
    Thanks



  • HeidelbergScience
    replied
    Originally posted by sequencingfan View Post
    In the article you linked in the first post, there is information that the adaptor sequence is at the 3' end of the dT oligo. Is that right?
    The adaptor sequence is actually at the 5' end of the dT oligo. However, after RT and PCR it will appear at the 3' end of the "sense" strand in the final DNA library, and that's why we called it the "3'-end adaptor" sequence.
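    To make the orientation concrete, a toy sketch (plain Python; the adaptor and transcript sequences are made-up placeholders, not the real oligos) showing why an adaptor at the 5' end of the oligo(dT) primer ends up, as its reverse complement, at the 3' end of the sense strand of the final library:

    COMP = str.maketrans("ACGT", "TGCA")

    def revcomp(seq):
        return seq.translate(COMP)[::-1]

    adaptor = "ACACGACGCT"            # placeholder "3'-end adaptor"
    body = "ATGGCCATTGTAATGGGCCGC"    # placeholder transcript body (DNA alphabet)
    rt_primer = adaptor + "T" * 20    # 5'-adaptor ... oligo(dT)-3'

    # First-strand cDNA: the primer anneals to the poly(A) tail and the RT
    # extends it with the reverse complement of the transcript body.
    first_strand = rt_primer + revcomp(body)

    # The sense strand of the final library is the reverse complement of
    # the first strand: body + poly(A) + reverse complement of the adaptor,
    # i.e. the adaptor sequence sits at the 3' end of the sense strand.
    sense_strand = revcomp(first_strand)
    assert sense_strand == body + "A" * 20 + revcomp(adaptor)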



  • sequencingfan
    replied
    In the article you linked in the first post, there is information that the adaptor sequence is at the 3' end of the dT oligo. Is that right?



  • HeidelbergScience
    replied
    Originally posted by liron View Post
    Hi,

    I would like to try the protocol for library prep from total RNA, which requires RNA fragmentation. What would be the best way to clean up the RNA after fragmentation, before the polyadenylation step?
    We cleaned up the RNA with the miRNeasy kit (Qiagen), adding 20 µg of glycogen to enhance column adsorption of low-concentration RNA. Briefly:
    fragmented RNA + 700 µl QIAzol + 1 µl glycogen (20 µg/µl) → mix + 120 µl chloroform → mix → centrifuge 15 min at 16,000 g → upper phase + 1.5 volumes EtOH → column purification → elution in 30 µl of water.
    However, simple RNA clean-up kits should suffice, and as long as your RNA is > 100 ng/µl there is no need to add glycogen.



  • liron
    replied
    RNA cleanup after fragmentation

    Hi,

    I would like to try the protocol for library prep from total RNA, which requires RNA fragmentation. What would be the best way to clean up the RNA after fragmentation, before the polyadenylation step?



  • EvilTwin
    replied
    Hi sequencingfan,

    Thanks for the information. Is there a special protocol for preparing libraries from cffDNA, and does this protocol include shearing the DNA after extraction, or is cffDNA ready to use as it is (200 bp fragments)?
    CATS can be used directly, without additional preparation, if your DNA is fragmented to ~150-200 bp after extraction, as is usual with plasma DNA.
    It also works with shorter fragments, in case the cffDNA is more fragmented.

