  • Simone78
    replied
    Originally posted by HeidelbergScience:

    Also, in the [Picelli et al., 2013] main text we found one solid statement that “exchanging only a single guanylate for a locked nucleic acid (LNA) guanylate at the TSO 3′ end (rGrG+G) led to a twofold increase in cDNA yield relative to that obtained with the SMARTer IIA oligo”. Could you tell us how big the drop in library yield was when you used an N base at the end, as you stated above?
    I can't remember this detail off the top of my head, but I would say just slightly worse (a noticeable difference, but not huge). All comparisons (Smart-seq2 vs the SMARTer kit) can be found in the Suppl. material of the Nat Methods paper.
    Suppl. Table 4 lists all the comparisons, with details on how they differ from each other.
    Suppl. Table 3 has the details of all the single cells analyzed with each protocol (sheet A) as well as the statistical analysis of the different variables (sheet B), where we looked not only at the TSO but also at the PCR enzyme, additives, etc.
    Suppl. Figures 1 and 2 give you an idea of the difference in the sensitivity and variability of the different protocols.
    In short, the rGrG+G oligo is better than rGrG+N, which is in turn better than the TSO from the kit.



  • Simone78
    replied
    Originally posted by HeidelbergScience:

    Secondly, in the main text and supplementary material of [Picelli et al., Nat Methods 2013] we did not find any written comments on the different TS capacities of various RT enzymes. Although it is indeed possible to infer from the supplementary table that both SmartScribe and SSRTII secure a much higher yield of the final DNA library compared to SSRTIII, there is no information about the other three enzymes tested in our paper.
    We did not investigate all the enzymes systematically, although in "Suppl. Table 1" we reported trials with SSRTII, SSRTIII, SmartScribe, Revertaid H-, Revertaid Premium and Maxima H-. Both from the table (difficult to find, I agree!) and from other papers it is clear that the TS activity, measured by looking at the cDNA yield, must be negligible. A recent example where SSRT II was compared to III can be found in a recent paper from the Linnarsson group (Zajac P et al., PLoS One 2013). The reason for all the discrepancies in the literature is, however, still a mystery to me...



  • HeidelbergScience
    replied
    Originally posted by Simone78:
    12 - It says in a previous post that “Under conditions of the described protocol (e.g. no Mn2+ ions), the terminal transferase activity of the RT enzyme is very limited. Most likely, it only adds 1 nucleotide to the 3'-terminus of the first cDNA strand before template switch occurs”. Sorry again, but we have reported (Nat Meth 2013, Suppl Info) that manganese chloride is NOT necessary for the template switch reaction to occur. Besides, even in the original SMARTer paper (Zhu et al., Biotechniques 2001) manganese chloride was not even mentioned.
    Interestingly, our data, on the contrary, indicate that Mn2+ enhances the template switching event, presumably by enhancing the terminal dC-transferase activity of MMLV RT. However, at the same time it also dramatically increased the occurrence of “empty” DNA libraries, because the RT reverse primer was also highly polyC-tailed. Indeed, it is a well-known fact that Mn2+ enhances the terminal dC-transferase activity of MMLV RT [e.g. Schmidt WM and Mueller MW // Nucl Acids Res 1999 showed that after addition of Mn2+ the number of terminally added nucleotides increased from 1 to 3+].

    Unlike in CATS, in SMART-seq the TS at the end of the RNA templates may not be a rate-limiting step, since during SMART most template switching events are likely to occur before the MMLV RT reaches the end of the mRNA (and starts to add dCs). Although this is only speculation, it could be the reason why you did not observe the increase after Mn2+ addition.



  • HeidelbergScience
    replied
    Originally posted by Simone78:
    11 - You also claim that your method is better for sequencing circulating DNA compared to Tam-Seq because you sequence the whole genome and not only selected loci. However, the story is not that simple: I cite from the original Tam-Seq paper (Forshew et al., Sci Transl Med, 2012): “This generates a large amount of data on genomic regions that do not, at present, inform clinical decisions. Moreover, the depth of coverage for clinically significant loci is not sufficient to detect changes that occur at low frequency (<5%). Such approaches have recently been complemented by methods for examination of individual amplicons at great depth”. If simply sequencing the whole genome were that informative and cheap, don't you think it would have already been done?
    That most regions in the human genome are not very informative in terms of clinically relevant loci is a well-known fact; we appreciate the useful citation supplied. Moreover, your statement is absolutely true when performing a whole-genome sequencing experiment to study well-known disease-associated loci in well-characterized material types, for example the most commonly mutated regions in certain tumors, etc.

    TAM-Seq works with low amounts of input material due to the efficiency of the PCR reaction, but at the price that one can only obtain information about those specific loci. While this is very successful and desirable in certain types of experiments/studies, it might not be suitable for others. For example, it carries a considerable risk of missing important information (panel bias) if an informative region was not previously recognized as such and is not included as a target region, or if the amplicons do not include all of the relevant nucleotides due to primer design restrictions.

    Since circulating DNA properties (origin, fragment sizes, etc.) are quite different from those of the cellular DNA that is usually studied, one cannot yet know whether the informative regions are identical. For example, the mere presence of a fragment from an otherwise “uninteresting” region of the genome could prove to be highly informative.

    With its very low input requirements, CATS enables researchers to study the whole genome if they wish to do so for reasons of their own, while not being restricted to certain pre-defined regions. Many researchers do not work in a clinical or human context and/or work with input materials whose properties are not well characterized (such as nucleic acids from plasma or other non-standard sources) and thus do not know the loci that are relevant to their respective contexts. For example, one could explore and identify relevant loci with CATS and then perform TAM-Seq on the identified regions.

    Another obstacle to using amplicon-based techniques is that they require fragments large enough for both PCR primer sites to be present, usually well over 100 bp. We find that the majority of circulating DNA fragments are < 100 bp, where amplicon-based technologies perform poorly or not at all.



  • HeidelbergScience
    replied
    Originally posted by Simone78:
    10 - Regarding the circulating DNA in the “Discussion” section. It is stated that the Thruplex kit (Rubicon Gen) is not capable of generating libraries from lower quantities of DNA. In principle this is correct, but you forgot to mention that the Picoplex (same company) allows the sequencing of even single CTCs.
    Almost any library generation method, including common adaptor-ligation kits, would allow sequencing of single-cell DNA, provided the whole-genome enrichment step is done successfully and in a complexity-conserving manner. As pointed out above, we developed CATS as a way to construct NGS-platform-specific libraries that requires the smallest possible amount of input material at the start of the actual library construction, as compared to other methods. To our knowledge, the Thruplex technology does not allow DNA-seq from a single cell without whole-genome pre-amplification. Also, the new (launched in Feb 2014) PicoPLEX™ single-cell DNA-seq library preparation kit for Illumina contains a whole-genome pre-amplification step.



  • HeidelbergScience
    replied
    Originally posted by Simone78:
    9 - You also state (claim 2 at the end of page 824) that there are no reports of “strand-specific mRNA transcriptome from 1 ng of polyA enriched RNA”, which is obviously inaccurate. Clontech has a protocol for FFPE samples where it couples Ribozero to a stranded SMARTer protocol and starts from as little as 10 ng of degraded TOTAL RNA, with no column purifications, fragmentation or other preparation steps needed. And 10 ng of total RNA is in the same order of magnitude as the 1 ng of polyA RNA you use in the paper.
    Our statement is correct, because Ribozero-treated RNA and polyA-enriched RNA are not the same. The paper underlying the stranded RNA-seq kit from Clontech was cited in our paper as well [Langevin et al., RNA Biol 2013].



  • HeidelbergScience
    replied
    Originally posted by Simone78:
    8 - in the discussion on page 825, in the same sentence there are several inaccuracies (a record!). It says that “(Nextera) requires at least 50 ng of DNA and, apparently, is restricted to the long DNA molecules. The full capacity of the tagmentation technique for DNA library prep is yet to be tested and compared with other methods”. First: the DNA Nextera XT kit is specifically designed to start from 1 ng input DNA (the old Nextera kit used 50 ng). We have also shown in Picelli et al., Gen Res 2014 that you need as little as 0.1 pg DNA, but you should have just read Adey et al. 2010. Second: the tagmentation is NOT restricted to long molecules. In fact Adey and Shendure (the original Genome Biol 2010 paper where they describe the method) say that molecules as short as ca. 35 bp can be tagmented. Third, in the same paper they also compare the method to other standard fragmentation methods, showing that the Tn5 has just a weak preference for cutting the DNA at specific sites. Additionally there are also other papers on bisulfite-converted DNA prepped with the Tn5 (Adey et al., Genome Res 2012)…and even one from your institution (!!!), Wang et al. (Nat Prot 2013). So the Tn5-based approach is a viable option for exactly everything you claim it not to be good for.
    At the time of the preparation of our manuscript we did not find scientific reports describing the application of the DNA Nextera XT kit. Isn’t it quite new? The manual of the DNA Nextera kit stated that the DNA amount should be no less than 50 ng and the length >2000 bp. So we suppose that the statement in the paper was correct, albeit already outdated.

    Although we cited only one paper on Tn5, our statement that “The full capacity of the tagmentation technique for DNA library prep is yet to be tested and compared with other methods” is correct, because none of the papers described the application of Tn5 for DNA-seq of low amounts of fragmented circulating DNA.

    Importantly, Tn5-based methods are apparently restricted to dsDNA of >300 bp (inferred from the Nextera XT manual) and might not be efficient for very fragmented (<150 bp) circulating DNA. Again, we are not claiming that it is impossible; we are just saying that “it has to be tested and compared”. Even if tagmentation can occur on short (<150 bp) dsDNA, the complexity of such libraries has yet to be demonstrated.

    Finally, the library preparation workflow of the published Tn5 methods for bisulfite DNA-seq is significantly more labor- and (at the moment) cost-intensive compared to CATS. However, we certainly agree that tagmentation is a very elegant and promising method, especially as compared to adaptor ligation.
    Last edited by HeidelbergScience; 01-06-2015, 08:03 AM.



  • HeidelbergScience
    replied
    Originally posted by Simone78:
    7 - You always compared your method to others (for single-cell seq) present on the market, repeatedly saying that they are expensive, that they rely on an inefficient adaptor ligation step and so on (page 824). There are several inaccuracies in this statement. First, you can't compare your method to those because yours is not designed for single cells. Second, it's absolutely not true that all the methods rely on the (classic, ligase-based) adaptor ligation. Nextera (Illumina) and our recent method (Picelli et al., Genome Res 2014) don't, and they are efficient with ng (Nextera) or sub-pg (ours, but also Adey et al., 2010) amounts of input DNA. And in the “Discussion” at the end of page 825 you refer to Ramsköld et al. (Nat Biotech 2012, ref #30), which is NOT based on any ligase but on the first Nextera kit from Illumina! Besides, the cost of Smart-seq2 + home-made Tn5 is 10-15 euros, comparable to the cost of your library prep.
    This comment encompasses many different aspects:

    While it is true that CATS has not yet been specifically applied to single-cell analysis, we can definitely compare CATS with the methods for single-cell RNA-seq.

    Thus, any NGS library prep method can be subdivided into (1) an RNA/DNA sample pre-amplification/enrichment step (e.g. SMART enriches for mRNA, random priming can amplify RNA and DNA at the whole-genome level, etc.) and (2) construction of the NGS-platform-specific library. The pre-amplification/enrichment step (#1) may not be necessary if you have enough input material (also, pre-amplification of very short or degraded RNA and DNA, such as from blood plasma, is not possible). There are 3 general ways to perform step (#2), which all have in common that adapters of known sequence are attached to random fragments of unknown sequence:

    (a) Adaptor ligation,
    (b) Tagmentation (Tn5),
    (c) CATS.

    So, in principle, one can substitute tagmentation with fragmentation + CATS (in fact, this could increase the complexity of the final library and be cheaper/faster). Therefore, CATS is also a suitable step (#2) for RNA/DNA-seq from single cells, provided that whole-genome pre-amplification of the low RNA/DNA input is successfully done by other means (e.g. random priming, SMARTer, etc.).

    As stated, our goal was a protocol that requires the smallest possible amount of input material at the start of the actual library construction. Since a single cell contains approx. 10-30 pg of total RNA, CATS can be applied directly after fragmentation without preliminary whole-transcriptome pre-amplification. Indeed, we have shown in the paper that 5 pg of 22 nt RNA gives a clean library.

    About pricing: commercial kits are expensive mainly because they include compensation for significant R&D expenses and require a certain ROI. Therefore, almost any home-made kit will cost drastically less. However, to obtain Tn5 in-house, one needs to produce and purify it from mammalian cells, if we understood correctly? While this might be possible at some institutions, CATS would be a simpler and cheaper option where it is not. Or is there already a cheap commercial provider of Tn5?

    About the recently emerged tagmentation technique: back at the end of 2013, when we were preparing the manuscript, the Tn5 method was only available as part of the single Nextera kit on the market. Albeit definitely a very elegant and promising technique, to our knowledge it is still not widely applied. Therefore, our statement that the widely used methods for construction of NGS-platform-specific libraries are based on adaptor ligation was correct.

    Finally, [Ramsköld et al. // Nat Biotech 2012] describes the utilization of the Clontech SMARTer kit with an adaptor-ligation step besides the utilization of the Nextera tagmentation step: “We used the amplified cDNA to construct standard Illumina sequencing libraries using either Covaris shearing followed by ligation of adaptors (PE) or Tn5-mediated 'tagmentation' using the Nextera technology (Tn5)”. So it was cited correctly.



  • HeidelbergScience
    replied
    Originally posted by Simone78:
    6 - Even though this method claims that only picograms of material are needed, you need hundreds of ng/a few µg of total RNA to start with, due to losses in extraction, column purification, fragmentation, purification again… And then it can sequence all the RNA species… yes, but if I simply did a Ribozero treatment (or a “home-brewed” version of it; there are some around) on as little as 10 ng + SMARTer (or SMART-seq2), I would achieve the same result with the same or less effort!
    One can use 10 ng of total RNA input, enrich it for mRNA via poly(dT) magnetic beads and fragment with Mg2+, yielding approx. 100-300 pg of 20-100 bp RNA in a 40 µl eluate (mRNA typically makes up only a few percent of total RNA, hence the ~100-fold drop). The important thing here is to add 10 µg of glycogen as a co-precipitant during the clean-up step (with the miRNeasy kit). In our hands, the whole procedure of sample preparation (before polyA-tailing) takes about 1 hour. Subsequently, one could set up a poly(A) reaction in 50 µl and then concentrate the whole 50 µl (e.g. via Zymo columns or EtOH precipitation). Even 10 pg of fragmented RNA is already enough for CATS (the protocol posted on this forum) to generate high-complexity libraries for mRNA-seq. There are also many other options to fragment RNA (including thermosensitive RNases) without the need to subsequently purify and concentrate the sample.

    However, if we speak about mRNA-seq from an ultra-low number of cells (1-100), then SMART-seq is probably the most convenient way (and Ribozero would not even be necessary). However, most researchers work with much higher numbers of cells (e.g. grown in 96- to 24-well plates), from which obtaining 100-1000 ng of RNA is not a problem. After a ~30 min poly(A)-enrichment procedure, one can get 3-30 ng of mRNA in a 40 µl eluate and run CATS without even needing to concentrate the polyadenylated product before RT. Also, unlike SMART-seq, CATS (1) gives strand-specific information about mRNA and (2) has even coverage along all mRNAs. By contrast:

    (1) SMART-based mRNA-seq is not strand specific.

    (2) with SMART-seq, the 5’-proximal and 3’-proximal parts of mRNAs are likely to be significantly underrepresented due to inevitable premature template switching and tagmentation bias. Please correct us if we are wrong.

    (3) SMART-seq is limited to mRNA sequencing only, while CATS allows any RNA-seq, including small (20-200 nt) RNA such as RIP samples, miRNA, piRNA etc., and also any DNA-seq. So it is a universal protocol.

    (4) SMART-seq would actually require more effort, because “RT and template switch” is used there only to generate and pre-amplify long cDNAs from mRNA. The library preparation itself occurs afterwards via fragmentation/adaptor ligation or tagmentation, plus further pre-amplification and purification. It is much easier to do mRNA enrichment (30 min), fragmentation/clean-up (20 min) and CATS (4-5 hours total, 20 min hands-on time).

    To summarize, if you have a few cells and require only mRNA-seq, then SMART-seq is probably the best option. However, one can still convert mRNA from a single cell into cDNA using poly(dT) primers and run CATS after genome-wide DNA pre-amplification up to a hundred picograms.
    Last edited by HeidelbergScience; 01-06-2015, 07:57 AM.



  • HeidelbergScience
    replied
    Originally posted by Simone78:
    5 - Throughout the protocol it is repeatedly stated that one should cut the agarose and extract the samples ready for seq. This is a terribly (terribly!) troublesome, time-consuming, inefficient and non-scalable way of doing library prep. If you really want to do size selection, why not use E-gels? Or PippinPrep, if available?
    In fact, the gel purification step is completely optional. In Fig. 2 of our paper, Sanger-seq and Bioanalyzer traces demonstrate that the purity of CATS libraries after column purification and after gel extraction is equal (except for the remaining pre-amp primers, which Qiaquick columns cannot efficiently remove). So one only needs to remove the pre-amp primers with magnetic beads (e.g. AMPure) before NGS and can completely skip the gel extraction step.

    We usually cut our libraries from E-gels, as this is, in our opinion, more convenient than magnetic-bead or column purification and also gives us information about the library peak distribution. It also enables us to skip the (in our opinion inconvenient) Bioanalyzer step and only use Qubit after library purification from the E-gel. Of course, every researcher should decide whether to purify and which purification method best suits their experimental needs or preferences concerning time-efficiency, scalability, etc.

    In fact, CATS is the only method which allows NGS of small (<5 ng) amounts of short RNA (or DNA) without gel purification. All other methods that we are aware of rely on ligation of adaptors and thus inevitably yield high percentages of “empty” libraries and self-ligated adaptors, which cannot be separated from template-containing libraries by magnetic beads.
    Last edited by HeidelbergScience; 01-06-2015, 06:57 AM.



  • HeidelbergScience
    replied
    Originally posted by Simone78:
    4 - Interesting observation about the bias in the template switching efficiency. You speculate that a TSO with a different or modified 3′ terminus could solve the problem. We already tested LNA-based TSOs with “N” in 1, 2 or all 3 of the 3′-terminal bases (again, see Suppl Info). It turned out that 3 LNA-G bases are the best solution. If you don't use LNA bases, then 3 rG are the best. Clontech had already thought about it, obviously!
    The bias occurred only with RNA templates, not DNA. Therefore we cannot be 100% sure whether it derived from a template-switching preference for 5’-rG or from a bias towards the generation of 5’-rG templates after Mg2+ RNA fragmentation. We will address this in our follow-up work and also test whether degenerate bases reduce the bias.

    Also, in the [Picelli et al., 2013] main text we found one solid statement that “exchanging only a single guanylate for a locked nucleic acid (LNA) guanylate at the TSO 3′ end (rGrG+G) led to a twofold increase in cDNA yield relative to that obtained with the SMARTer IIA oligo”. Could you tell us how big the drop in library yield was when you used an N base at the end, as you stated above?



  • HeidelbergScience
    replied
    Originally posted by Simone78:
    3 - the method is not of any use for short circulating RNA given the huge amount of unmappable reads, as reported in the “Results” section.
    Thanks for giving us an opportunity to comment on that. The library prepared from 100 pg of plasma RNA shown in the paper represents the true circulating RNA content of the sample. The relatively high percentage of unmappable reads in this run could be caused by a significant fraction of non-human RNA in plasma and/or by the use of a sub-optimal RNA mapper.

    However, we have extensively addressed the problem in the meantime, and currently our plasma RNA-seq runs yield much better mapping statistics against the human genome/transcriptome (e.g. >80% of reads can be unambiguously mapped to the human genome using an alternative RNA mapping database), indicating that the plasma RNA runs described in the manuscript were not yet an accurate representation of the full capacity of the technique.
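
    As an aside, for readers who want to compute such mapping percentages themselves, below is a minimal Python sketch using pysam. The file name, the helper name and the MAPQ >= 10 cutoff used to call a read "unambiguously" mapped are our illustrative assumptions, not part of the CATS workflow.

    import pysam

    def mapping_stats(bam_path, min_mapq=10):
        """Count total, mapped and unambiguously mapped reads in a BAM file."""
        total = mapped = unambiguous = 0
        with pysam.AlignmentFile(bam_path, "rb") as bam:
            for read in bam.fetch(until_eof=True):
                if read.is_secondary or read.is_supplementary:
                    continue  # count each read only once
                total += 1
                if not read.is_unmapped:
                    mapped += 1
                    if read.mapping_quality >= min_mapq:  # illustrative cutoff
                        unambiguous += 1
        return total, mapped, unambiguous

    total, mapped, unique = mapping_stats("plasma_rna.bam")  # hypothetical file
    print(f"mapped: {100 * mapped / total:.1f}%, "
          f"unambiguous (MAPQ >= 10): {100 * unique / total:.1f}%")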

    The CATS protocol itself does not create any “visible” amounts of irrelevant libraries/by-products. Thus, libraries prepared from only 5 pg of synthetic cel-miR-39 consisted only of fragments carrying cel-miR-39, as was evident from the “clean” Sanger-seq chromatograms (Fig. 2 in the paper). Moreover, there was no signal in negative controls prepared without adding nucleic acid templates.



  • HeidelbergScience
    replied
    Originally posted by Simone78:
    1 - you mention that Superscript II, SMARTscribe and SMART RT are the only ones giving detectable amounts of cDNA. Well, Superscript II and SMARTscribe are probably the same thing as we already showed in the Smart-seq2 paper (Picelli et al., Nat Methods 2013, Suppl Info).
    2 - When using Superscript III, 4 more pre-ampl cycles were needed. Also this is well documented in the literature (and also described in the Smart-seq2 paper). Mutations in Superscript II and III are different and the III has a negligible strand-switch activity. Nothing new here.
    The goal of this manuscript and the described experiments was not to claim “a new discovery” of the different template switching (TS) capacities of MMLV RT mutants. The experiments were important to demonstrate which commercial MMLV RTs can be used for TS in the CATS protocol. We tested the 6 most widely used commercial RT enzymes, but it was neither the goal nor the topic of our publication to research all of their properties extensively. We also cannot agree that the phenomenon of different TS capacities is well documented in the literature. For most commercial RT enzymes this information is simply not available.

    Secondly, in the main text and supplementary material of [Picelli et al., Nat Methods 2013] we did not find any written comments on the different TS capacities of various RT enzymes. Although it is indeed possible to infer from the supplementary table that both SmartScribe and SSRTII secure a much higher yield of the final DNA library compared to SSRTIII, there is no information about the other three enzymes tested in our paper.



  • HeidelbergScience
    replied
    Originally posted by Simone78:
    Even though there are some original and interesting ideas, I want to make several comments on things that are either not convincing or plainly wrong. Sorry, there is no order to them; I just wrote them down while reading the paper.
    Thank you for investing your time in commenting on our article. We appreciate your effort and think it only fair to answer your comments here. We apologize for not responding earlier: your comment was posted during the Christmas holidays, and we were on vacation until today.
    Please find below a point-by-point response. We hope this information will help readers to better appreciate the important advantages and fundamental differences between current NGS library prep methods.



  • HeidelbergScience
    replied
    Originally posted by Asaf:
    Some insights from our test NextSeq run.
    We took some RNA (8 RNA-seq libraries) and prepared the libraries after RiboZero treatment, fragmentation and size selection. The process was indeed shorter (1.5 days) and required less RNA.
    The NextSeq results were disappointing: the multiplexing barcodes (i7) weren't read well for some reason; most of the barcodes were just poly-A. In addition, most of the reads contained poly-A which at some point in the read changed to poly-G. I think these issues are related to the fact that NextSeq uses a 2-color system: A is both colors and G is neither, and using some sort of normalization the software has to determine which signal is too low to be an A and call it a G. Another issue that might arise is the bias towards G in the first base of the read; this means that NextSeq will have difficulty determining clusters (again: no color, and variability), which might lead to a small number of clusters (we got 100M reads instead of 400M; maybe it's this, but maybe an error in quantification).
    We are planning on doing another round; this time we will have the barcodes on the adapter before the GGG, we will use barcodes of different lengths so there won't be a G-only read across the entire flowcell, and we will use standard Illumina primers.
    Any more suggestions? Has anyone else tried this method with a NextSeq?
    Many thanks for your feedback.
    Albeit we have never run a HiSeq after CATS (only MiSeqs), we have observed a similar problem with the P7 barcode reads. So far we have made only 2 MiSeq runs using P7 barcodes (one with 8 and one with 4 different barcodes). The run with 8 had the barcodes messed up; however, the run with 4 worked well. Also, we noticed that the quality of the index read was much lower compared to Read1. We first thought this was a problem with our primers or the MiSeq, but after your post we suspect that the proximity of the 30xdA tail to the P7 index might have caused it. However, we cannot find any feasible explanation of how the polyA tail could interfere. The best option would be to introduce the barcodes into the TSO oligo before the GGG and to make them of different lengths. However, one could also try shorter 20xdA tails in the reverse primer, or "dilute" the tail with dG and dC nucleotides between each 10xdT (so the RT primer would be XXXXXttttttttttgttttttttttcttttttttttV). We will test those strategies and update the protocol.
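
    As a side note on the staggered-barcode idea: since on a two-color instrument G is read as "no signal", one can sanity-check a candidate index set in advance for cycles where every barcode would read G (or where all barcodes share one base). A minimal Python sketch; the helper and the example barcodes are made up for illustration and are not part of the CATS protocol:

    def check_barcode_cycles(barcodes):
        """Flag index cycles that are all-G (dark on two-color chemistry)
        or that lack base diversity across a staggered-length barcode set."""
        for cycle in range(max(len(b) for b in barcodes)):
            # Shorter barcodes contribute no base at later cycles.
            bases = {b[cycle] for b in barcodes if cycle < len(b)}
            if bases == {"G"}:
                print(f"cycle {cycle + 1}: all G -- dark cycle, avoid")
            elif len(bases) == 1:
                print(f"cycle {cycle + 1}: only {bases.pop()} -- low diversity")

    # Made-up example set with staggered lengths:
    check_barcode_cycles(["ATCACG", "CGATG", "TTAGGCA", "TGAC"])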

    The fact that most of the reads contain poly(A) is normal, as long as they are not “empty”. You need to use a trimming algorithm to remove the polyA before mapping, of course. The only way to avoid the poly(A) tails after each read is either to use longer RNA templates or a shorter Read1 length. We never cared about it, since the presence of polyA does not cause any trouble. We did not observe A changing to G, though (at least in the majority of the reads), but again, we have never run a HiSeq.
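
    For illustration, a minimal poly(A)-trimming sketch in Python; in practice a dedicated trimmer (e.g. cutadapt) would be used, and the minimum run length of 6 As is an assumption for the example:

    import re

    # Clip a 3'-terminal poly(A) run before mapping. On two-color
    # instruments the tail may partly read as G (see the post above),
    # which a real trimmer would also need to tolerate.
    POLYA_TAIL = re.compile(r"A{6,}$")

    def trim_polya(seq):
        """Return the read sequence with any trailing poly(A) run removed."""
        return POLYA_TAIL.sub("", seq)

    print(trim_polya("TCACCGGGTGTAAATCAGCTTGAAAAAAAAAAAAAA"))
    # -> TCACCGGGTGTAAATCAGCTTG (the insert, with the tail removed)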

    Regarding clusters: despite the fact that in RNA runs 80% of the reads start with G (due to the bias), we see only a slight drop in the number of clusters passing the filter compared to DNA runs (which do not have the G bias). For example, with DNA runs we usually see 90% of clusters passing the filter, while for Mg2+-fragmented RNA it is about 80%. It might be that on a HiSeq the G bias creates a bigger problem. Actually, 400M reads is a “theoretical” maximum which you can achieve on Hi200, and that is for paired-end reads (for single reads it is 200M, respectively). I guess you used single reads, right? So 100M is definitely OK, if you did not have too many clusters. Could you specify what the cluster density was and how many clusters passed the filter? This info would help to determine whether the G bias affected the quality of Read1.

    Also, you mentioned that the process took 1.5 days. Did that include the initial preparation of all solutions and primers, the AMPure/gel isolation and the final QC? What was the hands-on time of the library prep? Also, how much DNA did you use?
    Again, many thanks for the feedback; it was very helpful.
    Last edited by HeidelbergScience; 01-06-2015, 06:38 AM.

