Greetings,
For the past year, I've been involved in a project studying the effects that Artificial Reproduction Techniques (ART) have on bovine embryos. Amongst the many experiments that have been carried out was the 454 sequencing of a library of embryonic cDNA. The goal of this sequencing experiment was to identify novel embryo-specific transcripts, so the library was normalized before being sent for sequencing; this makes it impossible to evaluate expression levels, but increases our coverage of rare transcripts.
To characterize our sequencing data, I built a custom pipeline, using (Nagaraj, SH., 2009) as an inspiration. The pipeline is as follows (a command-level sketch is given after the list):
- Proprietary read cleaning by an external consultant (removal of vectors, adapters, low-quality reads, etc.)
- Masking using RepeatMasker
- Assembly using CAP3
- Mapping of the contigs and singletons to the reference genome using BLAT
- Initial analysis of the alignment using Perl scripts
- Additional characterization using BLASTP and hmmscan on long ORFs.
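For concreteness, here is a minimal sketch of how these steps could be chained from a driver script, in the spirit of the Perl scripts I already use for the alignment analysis. All file names, the genome path, and the species setting are placeholders (the cleaned reads are assumed to already exist as cleaned_reads.fa), and the tools are shown with default parameters rather than the exact options we ran with:

```
#!/usr/bin/env perl
# Hypothetical driver for the pipeline above; paths and file names are placeholders.
use strict;
use warnings;

my $reads  = 'cleaned_reads.fa';    # assumed output of the consultant's cleaning step
my $genome = 'bosTau_genome.fa';    # bovine reference genome (placeholder path)

# 1. Mask repetitive elements; RepeatMasker writes cleaned_reads.fa.masked
system("RepeatMasker -species 'bos taurus' $reads") == 0
    or die "RepeatMasker failed: $?";

# 2. Assemble the masked reads; CAP3 writes *.cap.contigs and *.cap.singlets
system("cap3 $reads.masked") == 0
    or die "CAP3 failed: $?";

# 3. Pool contigs and singletons, then map them to the genome with BLAT (PSL output)
system("cat $reads.masked.cap.contigs $reads.masked.cap.singlets > assembly.fa") == 0
    or die "pooling failed: $?";
system("blat $genome assembly.fa assembly_vs_genome.psl") == 0
    or die "BLAT failed: $?";

# 4. Custom Perl parsing of the PSL alignments goes here, followed by BLASTP and
#    hmmscan on long ORFs extracted from the contigs, e.g.:
#    blastp -query long_orfs.faa -db nr -outfmt 6 -out orfs_vs_nr.tsv
#    hmmscan --tblout orfs_pfam.tbl Pfam-A.hmm long_orfs.faa
```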
However, as I've read more and more of the literature on the subject, I have started asking myself two questions:
- Is it relevant/good practice to mask repetitive elements out of reads prior to assembly? I was under the impression that this would improve the quality of the assembly by preventing erroneous joining of reads that come from different transcripts but share the same repeated elements. However, this does not seem to be standard practice.
- When a reference genome is available, what are the advantages and disadvantages of mapping all reads to the genome rather than doing a de novo assembly of the transcriptome? Assembling first seemed the natural procedure to me, since full-length contigs are much easier to align unambiguously to the genome than single reads, which may contain errors or be chimeric in nature (the alternative route is sketched below).
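To make the second question concrete, here is how the alternative route would look, using the same placeholder file names as in the sketch above; the assemble-then-map route is the one already shown there, and BLAT is shown with default parameters only, without asserting what settings would actually suit short, error-prone 454 reads:

```
#!/usr/bin/env perl
# Hypothetical sketch of the alternative route; file names are placeholders.
use strict;
use warnings;

my $genome = 'bosTau_genome.fa';   # bovine reference genome (placeholder path)

# Alternative route: skip assembly and map every cleaned read directly to the
# genome, then reconstruct transcript structures from the per-read alignments.
system("blat $genome cleaned_reads.fa reads_vs_genome.psl") == 0
    or die "BLAT failed: $?";
```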
I would greatly appreciate any insights regarding these two questions, or anything related to my analysis.
Thank you,
-Eric Fournier