  • jparsons
    replied
    Originally posted by lpachter View Post
    It's important to note the limitations of raw-count methods, but has anyone checked whether any of the isoform-detection algorithms can actually discriminate between isoforms well enough to assign those counts properly? I've seen simulated data showing RSEM incapable of reproducing the 'truth' half the time, even with simple two-isoform mixes.

    Cufflinks' model in figure 2 has three times more counts than figure 1 and doesn't differentiate anywhere near as cleanly between isoforms. Surely maximum-likelihood count assignment can be incorrect, too, given ambiguous reads? Looking at the supplementals, however, I'm inclined to accept that it may be incorrect less often than raw counts when dealing with real data.



  • lpachter
    replied
    Originally posted by Simon Anders View Post

    If I don't care about isoforms or think that my coverage is too low to distinguish isoforms anyway, I expect to get optimal power by simply summing everything up.
    Please see Figure 1 of http://www.nature.com/nbt/journal/va...0.html#/figure

    Originally posted by Simon Anders View Post

    Cuffdiff is, as I understand it, designed to deal with such issues, while our approach ignores them. I expect that DESeq, in compensation for being unsuitable to detect differences in isoform proportions as in your example, achieves much better detection power for differences in total expression (per gene, summing over isoforms), especially at very low counts.
    Please see Figure 3 of


    Originally posted by Simon Anders View Post

    As I am not clear on how biological noise is taken into account by cuffdiff I cannot be fully sure whether this expectation will hold (and I'm quite curious to learn more about cuffdiff once your paper is out).
    Please see



  • carmeyeii
    replied
    Originally posted by lpachter View Post
    That's correct - the procedure RCJ suggests will give you an estimate of the actual tag count for each transcript.

    Is this to say that if one sums up (FPKM * length in kb * reads mapped in millions) for each transcript in a gene, one would obtain the total *estimated* read count for that gene?

    But this has to be done individually for each transcript and then grouped into a gene, right?


    Carmen
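
    For illustration, a minimal Python sketch of the arithmetic being discussed; the FPKM values, transcript lengths and library size below are made up, not taken from any real dataset:

    Code:
    # estimated transcript count = FPKM * length (kb) * mapped reads (millions),
    # then sum the transcript-level estimates to get a gene-level estimate
    mapped_reads_millions = 20.0  # hypothetical library size: 20 million mapped reads

    # per-transcript (FPKM, length in kb) for one hypothetical gene with two isoforms
    transcripts = {
        "isoform_1": {"fpkm": 12.5, "length_kb": 2.1},
        "isoform_2": {"fpkm": 3.0, "length_kb": 1.4},
    }

    def estimated_count(fpkm, length_kb, lib_size_millions):
        """FPKM * length (kb) * mapped reads (millions) -> estimated read count."""
        return fpkm * length_kb * lib_size_millions

    gene_count = sum(
        estimated_count(t["fpkm"], t["length_kb"], mapped_reads_millions)
        for t in transcripts.values()
    )
    print(round(gene_count))  # estimated total read count for the gene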



  • dpryan
    replied
    It's been a while since I've done it, but if you google "cluster optimal group number" you can find methods such as the gap statistic and related approaches for choosing an optimal cluster number. I recall there being R packages for much of this, such as the cluster package.
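
    As an illustration of the idea (choosing a cluster number by an internal validity score), here is a minimal Python/scikit-learn sketch using silhouette scores; this is not the R cluster package mentioned above, and the data below are random placeholders:

    Code:
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 10))  # placeholder matrix: 300 genes x 10 samples

    # fit k-means for a range of k and keep the average silhouette score for each
    scores = {}
    for k in range(2, 11):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = silhouette_score(X, labels)

    best_k = max(scores, key=scores.get)  # k with the highest silhouette score
    print(best_k, scores[best_k])

    The gap statistic works in the same spirit but compares the clustering against a null reference distribution; the R cluster package's clusGap() is one implementation.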



  • sikidiri
    replied
    Hello Steven,
    Thanks for your answer. My main problem, though, is how to decide on the thresholds for these expression-based categories. Do you think any statistical tests would help? Any paper or example would help me understand this better.
    Thanks again.



  • steven
    replied
    Originally posted by sikidiri View Post
    Hello All,
    I have pre-processed mRNA-seq data for the hg19 genome, in which an RPKM value has been calculated for each gene. The values range from 0 to 99960. I have just one sample.
    I want to categorise these genes into highly, moderately, and weakly expressed genes.
    What would be the best way to do it?
    Your suggestions would be highly appreciated.
    Thanks a lot.
    As RPKM is a normalized expression measure, you can in theory directly compare values between genes within the same sample, keeping in mind a couple of reported biases such as gene length, GC content, etc.

    I would first sort the values and use percentiles ("tiers") to define categories of similar size, and then inspect the resulting threshold values (a small sketch follows below).
    You may also want to consider absolute thresholds (like "RPKM < 1", "1 < RPKM < 10" and "RPKM > 10"), but I do not know whether there are "standards" for such values, and I actually doubt that it is reasonable in practice to reuse values obtained from different protocols/conditions/software.
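
    A minimal Python sketch of the percentile ("tiers") approach, using a made-up vector of RPKM values; excluding zeros from the percentile calculation is one possible choice here, not a standard:

    Code:
    import numpy as np

    rpkm = np.array([0.0, 0.3, 1.2, 5.7, 22.0, 180.0, 950.0])  # made-up values

    # tertile thresholds computed over the expressed genes only (RPKM > 0),
    # so a large number of unexpressed genes does not dominate the cut-offs
    expressed = rpkm[rpkm > 0]
    low_cut, high_cut = np.percentile(expressed, [33.3, 66.7])

    def categorize(value):
        if value <= 0:
            return "not expressed"
        if value < low_cut:
            return "weak"
        if value < high_cut:
            return "medium"
        return "high"

    print(low_cut, high_cut)
    print([categorize(v) for v in rpkm])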



  • sikidiri
    replied
    Method to categorize mRNA-seq data based upon expression value

    Hello All,
    I have pre-processed mRNA-seq data for the hg19 genome, in which an RPKM value has been calculated for each gene. The values range from 0 to 99960. I have just one sample.
    I want to categorise these genes into highly, moderately, and weakly expressed genes.
    What would be the best way to do it?
    Your suggestions would be highly appreciated.
    Thanks a lot.



  • marcora
    replied
    Originally posted by syambmed View Post
    Hi guys,
    I have transcriptome data from Illumina and am using CLC Genomics Workbench for data analysis. I am not familiar with other programs for transcriptome analysis. The data are from one sample of control cells and one sample of treated cells (no replicates for either sample), and I am looking for differentially expressed genes.
    If I were a reviewer, I would doubt any conclusion coming from an experiment with no biological replicates. Anyhow, DESeq allows for such a design, so you may want to consider it. I am not familiar with CLC Genomics Workbench.



  • syambmed
    replied
    Hi guys,

    I have transcriptome data from Illumina and am using CLC Genomics Workbench for data analysis. I am not familiar with other programs for transcriptome analysis. The data are from one sample of control cells and one sample of treated cells (no replicates for either sample), and I am looking for differentially expressed genes.

    The problem is the normalization step. There are three normalization methods offered by the software: 1) scaling [options for normalization value = mean or median, baseline = median mean or median median], 2) quantile, and 3) total reads per 1 million.

    I don't know which one to choose. Help me.

    Then there are statistical tests on Gaussian data or on proportions. How do I know which test my data is suited for? I read that most people use Baggerley's test.

    One thing about Baggerley's test is that the output has both a p-value and a false discovery rate (FDR) corrected p-value. Which one is used for determining differentially expressed genes?

    Thank you.
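
    For what it's worth, a minimal Python sketch of what quantile normalization (option 2) does, on a made-up count matrix; this only illustrates the general method, not the CLC Genomics Workbench implementation, and it ignores ties:

    Code:
    import numpy as np

    def quantile_normalize(mat):
        """Quantile-normalize a genes x samples matrix (ties handled naively)."""
        order = np.argsort(mat, axis=0)        # per-sample ordering of the genes
        sorted_vals = np.sort(mat, axis=0)     # values sorted within each sample
        rank_means = sorted_vals.mean(axis=1)  # mean value at each rank across samples
        out = np.empty_like(mat, dtype=float)
        for j in range(mat.shape[1]):
            out[order[:, j], j] = rank_means   # write the rank means back by rank
        return out

    # made-up count matrix: 5 genes x 3 samples
    counts = np.array([
        [5, 4, 3],
        [2, 1, 4],
        [3, 4, 6],
        [4, 2, 8],
        [1, 3, 1],
    ], dtype=float)

    print(quantile_normalize(counts))

    After normalization, every sample has exactly the same distribution of values, which is both the point and the main caveat of the quantile method.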



  • zee
    replied
    I would go with uniquely mapped reads because it's a more accurate representation of how much sequence data you obtained from your runs.
    You could get a bit more stringent by using Picard to filter out possible PCR duplicates from the alignments in BAM format.



  • hypatia
    replied
    Normalization with all or uniquely mapped reads

    Hi Zee
    I was wondering if you got an answer to this question. Is it (3)?
    Should I eliminate unaligned or ambiguously mapped reads from the normalization?


    "I've read about people doing counts as reads per million and log transforming these values to fit Poisson distribution, but it's sprung multiple ideas in my mind. Would this be as simple as dividing my counts for each experiment by
    1) 1 Million
    2) the total number of reads sequenced
    3) the total number of uniquely mapped reads

    I'm inclined to option (3) because that represents the amount of usable sequence data."
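
    A minimal Python sketch of option (3), reads per million using the total number of uniquely mapped reads as the denominator, with made-up counts and library size:

    Code:
    import math

    gene_counts = {"geneA": 1500, "geneB": 40, "geneC": 0}  # made-up raw counts
    uniquely_mapped_reads = 18_250_000                      # made-up library size

    per_million = uniquely_mapped_reads / 1e6
    rpm = {g: c / per_million for g, c in gene_counts.items()}
    log_rpm = {g: math.log2(v + 1) for g, v in rpm.items()}  # +1 so zeros stay finite

    print(rpm)
    print(log_rpm)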



  • answersseq
    replied
    I do learn a lot from the discussions here.
    Any opinions on microRNA sequencing data? Their lengths are similar, but many reads can map to multiple locations (or to multiple mature miRs).
    When comparing differential expression between cell lines or tissues, I guess we would expect big differences, as well as no housekeeping miRs.


    Originally posted by Simon Anders View Post
    In case this got lost in my lengthy post #12:

    The reason why raw counts are advantageous to FPKM values for statistical inference is discussed in this thread, from post #6 onwards: http://seqanswers.com/forums/showthread.php?t=4349



  • cek
    replied
    In order to use DESeq to test for differential expression between my RNA-seq conditions, I calculate raw read counts per transcript from Cufflinks output with the following formula (as proposed by RockChalkJayhawk):
    raw1 = FPKM * length (kb) * number of mapped reads (millions)

    However, on another seqanswers post (http://seqanswers.com/forums/showthr...links+coverage), Cole Trapnell suggests calculating raw read counts like this:
    raw2 = coverage * length (from the transcripts.expr file)

    These two calculations do not lead to the same result. Has anyone noticed the same difference in their data?
    Last edited by cek; 05-12-2010, 02:18 AM.
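
    A minimal Python sketch comparing the two formulas on made-up numbers; whether they agree in practice depends on exactly how the coverage and length columns are defined (per-base vs. per-read coverage, effective vs. annotated length), which is the ambiguity in question:

    Code:
    fpkm = 8.2                    # made-up FPKM for one transcript
    length_bp = 2400              # transcript length in bp
    coverage = 19.7               # made-up coverage value
    mapped_reads_millions = 25.0  # library size in millions of mapped reads

    raw1 = fpkm * (length_bp / 1000.0) * mapped_reads_millions  # FPKM-based estimate
    raw2 = coverage * length_bp                                  # coverage * length

    print(raw1, raw2)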



  • Siva
    replied
    Originally posted by chrisbala View Post
    yep, seems that strand needs to be +/- for HT-Seq, but Cufflinks produces some transcripts without strand info (which seems reasonable?)
    Hi Chris
    Yes, if you use a GTF file from Cufflinks, you should set --stranded=no in htseq-count. Cufflinks does not provide strand information in all cases.

    thanks
    Siva



  • chrisbala
    replied
    yep, seems that strand needs to be +/- for HT-Seq, but Cufflinks produces some transcripts without strand info (which seems reasonable?)

