  • Simon Anders
    replied
    Originally posted by zhengz View Post
    In cases of just a few replicates, there is certainly no way to check for zero excess. But why use the NB then (if that is what you meant)? Isn't there a simpler way?
    Not really. A lot of people assume a Poisson distribution (implicitly, by using Fisher's exact test), and then, for genes with high count rates, everything looks significant. You simply need to assess the biological variance (and with few replicates, you can only do so by pooling information over genes) and add it to the shot noise from the Poisson. The NB is certainly the simplest way to do that.

    Simon

    Leave a comment:
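
    A minimal numeric sketch of the point above (not code from the post): for a highly expressed gene, pure Poisson shot noise is tiny, so even modest fold changes look significant, whereas adding an assumed biological coefficient of variation through the NB variance function var = mu + alpha*mu^2 gives a much wider, more realistic spread. The mean and dispersion below are invented for illustration (Python, assuming NumPy):

    import numpy as np

    mu = 10_000      # mean count for a strongly expressed gene (hypothetical)
    alpha = 0.04     # assumed NB dispersion, i.e. a 20% biological CV squared

    poisson_sd = np.sqrt(mu)             # shot noise only
    nb_sd = np.sqrt(mu + alpha * mu**2)  # shot noise plus biological variance

    print(f"Poisson SD: {poisson_sd:.0f} ({poisson_sd / mu:.1%} of the mean)")
    print(f"NB SD:      {nb_sd:.0f} ({nb_sd / mu:.1%} of the mean)")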


  • malachig
    replied
    Dealing with 0 counts in comparisons across conditions was a very common topic of debate in the good old days of SAGE analysis. Perhaps there are some insights in that literature. Thankfully I didn't have to analyze that data but perhaps a SAGE aficionado will jump in here...

    Leave a comment:


  • zhengz
    replied
    It was microbiota composition data: histograms of each OTU (I suppose this is the same as what you referred to as a gene) for groups of tens of samples. (OK, so not RNA-seq data.) But I think that for people doing 16S rRNA sequencing, an R package with a zero-inflated model would be useful.

    In cases of just a few replicates, there is certainly no way to check for zero excess. But why use the NB then (if that is what you meant)? Isn't there a simpler way?

    Leave a comment:


  • rnaseq
    replied
    Hi Simon Anders,

    Thank you very much for the insight and for clarifying things. I think you've addressed all the follow-up questions I had about the zero-inflated model. My inclination was to report these genes as >300 (0 reads vs. 300 reads). I agree it might be better to indicate the precision as well.

    I appreciate all the feedback.

    Leave a comment:


  • chqilin
    replied
    I am fascinated by the comments posted here. I'd like to thank everybody who helps out when I have questions about NGS later on, because you are the professionals.

    Leave a comment:


  • Simon Anders
    replied
    Originally posted by mrawlins View Post
    This and other methods for pair-wise comparison can be found in Bullard et al. (2010) in BMC Bioinformatics. I only tweaked one of their methods to get ours.
    The problem with this paper is that they don't discuss whether the compared methods control type-I error correctly. Or, to be more precise: the paper assumes that you want to test whether the expression strength in two samples is different, but usually you rather want to test whether the change can be attributed to the difference in experimental treatment. (Ok, I know, I'm repeating myself.)

    Simon

    Leave a comment:


  • mrawlins
    replied
    What we've used for time-course data is a pair-wise comparison of each time point with the control (time 0) after quantile normalization. We've had a lot of success with a Fisher exact test for our pair-wise comparisons (it's faster to do a chi-square approximation of the FET, but it isn't quite as good). For Illumina and SOLiD reads, the hypergeometric distribution that the FET is based on describes the data really well after quantile normalization. Before normalization it doesn't work as well, and more length bias remains. In some samples we've seen a complete elimination of the length bias using this method, FWIW. The method has no problem determining whether a zero-count sample is different from an n-count sample.

    We also find this method needs some multiple hypothesis testing correction. We've had pretty good luck with the Bonferroni (Sidak) correction despite the fact it's really conservative. FDRs are commonly used in microarray data, and would probably work better than Bonferroni, but are a little more involved to calculate.

    This and other methods for pair-wise comparison can be found in Bullard et al. (2010) in BMC Bioinformatics. I only tweaked one of their methods to get ours.

    Leave a comment:
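
    A minimal sketch of the pair-wise test described above, assuming the 2x2 table for each gene is (gene count, rest of library) in the two samples; the toy counts, library sizes, and use of SciPy are assumptions for illustration, not taken from the post:

    from scipy.stats import fisher_exact

    def fet_pvalue(count_a, libsize_a, count_b, libsize_b):
        """Two-sided Fisher exact test for one gene between two samples."""
        table = [[count_a, libsize_a - count_a],
                 [count_b, libsize_b - count_b]]
        _, p = fisher_exact(table, alternative="two-sided")
        return p

    # Toy data: three genes, one time point vs. the time-0 control.
    counts_t = [0, 300, 57]
    counts_0 = [300, 0, 60]
    lib_t, lib_0 = 10_000_000, 12_000_000

    pvals = [fet_pvalue(a, lib_t, b, lib_0) for a, b in zip(counts_t, counts_0)]
    # Bonferroni correction, as mentioned above (conservative but simple).
    adjusted = [min(1.0, p * len(pvals)) for p in pvals]
    print(adjusted)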


  • Simon Anders
    replied
    Hi

    Originally posted by zhengz View Post
    What about zero-inflated count models?

    I once checked the histograms of many variables, and many did not look like they followed a Poisson or negative binomial distribution, as there was an excessive number of zeros.
    What kind of histogram did you look at?

    Just to make sure: if you take all the count values for the different genes in one sample and plot a histogram of these, there is no reason for it to look like an NB, and many zeroes are totally expected.

    What I am talking about is the distribution of count values of one and the same gene in many different samples. This is what is expected to follow an NB. Unfortunately, this claim is made from theoretical reasoning, because no one has yet performed RNA-Seq experiments with hundreds of replicates, and with only a handful of numbers for each gene, you cannot estimate a distribution. Consequently, it would be hard to check whether there is an excess of zeroes, and hence I suggest assuming there is not -- not only to keep things simple but also because it is hard to see a mechanistic cause for a zero excess.

    Simon

    Leave a comment:
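
    A sketch of the check Simon describes as impractical with only a handful of replicates: given many samples of one and the same gene, compare the observed fraction of zeros with the fraction an NB fit would predict. The data are simulated and the method-of-moments fit is an assumption made for illustration (Python, assuming NumPy and SciPy):

    import numpy as np
    from scipy.stats import nbinom

    rng = np.random.default_rng(0)

    # Simulated counts for a single gene across 500 hypothetical replicates.
    mu_true, alpha_true = 5.0, 0.5
    n_true = 1.0 / alpha_true
    p_true = n_true / (n_true + mu_true)
    counts = nbinom.rvs(n_true, p_true, size=500, random_state=rng)

    # Method-of-moments NB fit: var = mu + alpha * mu^2  =>  alpha = (var - mu) / mu^2
    mu_hat = counts.mean()
    alpha_hat = max((counts.var(ddof=1) - mu_hat) / mu_hat**2, 1e-8)
    n_hat = 1.0 / alpha_hat
    p_hat = n_hat / (n_hat + mu_hat)

    observed_zero_frac = np.mean(counts == 0)
    expected_zero_frac = nbinom.pmf(0, n_hat, p_hat)
    print(f"observed zeros: {observed_zero_frac:.2f}, NB-expected: {expected_zero_frac:.2f}")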


  • zhengz
    replied
    It would be nice to see some references addressing when to use a zero-inflated NB and when an ordinary NB.

    UCLA Statistical Computing is a very nice statistics resource (http://www.ats.ucla.edu/stat/stata/dae/zinb.htm).

    "Some Strategies You Might Be Tempted To Try
    Before we show how you can analyze this with a zero-inflated negative binomial analysis, let's consider some other methods that you might use.
    OLS Regression - You could try to analyze these data using OLS regression. However, count data are highly non-normal and are not well estimated by OLS regression.
    Zero-inflated Poisson Regression - Zero-inflated Poisson regression does better when the data is not overdispersed, i.e. variance much larger than the mean.
    Ordinary Count Models - Poisson or negative binomial models might be more appropriate if there are not excess zeros. "


    Hi Johnathon, unfortunately, I have not seen an R package for this. It may be "just fine" to use an ordinary NB for some reviewers. I am definitely looking forward to seeing it integrated into one of the open-source packages. It would be even nicer to have generalized estimating equations (GEE) or robust cluster variance estimation integrated with zero-inflated NB/Poisson models.

    Leave a comment:
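
    Since the thread asks about ready-made zero-inflated NB implementations: in Python, a fit might look like the sketch below, assuming statsmodels' ZeroInflatedNegativeBinomialP (an assumption of tooling, not something mentioned in the posts). The simulated data, the 30% structural-zero rate, and the intercept-only inflation part are all invented for illustration:

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

    rng = np.random.default_rng(1)
    n = 300
    group = rng.integers(0, 2, size=n)            # e.g. control vs. treatment
    mu = np.exp(1.0 + 0.8 * group)                # NB mean per sample
    nb_counts = rng.negative_binomial(2, 2.0 / (2.0 + mu))
    structural_zero = rng.random(n) < 0.3         # assumed 30% excess zeros
    y = np.where(structural_zero, 0, nb_counts)

    X = sm.add_constant(group.astype(float))      # intercept + group effect
    model = ZeroInflatedNegativeBinomialP(y, X, exog_infl=np.ones((n, 1)), p=2)
    result = model.fit(method="bfgs", maxiter=500, disp=False)
    print(result.summary())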


  • Simon Anders
    replied
    You don't need a "zero-inflated count model". The negative binomial (NB) model used, e.g., by edgeR, DESeq, BaySeq, is just fine. "Zero-inflated" means that you see zeroes more often than an NB (or Poisson) distribution would predict. This is an issue in some applications of statistics but I don't think we need it here.

    For the question of reporting it: well, a fold-change estimate of 300:0 is infinite, but if you don't like this, you can still say it is ">300". In the end, what you want is some kind of confidence interval: if you have few counts, an observed count ratio of, say, 5:1 can be caused by a real expression ratio of anything from, say, 2:1 to 10:1, while, if the count ratio was 500:100, you can be much more sure that the estimate of 5:1 is close to the real value. Hence, a fold-change estimate should be supplemented by an indication of its precision (which, in RNA-Seq, would strongly depend on the number of reads the estimate is based on).

    There have been a few methods to get such confidence intervals for microarrays, but, to my knowledge, nothing is available yet for RNA-Seq data.

    Simon

    Leave a comment:
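
    One way to attach the kind of precision statement Simon suggests is an exact (Clopper-Pearson) interval for the ratio of two Poisson rates, obtained by conditioning on the total count. This is an illustrative choice of method, not one named in the thread; it reflects shot noise only, not biological variance, and assumes equal library sizes:

    from scipy.stats import beta

    def poisson_ratio_ci(x1, x2, conf=0.95):
        """CI for rate1/rate2 given counts x1, x2 (equal library sizes assumed)."""
        a = 1.0 - conf
        p_lo = beta.ppf(a / 2, x1, x2 + 1) if x1 > 0 else 0.0
        p_hi = beta.ppf(1 - a / 2, x1 + 1, x2) if x2 > 0 else 1.0
        lower = p_lo / (1 - p_lo)
        upper = p_hi / (1 - p_hi) if p_hi < 1 else float("inf")
        return lower, upper

    print(poisson_ratio_ci(300, 0))    # lower bound around 80-fold, no finite upper bound
    print(poisson_ratio_ci(5, 1))      # a 5:1 count ratio gives a very wide interval
    print(poisson_ratio_ci(500, 100))  # the same 5:1 ratio, much tighter with more reads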


  • jdanderson
    replied
    Hello Zhengz,

    Thank you for your reply. I think part of the problem with my approach to this matter is my general lack of statistical savvy. Could you perhaps provide a good starting point or primer (or a useful open source tool) that could help explain this type of modeling to those of us who are naive in this area? "Zero-inflated count models" sounds very interesting.

    Cheers,
    Johnathon

    Leave a comment:


  • zhengz
    replied
    What about zero-inflated count models?

    I once checked the histograms of many variables, and many did not look like they followed a Poisson or negative binomial distribution, as there was an excessive number of zeros.

    Leave a comment:


  • jdanderson
    replied
    Hello all,

    I am also very interested in what everyone thinks about this. I've been debating changing the zeros to ones to get some usable output, especially in cases where the denominator is zero (but not the numerator), because the ratio is undefined otherwise.

    Any suggestions?

    Cheers,
    Johnathon

    Leave a comment:
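
    A tiny sketch of the pseudocount workaround mentioned above (adding 1 to avoid division by zero); the pseudocount of 1 and the toy counts are arbitrary, and, as the replies above note, this yields an estimate but says nothing about its precision:

    import math

    def log2_fold_change(count_a, count_b, pseudocount=1.0):
        """Log2 ratio with a pseudocount so zero denominators stay defined."""
        return math.log2((count_a + pseudocount) / (count_b + pseudocount))

    print(log2_fold_change(300, 0))  # ~8.2 instead of an undefined ratio
    print(log2_fold_change(3, 0))    # ~2.0: same zero denominator, far weaker evidence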


  • rnaseq
    replied
    Thanks for the reply lmf_bill!

    I am doing a time-course comparison, and typically the raw read counts under one condition are 0 and under another condition are about 300 or higher, so I know that there is at least a ~300-fold increase in expression, which is significant. I was wondering if there is a way to report this (approximate) numerical value rather than binning it as larger or smaller.

    I appreciate any additional feedback!

    Leave a comment:


  • lmf_bill
    replied
    You should be careful with these genes. In my opinion, you do not need to calculate the fold change. You can split these cases into two situations based on whether the expressed condition is above or below a threshold, e.g. RPKM >= 5 (one Nature paper uses this cutoff). If it is below the threshold, it is nothing; if it is above, it is a significant difference.

    Leave a comment:
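
    A sketch of the thresholding idea above: instead of a fold change, compute RPKM per condition and simply ask whether it clears a cutoff. The RPKM >= 5 cutoff comes from the post; the gene length and library sizes are invented for illustration:

    def rpkm(count, gene_length_bp, total_mapped_reads):
        """Reads per kilobase of transcript per million mapped reads."""
        return count / (gene_length_bp / 1e3) / (total_mapped_reads / 1e6)

    gene_length = 2_000                    # bp, hypothetical
    lib_a, lib_b = 10_000_000, 12_000_000  # total mapped reads per sample
    count_a, count_b = 0, 300

    threshold = 5.0
    rpkm_a = rpkm(count_a, gene_length, lib_a)
    rpkm_b = rpkm(count_b, gene_length, lib_b)
    print(rpkm_a >= threshold, rpkm_b >= threshold)  # False, True -> "off" vs. "on"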
