I'm just starting on a new time series dataset (the last one for the PhD!). There's something odd happening with the dispersion estimates, though. I think it's something in my data doing this rather than anything I'm doing with DESeq, as something similar happens when I transform the data using voom and look at the mean-variance trend.
Does anyone know what would cause the estimated dispersions to split as the normalized mean increases? I've attached the plot produced by the code below.
I do have one sample that, even after normalisation, is almost twice the size of all the other libraries. I'm not sure why (I'm still looking into it), but I'm also not sure that alone would cause this weird forking of the variance.
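A quick way to quantify that size difference, sketched against the same `counts` matrix and `colData` used in the code below (the DESeq2 size-factor check is included only for comparison):

Code:
library(edgeR)
library(DESeq2)

# Raw library sizes relative to the mean; the oversized sample should stand out
libSizes <- colSums(counts)
round(libSizes / mean(libSizes), 2)

# TMM normalisation factors and library sizes per sample
dge_check <- calcNormFactors(DGEList(counts = counts), method = "TMM")
dge_check$samples

# DESeq2 size factors for the same libraries
dds_check <- DESeqDataSetFromMatrix(counts, colData = colData,
                                    design = ~ condition + time)
sizeFactors(estimateSizeFactors(dds_check))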
The DESeq2 code:
library(DESeq2)

# Full model with a condition:time interaction, tested against the reduced model by LRT
dds <- DESeqDataSetFromMatrix(counts, colData = colData,
                              design = ~ condition + time + condition:time)
dds <- DESeq(dds, test = "LRT", reduced = ~ condition + time)
plotDispEsts(dds, main = "Modelling dispersion for gene shrinkage")
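One follow-up I'm considering, to check whether that one library is driving the split (just a sketch; "S1" is a placeholder for the oversized sample's name):

Code:
# PCA on variance-stabilised counts, to see whether one sample sits apart
vsd <- vst(dds, blind = TRUE)
plotPCA(vsd, intgroup = c("condition", "time"))

# Re-estimate dispersions with the suspect library dropped ("S1" is hypothetical)
dds_sub <- dds[, colnames(dds) != "S1"]
dds_sub <- DESeq(dds_sub, test = "LRT", reduced = ~ condition + time)
plotDispEsts(dds_sub, main = "Dispersions without the oversized library")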
The voom code:
library(edgeR)
library(limma)

numberOfGenes <- length(counts[, 1])
time <- as.factor(rep(1:length(design$time), each = 3))
condition <- as.factor(rep(design$condition, each = 3))
modelDesign <- model.matrix(~ 0 + time)

dge <- DGEList(counts = counts)
dge <- calcNormFactors(dge, method = "TMM")   # TMM normalisation
v <- voom(dge, modelDesign, plot = TRUE)      # mean-variance trend plot
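And on the limma side, if that library does turn out to be the problem, down-weighting it rather than dropping it might be worth a try; again just a sketch, using the `v`, `dge` and `modelDesign` objects from above:

Code:
library(limma)

# Per-sample weights estimated from the voom object; an aberrant library
# usually gets a noticeably lower weight than the rest
aw <- arrayWeights(v, design = modelDesign)
round(aw, 2)

# Or combine the sample weights with the voom precision weights in one step
vq <- voomWithQualityWeights(dge, modelDesign, plot = TRUE)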
Any thoughts would be much appreciated.
Ben.