
  • DESeq vs DESeq2


    I get a lot more DE genes when I analyse my RNA-Seq dataset with DESeq2 than with the current version of DESeq, just using the default settings for both.

    It's a simple design of 10 disease vs 10 control samples, but I get an extra ~1500 genes identified as significantly DE at an FDR of 5% using DESeq2.

    I know DESeq2 has more powerful statistics etc., but is such a large difference likely, or could I be doing something wrong... (more likely!)?

    I'd appreciate any advice!


  • #2
    This is actually interesting. Would you mind sending me your count table? (Blank out gene names and obscure sample names if your data is confidential.)


    • #3
      Yes, that seems interesting to analyse. 1500 extra DE genes seems like a lot. I suppose you used the same count matrix as input?


      • #4
        Thanks Simon, I've sent you the count table via email if that's OK.

        Yep, it's the same count matrix.


        • #5
          I am actually seeing something quite similar.
          Using the same counts table, DESeq2 is giving vastly more differentially expressed genes than DESeq (at FDR 0.05).

          (3 biological rep vs 3 biological rep comparisons)

          Set1: 5469 DEG with DESeq, 13307 DEG with DESeq2
          Set2: 15123 DEG with DESeq, 22127 DEG with DESeq2
          Set3: 3818 DEG with DESeq, 10221 DEG with DESeq2

          That just seems a bit odd to me.
          (Using R 3.0.1)
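          (For anyone comparing these counts: "DEG at FDR 0.05" in both tools means genes whose Benjamini-Hochberg adjusted p-value, the padj column, is at or below 0.05. A minimal sketch of the BH adjustment in Python, with made-up p-values purely for illustration:)

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (as in R's p.adjust(method='BH'))."""
    n = len(pvals)
    # walk the p-values from largest to smallest, taking the running
    # minimum of p * n / rank (rank in ascending order of p)
    order = sorted(range(n), key=lambda i: pvals[i], reverse=True)
    adj = [0.0] * n
    running_min = 1.0
    for rank_from_top, i in enumerate(order):
        rank = n - rank_from_top
        running_min = min(running_min, pvals[i] * n / rank)
        adj[i] = running_min
    return adj

# illustrative p-values, not from this dataset
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
padj = bh_adjust(pvals)
# genes called DE at FDR 0.05 are those with padj <= 0.05
print([round(p, 4) for p in padj])
```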


          • #6
            Same thing for me: I'm getting a lot more significantly different genes (tried with two different projects). Though I assumed that was intended.


            • #7
              Well, yes, it is intended. DESeq had a rather ugly hack to ensure type-I error control, which cost a lot of power, and fixing this was the core aim of the development of DESeq2.

              It's only that, for the datasets we used to test it on, the improvement was not as dramatic, hence my surprise. However, we tested on designed experiments with cell cultures, and we expect the improvement to be more pronounced for experiments involving patient data with different genotypes, which show stronger within-group variability. So I guess this is all as it should be, and our improvement was just really needed.


              • #8
                Hi Simon,

                As a converse to the above:

                I used DESeq to analyse data from human T cells from controls (n=6) vs disease (n=5). The analysis that showed differential expression was based on a dispersion estimate using the method "per-condition" and sharing mode "fit-only".

                Having now analysed the same samples using the standard DESeq2 analysis, there is no differential expression between the two groups (at an FDR < 0.1).

                Is it reasonable to conclude that there is in fact no difference between the groups, as indicated by the DESeq2 analysis, and that the only reason DESeq gave a significant result was because the tweaks above rendered the analysis less robust?


                • #9
                  Why did you use the "fit-only" mode? Using DESeq with "fit-only" is not robust, which is why we advise against it except in special cases. The more robust analysis with DESeq2 is most likely closer to the truth.


                  • #10
                    Hi there. In my case, I got some genes that were significant (padj of 4.95e-10 in DESeq and 2.06326e-15 in edgeR) but came back NA in DESeq2. I am not sure when DESeq2 returns NA. I used the standard DESeq2 pipeline as written in the manual, and the counts for this gene are as follows:
                    7 30 60 24 35
                    127 188 101 236 116

                    And I did get some output from DESeq2; not all fields are NA:
                    baseMean log2FoldChange lfcSE pvalue padj
                    89.18039938 1.87590805 0.299465406 NA NA
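                    (Aside: the reported log2FoldChange looks plausible from the raw counts quoted above. A naive log2 ratio of the group means, ignoring size-factor normalization and shrinkage, lands in the same ballpark; a quick, purely illustrative Python check:)

```python
import math

# the counts quoted above, one row per condition
group_a = [7, 30, 60, 24, 35]
group_b = [127, 188, 101, 236, 116]

mean_a = sum(group_a) / len(group_a)   # 31.2
mean_b = sum(group_b) / len(group_b)   # 153.6

# naive log2 fold change of the raw group means
log2fc = math.log2(mean_b / mean_a)
print(round(log2fc, 3))  # roughly 2.3, same direction and ballpark as the reported 1.876
```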


                    • #11
                      Hi choishingwan,

                      Could you provide the version number for DESeq2? Also, could you provide the output of mcols(dds)[geneNumber, ]?

                      It could either be due to a count outlier (although it doesn't appear as if any of these are outliers) or due to independent filtering if you are using the latest development branch.

                      I plan to add a metadata column which would explain why a p-value is set to NA.


                      • #12
                        Hi Michael,

                        Thank you for your reply; my sessionInfo is as follows:

                        R version 3.0.1 (2013-05-16)
                        Platform: x86_64-w64-mingw32/x64 (64-bit)
                        [1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252    LC_MONETARY=English_United States.1252 LC_NUMERIC=C                           LC_TIME=English_United States.1252    
                        attached base packages:
                        [1] parallel  stats     graphics  grDevices utils     datasets  methods   base     
                        other attached packages:
                        [1] DESeq2_1.0.18           RcppArmadillo_0.3.900.0 Rcpp_0.10.4             lattice_0.20-15         Biobase_2.20.1          GenomicRanges_1.12.4    IRanges_1.18.2          BiocGenerics_0.6.0     
                        loaded via a namespace (and not attached):
                         [1] annotate_1.38.0      AnnotationDbi_1.22.6 DBI_0.2-7            genefilter_1.42.0    grid_3.0.1           locfit_1.5-9.1       RColorBrewer_1.0-5   RSQLite_0.11.4       splines_3.0.1        stats4_3.0.1         survival_2.37-4     
                        [12] XML_3.98-1.1         xtable_1.7-1

                        When using mcols(dds)[geneNumber,], I've got the following:

                        DataFrame with 1 row and 26 columns
                           baseMean   baseVar   allZero dispGeneEst dispGeneEstConv    dispFit dispersion  dispIter  dispConv dispOutlier   dispMAP Intercept condition_Control_vs_Case SE_Intercept SE_condition_Control_vs_Case WaldStatistic_Intercept
                          <numeric> <numeric> <logical>   <numeric>       <logical>  <numeric>  <numeric> <numeric> <logical>   <logical> <numeric> <numeric>                 <numeric>    <numeric>                    <numeric>               <numeric>
                        1   89.1804  4592.645     FALSE   0.1996928            TRUE 0.02251914  0.1365872         9      TRUE       FALSE 0.1365872  5.166256                  1.875908    0.2398174                    0.2994654                21.54245
                          WaldStatistic_condition_Control_vs_Case WaldPvalue_Intercept WaldPvalue_condition_Control_vs_Case WaldAdjPvalue_Intercept WaldAdjPvalue_condition_Control_vs_Case  betaConv  betaIter  deviance  maxCooks cooksOutlier
                                                        <numeric>            <numeric>                            <numeric>               <numeric>                               <numeric> <logical> <numeric> <numeric> <numeric>    <logical>
                        1                                 6.26419                   NA                                   NA                      NA                                      NA      TRUE         9  95.78522  2.211495         TRUE
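                        (Side note on the output above: the Wald statistic for the condition effect, 6.26419, is present, so the NA is a filtered p-value rather than a failed test. Assuming the usual two-sided standard-normal tail for a Wald statistic, which is an assumption here and not something stated in the thread, the p-value the statistic would imply can be sketched in Python:)

```python
import math

def two_sided_normal_p(wald_stat):
    """Two-sided p-value for a standard-normal Wald statistic.

    P(|Z| > z) = erfc(z / sqrt(2)) for Z ~ N(0, 1).
    """
    return math.erfc(abs(wald_stat) / math.sqrt(2.0))

# WaldStatistic_condition_Control_vs_Case from the output above
p = two_sided_normal_p(6.26419)
print(p)  # on the order of 1e-10: highly significant if it were not filtered
```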

                        It seems messy when I paste the output here; is there any way for me to make it more organized?
                        Last edited by choishingwan; 07-24-2013, 11:48 PM. Reason: Just make the output more organize


                        • #13
                          It seems messy when I paste the output here; is there any way for me to make it more organized?
                          Use the CODE markup:
                           Put "[CODE_]" (without the "_") before and "[/CODE_]" 
                          (again, without the "_") after the block to make it appear in monospace and
                          with line breaks at the right places.


                          • #14
                            The Cook's cutoff used here is the .75 quantile of the F(p, m-p) distribution. So for 2 parameters (intercept and condition) and 10 samples, we use a cutoff of:

                            > qf(.75, 2, 10 - 2)
                            [1] 1.656854

                            The sample with count 60 gets a Cook's distance of 2.211495, a high estimate (meaning the log fold change would change a lot if this sample were removed). Although, looking at the counts, I agree that giving this gene a p-value of NA is not desirable behaviour. I will investigate how we might better set this cutoff.
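                            (For readers without R at hand: because the numerator degrees of freedom here are 2, the F distribution's CDF has a closed form that can be inverted directly. A small Python sketch reproducing the same cutoff; treat it as illustrative:)

```python
def f_quantile_df1_2(p, d2):
    """Quantile of the F(2, d2) distribution.

    For numerator df = 2 the F CDF has the closed form
    1 - (1 + 2x/d2)**(-d2/2), which inverts to the expression below.
    """
    return (d2 / 2.0) * ((1.0 - p) ** (-2.0 / d2) - 1.0)

m, params = 10, 2                        # 10 samples, 2 model parameters
cutoff = f_quantile_df1_2(0.75, m - params)
print(round(cutoff, 6))                  # 1.656854, matching qf(.75, 2, 8) in R

# the reported maxCooks of 2.211495 exceeds this cutoff, hence the NA p-value
```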

                            You can set a higher cutoff manually or turn off the filtering by Cook's with:

                            dds <- estimateSizeFactors(dds)
                            dds <- estimateDispersions(dds)

                            # use a higher cutoff:
                            p <- 2   # number of model parameters
                            m <- 10  # number of samples
                            dds <- nbinomWaldTest(dds, cooksCutoff=qf(.95, p, m - p))

                            # or turn off the filtering by Cook's distance entirely:
                            dds <- nbinomWaldTest(dds, cooksCutoff=FALSE)
                            Last edited by Michael Love; 07-25-2013, 02:22 AM.


                            • #15
                              I see; maybe I should try turning off the Cook's filtering and see if these NAs still persist.

                              Thank you

