  • Pearson correlation in R

    Guys,
    I'm comparing two vectors, each containing RPKM values I calculated from ChIP-seq reads. I want to use the Pearson correlation to assess the similarity between two ChIP-seq datasets.

    The RPKM values are computed in 500 bp bins genome-wide for mouse, so there are more than 5 million bins in total. I checked the difference between the two RPKM files and the values are actually very similar, so presumably the Pearson correlation coefficient should be very close to 1.

    However, when I used cor.test() in R, the output is like this:
    Pearson's product-moment correlation

    data: a$V4 and b$V4
    t = -0.0037, df = 5309833, p-value = 0.9971
    alternative hypothesis: true correlation is not equal to 0
    95 percent confidence interval:
    -0.0008521632 0.0008489671
    sample estimates:
    cor
    -1.598034e-06
    I couldn't figure out why I got such a weird correlation. Does anyone have an idea why this is the case? Did I do something completely wrong?

    Thanks a lot!

  • #2
    Can you provide a scatterplot of a$V4 vs b$V4 (probably best to subsample)?

    Just two additional cents: Pearson correlations do not provide much information on the similarity of ChIP-seq samples, given that most of the data points reflect background reads. In other words, you get the correlation coefficient of the background noise.
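    A small simulation may make that concrete (made-up numbers, not real data): two samples that share only per-bin background bias, with completely disjoint peak sets, still come out highly correlated genome-wide.

```r
# Made-up sketch: two samples share per-bin background bias but have
# completely disjoint peak sets. The genome-wide Pearson correlation is
# still high, because it is dominated by the (shared) background.
set.seed(1)
n_bins <- 100000
bias   <- rgamma(n_bins, shape = 2, rate = 2)  # shared coverage bias per bin
a <- bias + rexp(n_bins, rate = 10)            # sample A = bias + noise
b <- bias + rexp(n_bins, rate = 10)            # sample B = bias + noise
a[1:500]       <- a[1:500] + 5                 # A's peaks: bins 1-500
b[90001:90500] <- b[90001:90500] + 5           # B's peaks: a disjoint set

cor(a, b)                        # high, despite zero peak overlap
cor(a[501:90000], b[501:90000])  # background-only bins: similarly high
```

    The genome-wide coefficient is essentially the coefficient of the background bins, which is the point above.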

    Comment


    • #3
      Simply use cor() instead of cor.test() (which performs a statistical test on top of reporting the correlation). You can use cor(x, y, method="spearman") for a Spearman rank correlation.

      However, I agree with mudshark's caveat; you shouldn't really trust the results, because most of the correlation will come from zero counts or low non-specific (noise) alignments.
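      A tiny illustration of the difference, with made-up vectors x and y:

```r
# cor() returns just the coefficient; cor.test() additionally runs a
# significance test (t statistic, confidence interval, p-value) and
# reports the same coefficient under "sample estimates".
x <- c(1.2, 0.0, 3.5, 0.8, 2.1)
y <- c(1.0, 0.1, 3.9, 0.7, 2.4)

cor(x, y)                       # Pearson coefficient only
cor(x, y, method = "spearman")  # rank-based alternative
ct <- cor.test(x, y)
ct$estimate                     # same coefficient as cor(x, y)
```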

      Comment


      • #4
        Originally posted by mudshark View Post
        Can you provide a scatterplot of a$V4 vs b$V4 (probably best to subsample)?

        Just two additional cents: Pearson correlations do not provide much information on the similarity of ChIP-seq samples, given that most of the data points reflect background reads. In other words, you get the correlation coefficient of the background noise.
        Thanks for your suggestion. I plotted a scatterplot and it didn't look right: only 3 dots on the plot. With help from people in the lab, I found a bug in my code for calculating RPKM; that problem is now fixed.

        Regarding the usefulness of Pearson correlations on ChIP-seq data, I agree that the majority of the regions will have an RPKM value of 0. Do you know how much this affects the interpretation of the correlation? What would you do to compare two ChIP-seq datasets? Do you call peaks and calculate the overlap between them?

        My purpose here is to assess how saturated my ChIP-seq data is. So I split the ChIP-seq data for ONE sample into 2 subsets, randomly sample 10%, 20%, 40%, 60% etc. of the reads from each subset, and use the Pearson correlation between the RPKM values (two vectors) of the 2 subsets as a measure of saturation. I felt the Pearson correlation would do the job for this purpose. What do you think?
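        Something like the following sketch, with simulated reads standing in for the real data (the per-bin intensities, read depths and fractions below are all made up):

```r
# Simulated stand-in for the procedure above: one sample is split into two
# read sets drawn from the same per-bin intensity; we subsample increasing
# fractions of each split and track the correlation of the binned counts.
set.seed(42)
n_bins    <- 1000
intensity <- rgamma(n_bins, shape = 2, rate = 2)  # "true" per-bin signal
reads_1   <- sample(n_bins, 50000, replace = TRUE, prob = intensity)
reads_2   <- sample(n_bins, 50000, replace = TRUE, prob = intensity)

bin_counts <- function(reads) tabulate(reads, nbins = n_bins)

fractions  <- c(0.1, 0.2, 0.4, 0.6, 0.8, 1.0)
saturation <- sapply(fractions, function(f) {
  s1 <- sample(reads_1, f * length(reads_1))
  s2 <- sample(reads_2, f * length(reads_2))
  cor(bin_counts(s1), bin_counts(s2))
})
round(saturation, 3)  # rises toward a plateau as depth increases
```

        If the curve has flattened before the 100% point, the library is close to saturated in this sense.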

        Thanks so much!
        Last edited by gene_x; 02-20-2013, 01:46 PM.

        Comment


        • #5
          Originally posted by kopi-o View Post
          Simply use cor() instead of cor.test() (which performs a statistical test on top of reporting the correlation). You can use cor(x, y, method="spearman") for a Spearman rank correlation.

          However, I agree with mudshark's caveat; you shouldn't really trust the results, because most of the correlation will come from zero counts or low non-specific (noise) alignments.
          cor() and cor.test() actually gave the same correlation coefficient.

          My problem is fixed now. If you are interested.. please take a look at my reply to mudshark's post and let me know what your thoughts are.

          Thanks a lot!

          Comment


          • #6
            I would recommend looking at some tools for ChIP-seq data comparison, such as MAnorm (http://bcb.dfci.harvard.edu/~gcyuan/MAnorm/MAnorm.htm), or perhaps SeqMiner (http://bips.u-strasbg.fr/seqminer/tiki-index.php) or CHANCE (http://www.ncbi.nlm.nih.gov/pubmed/23068444).

            The thing is that ChIP-seq data is actually mostly noise (to simplify a bit, only a small fraction of the reads fall in peaks), so one needs to be careful to really look at the informative part of the data.

            Comment


            • #7
              Originally posted by kopi-o View Post
              I would recommend looking at some tools for ChIP-seq data comparison, such as MAnorm (http://bcb.dfci.harvard.edu/~gcyuan/MAnorm/MAnorm.htm), or perhaps SeqMiner (http://bips.u-strasbg.fr/seqminer/tiki-index.php) or CHANCE (http://www.ncbi.nlm.nih.gov/pubmed/23068444).

              The thing is that ChIP-seq data is actually mostly noise (to simplify a bit, only a small fraction of the reads fall in peaks), so one needs to be careful to really look at the informative part of the data.
              Thanks for the links, I'll take a look at them!

              Comment


              • #8
                Another tool to try is the Bioconductor package DiffBind:


                It gives you a number of ways to compare ChIP-seq samples using a variety of scores and normalisations. It will do exactly these types of correlations and show how the samples cluster. The vignette walks through an example in some detail.

                It does rely on some set of intervals of interest being identified beforehand. You can use traditional peak calling or a windowing scheme. To compare replicates we often use all the promoters (windows 1000 bp upstream of each TSS); while this isn't always appropriate, it usually gives a good idea of how similar the signal in different samples is.
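                The promoter-window step can be sketched in base R (the TSS table below is made up; real coordinates would come from your annotation, e.g. a GTF file):

```r
# Base-R sketch with a made-up TSS table: build a 1000 bp window upstream
# of each TSS, flipping direction for minus-strand genes (upstream of a
# minus-strand TSS lies at higher coordinates).
tss <- data.frame(
  chrom  = c("chr1", "chr1", "chr2"),
  pos    = c(5000, 12000, 8000),
  strand = c("+", "-", "+")
)

promoter_windows <- data.frame(
  chrom  = tss$chrom,
  start  = ifelse(tss$strand == "+", tss$pos - 1000, tss$pos + 1),
  end    = ifelse(tss$strand == "+", tss$pos - 1,    tss$pos + 1000),
  strand = tss$strand
)
promoter_windows
```

                With Bioconductor, promoters() from GenomicRanges (with upstream = 1000, downstream = 0) should give equivalent windows from a GRanges of TSSs, though the exact boundary conventions are worth double-checking against your annotation.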

                Comment


                • #9
                  Originally posted by rory View Post
                  Another tool to try is the Bioconductor package DiffBind:


                  It gives you a number of ways to compare ChIP-seq samples using a variety of scores and normalisations. It will do exactly these types of correlations and show how the samples cluster. The vignette walks through an example in some detail.

                  It does rely on some set of intervals of interest being identified beforehand. You can use traditional peak calling or a windowing scheme. To compare replicates we often use all the promoters (windows 1000 bp upstream of each TSS); while this isn't always appropriate, it usually gives a good idea of how similar the signal in different samples is.
                  OK. Thanks for the info.

                  Comment


                  • #10
                    Originally posted by rory View Post
                    Another tool to try is the Bioconductor package DiffBind:


                    It gives you a number of ways to compare ChIP-seq samples using a variety of scores and normalisations. It will do exactly these types of correlations and show how the samples cluster. The vignette walks through an example in some detail.

                    It does rely on some set of intervals of interest being identified beforehand. You can use traditional peak calling or a windowing scheme. To compare replicates we often use all the promoters (windows 1000 bp upstream of each TSS); while this isn't always appropriate, it usually gives a good idea of how similar the signal in different samples is.
                    Hi Rory,

                    I found this thread through Google because I want to do exactly what you suggested using DiffBind. However, a "score" column is needed to create a dba object at the beginning. Obviously, the artificial BED files containing all promoters cannot have a score column. Please let me know how you import promoter regions into DiffBind.

                    Many thanks.
                    Xin
                    Mount Sinai School of Medicine

                    BTW, I tried to use "." in the score column and got the following error message:
                    Error in Summary.factor(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, : ‘max’ not meaningful for factors

                    Comment


                    • #11
                      Try:

                      > samples <- dba(sampleSheet="samples.csv", scoreCol=0)

                      This tells DiffBind that there are no scores. Note that everything will get an identical score of 1, so clustering/PCA plots won't be very interesting until you get read-based scores using dba.count().

                      Cheers-

                      Rory

                      Comment
