Coverage around genomic feature - large amount of data



    Hi,

    I have an alignment of human data (a small BAM file with fewer than 1M reads), and I would like to calculate and plot the coverage around specific genomic features.

    Specifically, I would like to calculate coverage from 100 kb upstream to 100 kb downstream of each Transcription Start Site (TSS), in windows of 1,000 bp. I tried different methods, mostly with R and GRanges, but couldn't get results because of the size of the output: there are around 200,000 TSSs, so you can imagine how gigantic the output files become.
    An average of the values for each window would also work (i.e., for each 1 kb window between -100 kb and +100 kb around each TSS, I can plot the average over all windows at the same distance from their TSS).

    Is there any program, or memory-saving method, to do this?

    Thanks in advance.
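
For the averaged-profile variant, one memory-saving idea (sketched here in Python with numpy; this is an illustration of the accumulation strategy, not a tool from the thread) is to keep only a running per-window sum as you stream over TSSs, so you never hold the full ~200,000 × 200 matrix or write per-TSS files:

```python
import numpy as np

FLANK = 100_000            # bp flanking the TSS on each side
BIN = 1_000                # window width in bp
N_BINS = 2 * FLANK // BIN  # 200 windows spanning -100 kb .. +100 kb

def mean_profile(coverage_vectors):
    """Average coverage per 1 kb window across TSSs, keeping only a
    running sum (one 200-element array) in memory instead of the
    full (n_TSS x 200) matrix."""
    total = np.zeros(N_BINS)
    n = 0
    for cov in coverage_vectors:  # per-base coverage around one TSS, length 2*FLANK
        # collapse each 1 kb window to its mean, then add to the running sum
        total += cov.reshape(N_BINS, BIN).mean(axis=1)
        n += 1
    return total / n              # average profile across all TSSs
```

In practice `coverage_vectors` would be a generator yielding per-TSS coverage pulled region-by-region from the BAM (e.g. via pysam), so peak memory stays at one region plus one 200-element accumulator.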

  • #2
    If you want to stay inside R, the `EnrichedHeatmap` package (an extension of `ComplexHeatmap` for genomics) in Bioconductor is nice for rendering the type of heatmap you're trying to make. The matrix you're generating shouldn't be too big in memory on a modern machine. What's probably causing you issues is rendering the heatmap inside R. I have yet to find a good solution to this in R: ComplexHeatmap and EnrichedHeatmap are slow. You can speed them up by rendering as a raster, at the cost of a reduced-quality image (see here).

    Another option is to use deepTools' plotHeatmap, but it will also take a while to render the image.
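
To illustrate the rasterization point outside R: drawing one vector element per matrix cell is what makes 200,000-row heatmaps slow, whereas a raster renderer draws the matrix as pixels in roughly constant time. A minimal sketch with matplotlib's `imshow` (synthetic data; the output filename is arbitrary):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; render straight to a file
import matplotlib.pyplot as plt

# synthetic stand-in for a (n_TSS x n_bins) coverage matrix
rng = np.random.default_rng(1)
mat = rng.poisson(5, size=(2000, 200)).astype(float)

fig, ax = plt.subplots(figsize=(4, 6))
# imshow rasterizes the whole matrix as pixels, so render time does not
# grow per-cell the way vector-drawn heatmaps do
im = ax.imshow(mat, aspect="auto", interpolation="nearest", cmap="viridis")
ax.set_xlabel("distance from TSS (1 kb bins, -100 kb to +100 kb)")
ax.set_ylabel("TSS")
fig.colorbar(im, ax=ax, label="coverage")
fig.savefig("tss_heatmap.png", dpi=150)
```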
