Hi,
I'm working with targeted Ion Torrent PGM data.
I would like to know whether there are methods to identify CNVs from this kind of data.
Thank you in advance
-
http://www.ncbi.nlm.nih.gov/pubmed/21701589 shows that both homozygous deletions and high-level amplifications can be identified from exome data.
-
CNVs with exome sequencing
I think that depends on the technology you use and what you want to do with your data. From what I understand, one would use whole-exome sequencing primarily for SNP detection and maybe for finding some small indels. In that case the normalisation is not required, since you are only looking at ratios or at reads that map with insertions/deletions, not at how many of them you have.
In theory you can still correct for mappability and GC content with this kind of data, but depending on your wet-lab protocol some additional effects might occur that make it hard to call ploidy / copy-number variation.
For example, if you use an exon array to capture the exonic DNA before sequencing, the binding affinity of the probes might play a role (if you're lucky it is mainly determined by the probe GC content and you can normalize it away later on). You might also get saturation of certain probes, which creates a theoretical maximum copy number that you can still detect.
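To make the GC point a bit more concrete, here is a minimal sketch of one common way to correct target GC bias: bin targets by GC fraction and divide each target's coverage by the median coverage of its bin. The DataFrame, column names and numbers are invented for illustration; this is just one possible approach, not the method of any particular tool.

# Minimal sketch of a per-target GC correction (illustrative data, hypothetical column names).
import pandas as pd

targets = pd.DataFrame({
    "coverage": [120.0, 80.0, 200.0, 95.0, 150.0, 60.0],
    "gc":       [0.35, 0.40, 0.55, 0.42, 0.60, 0.38],
})

# Bin targets by GC fraction, then divide by the per-bin median coverage
# so that systematic GC bias cancels out.
targets["gc_bin"] = pd.cut(targets["gc"], bins=5)
bin_median = targets.groupby("gc_bin", observed=True)["coverage"].transform("median")
targets["gc_corrected"] = targets["coverage"] / bin_median

print(targets[["gc", "coverage", "gc_corrected"]])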
I guess the inconclusive answer here is: It depends!
Cheers,
Paul
P.S.: Is there precedent for CNV calling with targeted sequencing? Also remember that you need a baseline of "normal" regions to check against if you want to determine the copy number of an interesting gene.
-
Paul,
one last question...
does the same reasoning (and the answer to my questions) also apply to targeted sequencing (target enrichment of selected gene regions, or whole-exome)?
Leave a comment:
-
Great! Your explanation is very clear!
Thanx a lot again,
Good work!
Maria Elena
-
Normalization
The kind of normalization I was thinking about is based on a property of the sequence and not relative to other samples.
For example, for two samples A and B with read counts rc_a and rc_b, you would maybe multiply the coverage in sample A by the ratio rc_b / rc_a to correct for the difference in library size.
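As a tiny illustration of that library-size correction (the read counts and coverages below are made up):

rc_a, rc_b = 40_000_000, 50_000_000            # total mapped reads in samples A and B (invented numbers)
coverage_a = [30.0, 45.0, 12.0]                # per-region coverage in sample A
scaled_a = [c * (rc_b / rc_a) for c in coverage_a]   # now on the same scale as sample B
print(scaled_a)                                # [37.5, 56.25, 15.0]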
Independent of this library-size correction for multiple samples, you might want to normalize for mappability within a single sample, for example like so:
corrected_cvg[i] = coverage[i] / mappability[i]
where coverage is the read count per position and mappability gives you, for each position i, the fraction of mappable positions in a window around that position i. The size of the window should be related to the length of your reads, since e.g. for a library with read length 100 the number of reads overlapping each position i can only be influenced by the mappability in the interval [i-100..i+100].
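In code that correction could look roughly like this (NumPy used for convenience; the coverage and mappability values are invented, and completely unmappable positions are masked instead of divided by zero):

import numpy as np

coverage = np.array([50.0, 48.0, 10.0, 0.0])     # reads covering each position
mappability = np.array([1.0, 0.95, 0.20, 0.0])   # window-averaged mappability around each position

# Divide coverage by mappability where it is non-zero, otherwise mark the position as undefined.
corrected = np.where(mappability > 0,
                     coverage / np.where(mappability > 0, mappability, 1.0),
                     np.nan)
print(corrected)   # [50.0, 50.5..., 50.0, nan] - the low-mappability position is scaled up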
Maybe have a look here for some ideas of what people do to normalize for GC content and mappability, e.g. the first PubMed hit on the subject.
-
Dear Paul,
thanks for the quick reply!
I work with human samples, so a reference genome and the corresponding annotation are available for my analysis.
I have only one question about your answer; probably I have not understood it well enough.
How can I normalize the coverage of one sample if I do not have the depth of other samples?
-
Depth of Coverage
Assuming that you have a reference genome for your organism, you can still spot such things by looking at the depth of coverage. In this way you will be able to see regions where your sample has more or fewer copies than the reference. (Basically: assume most regions are present exactly once, determine the expected coverage per copy of a region - this might be the expected coverage per two copies if your organism is diploid by default - and then find regions/windows/bins with a significantly different depth of coverage, indicating a change in copy number of that region.)
I think the alignment method can play an important role here (align each read only once, etc.), and you might also want to try some normalization for GC content and mappability to make things more comparable.
Admittedly, it's probably something you'll at least partly have to implement yourself (using R/Bioconductor, Python/HTSeq or Bio<YourFavouriteScriptingLanguageHere> ...), but maybe I'm just not aware of tools that already incorporate all of this functionality out of the box.
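For what it's worth, a very naive single-sample version of this idea might look like the sketch below. It is purely illustrative: the bin coverages are invented, the median-based per-copy estimate and the simple rounding are assumptions, and a real analysis would add GC/mappability correction, segmentation and proper statistics.

import numpy as np

ploidy = 2
bin_coverage = np.array([98, 102, 100, 51, 49, 205, 99, 101, 0, 97], dtype=float)

# Assume most bins sit at the default ploidy, so the median coverage corresponds
# to `ploidy` copies; express every bin relative to that expected coverage per copy.
coverage_per_copy = np.median(bin_coverage) / ploidy
copy_number = np.round(bin_coverage / coverage_per_copy).astype(int)

for cov, cn in zip(bin_coverage, copy_number):
    if cn != ploidy:
        print(f"coverage {cov:6.1f} -> estimated copy number {cn}")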
Cheers,
Paul
-
CNV from only one sample
Dear NGS users,
does anyone know if, and how, I can analyse CNVs in ONLY one sample from next-gen DNA sequencing, without controls or other samples for comparison?
Thanx a lot to everybody,
ME