  • simobioinfo
    replied
Hi,
I'm working with targeted Ion Torrent PGM data.
I would like to know whether there are methods to identify CNVs from this kind of data.
Thank you in advance



  • krobison
    replied
http://www.ncbi.nlm.nih.gov/pubmed/21701589 shows that both homozygous deletions and high-level amplifications can be identified from exome data.



  • Dethecor
    replied
    CNVs with exome sequencing

I think that depends on the technology you use and what you want to do with your data. From what I understand, one would use whole-exome sequencing primarily for SNP detection and maybe for finding some small indels. In that case the normalisation is not required, since you're only looking for ratios, or for reads that map with insertions / deletions, but not for how many of those you have.

In theory you can still correct for mappability and GC-content with this kind of data, but depending on your wet-lab protocol some additional effects might occur that would make it hard to call ploidy / copy number variation.
For example, if you use an exon array to extract all the exonic DNA before sequencing, the binding affinity of the probes might play a role (if you're lucky it's mainly determined by the probe GC-content, and you can normalise it away later on). You might also get saturation of certain probes (creating a theoretical maximum copy number that you could still detect).
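    To make that "normalise it away" idea concrete, here is a toy Python sketch of a GC-binned correction (the function name, bin count and data are all illustrative, not a published method): scale each target's coverage by the ratio of the global median coverage to the median coverage of its GC bin, so that AT-rich and GC-rich targets end up on a common scale.

    ```python
    # Illustrative GC-binned coverage correction (hypothetical helper):
    # scale each target's coverage by global_median / median_of_its_GC_bin.
    from statistics import median

    def gc_correct(coverages, gc_fracs, nbins=10):
        global_med = median(coverages)
        # group targets into GC bins
        bins = {}
        for cov, gc in zip(coverages, gc_fracs):
            b = min(int(gc * nbins), nbins - 1)
            bins.setdefault(b, []).append(cov)
        bin_med = {b: median(v) for b, v in bins.items()}
        return [cov * global_med / bin_med[min(int(gc * nbins), nbins - 1)]
                for cov, gc in zip(coverages, gc_fracs)]

    # AT-rich targets at coverage 10 and GC-rich targets at coverage 20
    # are pulled to a common scale:
    print(gc_correct([10, 10, 20, 20], [0.3, 0.3, 0.7, 0.7]))
    # [15.0, 15.0, 15.0, 15.0]
    ```

    Real pipelines usually fit a smooth curve (e.g. loess) rather than hard bins, but the principle is the same.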

    I guess the inconclusive answer here is: It depends!

    Cheers,
    Paul

p.s.: Is there precedent for CNV calling with targeted sequencing? Also remember that you need a set of "normal" regions to compare against if you want to determine the copy number of an interesting gene.



  • m_elena_bioinfo
    replied
Paul,
one last question:
does the same reasoning (and the answers to my questions) also apply to targeted sequencing (target enrichment of some gene regions, or whole-exome)?



  • m_elena_bioinfo
    replied
    Great! Your explanation is very clear!
    Thanx a lot again,
    Good work!
    Maria Elena



  • Dethecor
    replied
    Normalization

The kind of normalization I was thinking of is based on a property of the sequence, not on a comparison to other samples.
For example, for two samples A and B with total read counts rc_a and rc_b, you might multiply the coverage in sample A by the ratio rc_b / rc_a to correct for the difference in library size.

    Independent of this library-size correction for multiple samples you might want to normalize for mappability within a single sample, for example like so:

    corrected_cvg[i] = coverage[i] / mappability[i]

where coverage is the read count per position and mappability gives you, for each position i, the percentage of mappable positions in a window around that position. The size of the window should be related to the length of your reads, since e.g. for a library with read length 100, the number of reads overlapping a position i can only be influenced by the mappability in the interval [i-100..i+100].

Maybe have a look at the literature for some ideas of what people do to normalize for GC-content and mappability, e.g. the first PubMed hit on the subject.
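    Putting the two corrections above into a small Python sketch (the function names are illustrative; mappability values are assumed to be fractions in (0, 1]):

    ```python
    # Toy sketch of the two normalizations described above.

    def library_size_factor(rc_a, rc_b):
        """Scale factor for sample A, given total read counts rc_a and rc_b."""
        return rc_b / rc_a

    def mappability_correct(coverage, mappability):
        """Divide per-position coverage by the local mappability fraction."""
        return [c / m if m > 0 else 0.0
                for c, m in zip(coverage, mappability)]

    # a position whose surrounding window is only 25% mappable gets boosted 4x:
    print(mappability_correct([10, 12, 3, 8], [1.0, 1.0, 0.25, 0.8]))
    # [10.0, 12.0, 12.0, 10.0]
    ```

    Positions with mappability 0 are set to 0.0 here rather than dividing by zero; in practice you would probably mask them out entirely.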



  • m_elena_bioinfo
    replied
Dear Paul,
thanks for the quick reply!
I work with human samples, so a reference genome and its annotation are available for my analysis.
I have only one question about your answer; I probably haven't understood it well enough.
How can I normalize the coverage of one sample if I don't have the depth of other samples?



  • Dethecor
    replied
    Depth of Coverage

Assuming that you have a reference genome for your organism, you can still spot such things by looking at the depth of coverage. This way you will be able to see regions where your sample has more or fewer copies than the reference. (Basically: assume most regions are present exactly once, determine the expected coverage per copy of a region - this might be the expected coverage per two copies if your organism is diploid by default - and then find regions/windows/bins with a significantly different depth of coverage, indicating a change in copy number of that region.)
I think the alignment method can play an important role here (align each read only once, etc.), and you might also want to try some normalization for GC-content and mappability to make things more comparable.

Admittedly, it's probably something you'll at least partly have to implement yourself (using R/Bioconductor, Python/HTSeq or Bio<YourFavouriteScriptingLanguageHere> ...), but maybe I'm just not aware of tools that already incorporate all of this functionality out of the box.
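    As a starting point for such a do-it-yourself implementation, here is a rough Python sketch of the windowed depth-of-coverage idea (window size, ploidy and the min_ratio cutoff are arbitrary illustrative choices, not recommendations): bin the per-base depths, take the median bin as the "normal" level - assuming most of the genome is at normal copy number - and report bins whose implied copy number deviates strongly.

    ```python
    # Rough single-sample CNV spotting via depth of coverage (toy sketch).

    def call_cnv_bins(depths, window=100, ploidy=2, min_ratio=1.5):
        # mean depth per non-overlapping window
        bins = [sum(depths[i:i + window]) / window
                for i in range(0, len(depths) - window + 1, window)]
        expected = sorted(bins)[len(bins) // 2]  # median bin = "normal" level
        calls = []
        for idx, b in enumerate(bins):
            copies = ploidy * b / expected
            if copies >= ploidy * min_ratio or copies <= ploidy / min_ratio:
                calls.append((idx * window, (idx + 1) * window, round(copies)))
        return calls

    # a 100 bp stretch at double depth shows up as a 4-copy call:
    print(call_cnv_bins([20] * 300 + [40] * 100 + [20] * 200))
    # [(300, 400, 4)]
    ```

    A real implementation would apply the GC/mappability normalization first and use a proper statistical test or segmentation instead of a fixed ratio cutoff.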

    Cheers,
    Paul



  • m_elena_bioinfo
    started a topic CNV from only one sample

    CNV from only one sample

Dear NGS users,
does anyone know if, and how, I can analyse CNVs in ONLY one sample from next-gen DNA sequencing, without controls or other samples for comparison?

    Thanx a lot to everybody,
    ME
