  • Permutation Strategies for RAD population genomics

    I have a set of RAD data sampled from many individuals from several populations and I would like to calculate various population genetics statistics continuously along the genome with the ultimate goal of identifying regions of the genome that may be under selection.

    I currently have a vcf file containing all genotyped bases, including invariant sites, and I have developed a few scripts to calculate the stats I'm interested in from that file. Calculating the statistics once is easy, but the literature suggests that this is not enough. The GWAS literature typically permutes the case/control (in my situation, "population") labels and recalculates the statistic, or generates a bootstrap sample by resampling individuals with replacement and recalculates, then repeats this process many times to produce a smooth distribution for that statistic under neutral expectations.
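    To be concrete about what I mean by the label-permutation part, here is a minimal toy sketch (not my actual scripts; the statistic, input shapes, and permutation count are all placeholders):

```python
import random

def permutation_null(values, labels, stat_fn, n_perm=1000, seed=1):
    """Build a null distribution for stat_fn by shuffling population labels.

    values:  per-individual values at one locus (placeholder input shape)
    labels:  population label per individual
    stat_fn: callable taking (values, labels) -> float
    """
    rng = random.Random(seed)
    shuffled = list(labels)
    null = []
    for _ in range(n_perm):
        rng.shuffle(shuffled)  # break the genotype/population association
        null.append(stat_fn(values, shuffled))
    return null

# toy statistic: absolute difference in population means
def mean_diff(vals, labs):
    a = [v for v, l in zip(vals, labs) if l == "A"]
    b = [v for v, l in zip(vals, labs) if l == "B"]
    return abs(sum(a) / len(a) - sum(b) / len(b))

vals = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]
labs = ["A", "A", "A", "B", "B", "B"]
observed = mean_diff(vals, labs)
null = permutation_null(vals, labs, mean_diff, n_perm=999)
# one-sided empirical p-value with the standard +1 correction
p = (1 + sum(n >= observed for n in null)) / (1 + len(null))
```

    The real version would loop this over every locus in the vcf, which is exactly why I'm worried about how many permutations are feasible.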

    This is pretty much what Stacks already does very nicely. However, for several reasons, Stacks won't work with my type IIB RAD (2bRAD) data.

    And I cannot use the otherwise excellent vcftools package to calculate some of my statistics, such as pi or Tajima's D, because it assumes that any base omitted from the vcf (i.e., not genotyped) is invariant, which is most definitely not true for my data. Additionally, vcftools is more of a first-pass tool; while I could probably work it into a pipeline, it doesn't do permutation/bootstrapping/etc. by itself.

    I am hoping that someone may have encountered similar problems using GWAS approaches for population genetics/population genomics and may have some advice for me.

    Namely,

    1) Are there any existing tools besides Stacks and BayeScan (model-based approach for Fst) that do what I'm trying to do already?

    2) Is this permute-bootstrap-recalculate per-locus strategy the best approach? Basically, my strategy is to develop a null distribution for each locus. Hohenlohe et al. (2010), at least for pi, heterozygosity, Fst, and rho, instead generated a genome-wide null distribution by calculating the stats once per locus, then sampling those values from across the genome, weighting for sample size, to develop the genome-wide null distribution.
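    For comparison, my understanding of the genome-wide variant boils down to something like the following sketch (a hypothetical simplification of the Hohenlohe-style resampling, with made-up per-locus values and sample sizes, not their exact procedure):

```python
import random

def genomewide_null(locus_stats, locus_n, n_draws=10000, seed=1):
    """Resample per-locus statistic values across the genome,
    weighting each locus by its sample size (sketch only)."""
    rng = random.Random(seed)
    # random.choices draws with replacement, proportional to weights
    return rng.choices(locus_stats, weights=locus_n, k=n_draws)

stats = [0.01, 0.05, 0.02, 0.30, 0.04]   # e.g. per-locus pi (made up)
nsamp = [40, 38, 12, 35, 40]             # individuals genotyped per locus
null = genomewide_null(stats, nsamp)

# one way to use it: an empirical genome-wide cutoff, e.g. top 5%
null_sorted = sorted(null)
cutoff = null_sorted[int(0.95 * len(null_sorted))]
```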

    As a third option, someone suggested that simply testing for autocorrelation/LD among my pop gen statistics may be enough (i.e., "do I expect to see these three SNPs with a value greater than X as neighbors, or could this happen by chance?"). I'm not convinced; also, this approach necessitates defining whatever "X" cutoff you're using in some manner.

    Which approach is more robust? I think that the per-locus approach may be more conservative, and may also produce fewer false positives... but I am also terrible at statistics. Thoughts? Alternatives?

    3) Regardless of strategy, how many times should the resampling process be repeated? Until the null distribution "looks smooth enough"? How can this be tested?

    4) What are the benefits of using a sliding window approach? I understand the kernel smoothing idea where one weights each point's value by both the value and distance of the statistics at neighboring SNPs, and I like it -- but is it appropriate to use if my null distribution is locus-specific and not genome-wide?
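    Just so we're talking about the same thing, the kind of kernel smoothing I have in mind is roughly this (a bare-bones Gaussian smoother over SNP positions; the bandwidth and data are arbitrary placeholders):

```python
import math

def kernel_smooth(positions, values, center, bandwidth=150_000):
    """Gaussian-kernel-weighted average of a statistic around `center`.

    positions: SNP coordinates in bp; values: the per-SNP statistic.
    """
    wsum = vsum = 0.0
    for pos, val in zip(positions, values):
        w = math.exp(-0.5 * ((pos - center) / bandwidth) ** 2)
        wsum += w
        vsum += w * val
    return vsum / wsum

pos = [100, 50_000, 100_000, 400_000]   # made-up SNP positions
fst = [0.10, 0.12, 0.50, 0.05]          # made-up per-SNP Fst
smoothed = kernel_smooth(pos, fst, center=75_000)
```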

    Thank you for any hints, comments or advice you can give me!
    Roxana

  • #2
    2bRAD, Stacks, and Fst

    Hi Roxana,

    What about the 2bRAD data makes it not useable in Stacks? I know that the demultiplexing/cleaning program in Stacks (process_radtags) does not yet support 2bRAD data, but I assume you have cleaned and demultiplexed your data already. Are there other factors that make Stacks unable to process it?

    I don't have any 2bRAD data myself but would like to see it supported by our software.

    Best,

    julian
