I'm working with whole exome sequencing data from a mouse breast cancer model. I've done mutation calling using the GATK best practices workflow (BWA-MEM against mm10 -> Picard MarkDuplicates -> base quality score recalibration -> Mutect2), and annotated the VCFs with SnpEff. I now want to compare my variants to the COSMIC database to see whether the orthologous variants have been found in human, and I'm having trouble figuring out the most appropriate way to do this.
My first thought was to convert my VCF files to BED files and use the UCSC liftOver tool to convert the coordinates from mm10 to hg19; a rough sketch of that idea is below.
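Something like this is what I had in mind, sketched here with the pyliftover package rather than BED files plus the liftOver binary (the coordinate is just an example, and pyliftover expects 0-based positions):

```python
from pyliftover import LiftOver

# Fetches the mm10 -> hg19 chain file from UCSC on first use.
lo = LiftOver('mm10', 'hg19')

# Example coordinate only (0-based, as pyliftover expects); in practice
# this would loop over the positions in the Mutect2 VCF.
hits = lo.convert_coordinate('chr11', 69583251)
if hits:
    chrom, pos, strand, score = hits[0]
    print(f"mm10 chr11:69583252 -> hg19 {chrom}:{pos + 1} ({strand})")
else:
    print("Position not covered by the chain file (no human equivalent)")
```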
However, I've also found a publication in which they perform what I'm looking to do, and in their methods they write:

Variants were compared to known human somatic mutations as available via the COSMIC database [38]. Briefly, the mouse and human sequences for homologous proteins were pairwise aligned using Clustal Omega [39] and the human protein position homologous to the mouse mutation was used to query COSMIC for known missense and nonsense mutations at or surrounding this peptide position. Local conservation was determined after sequence alignment using a ±10 amino acid residue window surrounding the substituted amino acid.
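If I've understood the position-mapping step correctly, it boils down to something like the sketch below. I've used Biopython's PairwiseAligner here purely as a stand-in for Clustal Omega, and the sequences and mutation position are made-up placeholders:

```python
from Bio import Align
from Bio.Align import substitution_matrices

# Placeholders: in practice these would be the full-length mouse and human
# protein sequences for the gene carrying the mutation (e.g. from Ensembl).
mouse_prot = "MTEYKLVVVGADGVGKSALTIQLIQNHFVDEYDPTIEDSYRKQ"
human_prot = "MTEYKLVVVGAGGVGKSALTIQLIQNHFVDEYDPTIEDSYRKQ"
mouse_mut_pos = 12  # 1-based residue number of the mouse substitution

aligner = Align.PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10
aligner.extend_gap_score = -0.5
aln = aligner.align(mouse_prot, human_prot)[0]

# aln.aligned gives matched (start, end) blocks for (mouse, human);
# find the block containing the mutated residue and carry over its offset.
human_pos = None
for (m_start, m_end), (h_start, h_end) in zip(*aln.aligned):
    if m_start <= mouse_mut_pos - 1 < m_end:
        human_pos = h_start + (mouse_mut_pos - 1 - m_start) + 1  # back to 1-based
        break

if human_pos is None:
    print("Mutated residue aligns to a gap; no homologous human position")
else:
    print(f"Mouse residue {mouse_mut_pos} maps to human residue {human_pos}")
```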
I've looked into Clustal Omega itself, but I'm having trouble understanding how to implement it into this sort of workflow. For the COSMIC lookup step, my rough plan is sketched below. I'm wondering if anyone has done this sort of analysis, and what workflow you've had the most success with?
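For that COSMIC step, I was picturing a filter over a downloaded CosmicMutantExport.tsv rather than the web interface. The column names here are from memory and may differ between COSMIC releases, and the gene symbol and residue number are placeholders:

```python
import pandas as pd

gene = "TP53"     # placeholder: human gene symbol for the mouse hit
human_pos = 175   # placeholder: residue number mapped from the alignment
window = 10       # +/- 10 residues, as in the quoted methods

# CosmicMutantExport.tsv is the mutation export from COSMIC; only the
# columns needed here are loaded (names may vary by release).
cols = ["Gene name", "Mutation AA", "Mutation Description"]
cosmic = pd.read_csv("CosmicMutantExport.tsv", sep="\t", usecols=cols, low_memory=False)

hits = cosmic[cosmic["Gene name"] == gene].copy()
# Pull the residue number out of annotations like "p.R175H" or "p.R196*".
hits["aa_pos"] = hits["Mutation AA"].str.extract(r"p\.[A-Z](\d+)", expand=False).astype(float)

near = hits[
    hits["aa_pos"].between(human_pos - window, human_pos + window)
    & hits["Mutation Description"].str.contains("Missense|Nonsense", case=False, na=False)
]
print(near[["Gene name", "Mutation AA", "Mutation Description"]].drop_duplicates())
```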
Thanks!