  • bioBob
    replied
    If you are going to use large but exact matches, why not use blast and increase the word size? It goes pretty fast at, say, 19.



  • gsgs
    replied
    yes, but why are they all doing it the wrong way,
    despite so much research and so many papers?



  • Jeremy
    replied
    Surely blast or CD-HIT would be easier than coding your own algorithm?



  • gsgs
    replied
    if n is the number of nucleotides in the smaller of the 2 files to be compared,
    then the memory requirement is ~4*n bits, or n/2 bytes.
    I tried it with the 15-substring table (4^15 bits = 134 MB) on human chromosomes of length
    > 200 Mbp and it seemed to work well.
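    The quoted table size is easy to verify; a quick sanity check in plain Python, assuming nothing beyond the post's one-bit-per-possible-k-mer table:

```python
# One bit per possible DNA k-mer: the table needs 4^k bits.
for k in (15, 16, 17):
    nbytes = 4 ** k // 8
    print(f"k={k}: 4^{k} bits = {nbytes:,} bytes = {nbytes / 1e6:,.0f} MB")
# k=15 gives 134,217,728 bytes, i.e. the ~134 MB mentioned above.
```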

    False hits are usually substrings with repeats, or with a high content of only one or two
    nucleotides, e.g. a high content of the nucleotide T. Now I need a database of such frequent
    "false" hits so I can exclude them ... does one exist?
    It's no big problem if some real hits are excluded too: the sequences are long, and there will
    be other hits if there is real common ancestry.

    searching for the mentioned software, I found:


    so many programs ...
    I still don't understand why anyone would use a different method, at least as a first step
    to reduce the set of possible candidates.
    What could be faster than essentially just the time required to load the data into memory (O(n))?
    Memory caching could become a problem with GB-scale sets, but I haven't seen that yet.

    ----------------------------------------------------------------------------------------------

    e.g. comparing human chromosome 1 (parameters 30,15):

    224999690 15-substrings were read from 1 sequence from file f:\hg18\chr01
    these gave 136840909 (=60.82%) different markings in the table

    chimpanzee
    217189828 15-substrings were read from file f:\chimp\chr01
    192230629 (=88.51%) of these were marked in the table
    155119260 (=71.42%) matching 30-15-substrings were found

    gorilla
    212549001 15-substrings were read from file f:\gorill\chr01
    186188324 (=87.60%) of these were marked in the table
    143706656 (=67.61%) matching 30-15-substrings were found

    macaca mulatta
    219576101 15-substrings were read from file f:\macmul\chr01
    139018325 (=63.31%) of these were marked in the table
    54682566 (=24.90%) matching 30-15-substrings were found

    human chromosome 2 (unrelated)
    237709794 15-substrings were read from file f:\hg18\chr02
    123413391 (=51.92%) of these were marked in the table
    31633817 (=13.31%) matching 30-15-substrings were found
    (repetitions, unusual strings, etc.)



  • xied75
    replied
    Ok, you are using a moving window of size 15 to build the hash. That means you can't have a mismatch or gap within those 15 bp (they are the key used to look up the hash table); you can have gaps or mismatches between multiple hits, but not within any 15 bp hit.

    This will take too much memory once the set is large.

    This is almost the scheme of the first-generation aligners, including ELAND, MAQ, and SOAPv1.

    If you want a fast inexact solution, I'd put one set into FASTA and call bwa index on it, then put the other set into FASTQ and call bwa aln. (Many people do this.)



  • gsgs
    replied
    --------------------------------------------------------------------------------

    > What you are doing is 'Hash Join' in RDBMS.

    Thanks.

    I give the number (and %) of matching 15-substrings and the number of matching 30-15-substrings
    (substrings of length 30, each of whose 16 15-substrings is marked in the table),
    with an option to print the matching 30-15-substrings, the record number
    if it's a fasta file, and the position within the record.

    You could allow for gaps, e.g. by requiring only 15 or 14 of the 16 substrings to be marked,
    or by marking twice the number of strings from file 1 (a gap somewhere), or similar.
    But I don't feel that this would improve things a lot.
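    That relaxed criterion is easy to express; a hypothetical sketch (the function name, the toy k=3, and the threshold are my choices, not code from the thread) that accepts a window when at least a given number of its overlapping k-mers are marked:

```python
def window_passes(window, marked, k, min_marked):
    """Accept `window` if at least `min_marked` of its overlapping k-mers
    appear in the `marked` set (e.g. 14 or 15 of the 16 15-mers of a
    30-mer, instead of requiring all 16)."""
    kmers = [window[i:i + k] for i in range(len(window) - k + 1)]
    return sum(km in marked for km in kmers) >= min_marked

# toy example with k=3: "ACGTA" has the 3-mers ACG, CGT, GTA
marked = {"ACG", "GTA"}                                   # CGT deliberately missing
print(window_passes("ACGTA", marked, k=3, min_marked=2))  # True  (2 of 3 marked)
print(window_passes("ACGTA", marked, k=3, min_marked=3))  # False (only 2 of 3)
```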

    30 and 15 are variable, depending on file size, available memory, and memory-cache size.



    -------------------------------------------------

    RDBMS:




    Last edited by gsgs; 01-07-2013, 09:50 AM.



  • xied75
    replied
    What you are doing is a 'hash join' in RDBMS terms. It's like

    Code:
    select * from t1 inner join t2 on t1.column1 = t2.column1
    The DB engine builds a hash out of t2, then uses the rows of t1 to probe this hash.

    So in the end you want a report saying the two sets have xxxxx rows in common, xxxx in set A only, and xxxx in set B only, and to draw a Venn diagram?

    This is EXACT matching. If inexact matching is allowed, i.e. a defined number of mismatches, gap opens, etc., then this turns into the classic alignment problem.
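    In plain Python, the same hash-join idea (hash one set, probe with the other, report the Venn counts) might look like this, with placeholder toy data:

```python
set_a = {"ACGT", "TTTT", "GGCC"}           # build side (the hashed set)
set_b = {"ACGT", "GGCC", "AAAA", "CCCC"}   # probe side

common = {s for s in set_b if s in set_a}  # probe the hash (set lookup)
a_only = set_a - common
b_only = set_b - common

# counts for the Venn diagram: 2 in common, 1 in A only, 2 in B only
print(len(common), len(a_only), len(b_only))
```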



  • gsgs
    replied
    OK, this came up in another thread (well, 2 threads) before, and I thought to myself
    that the methods being used are just inefficient and that there is a better way.

    I build a binary table of the used substrings of length 15 in file 1 (well, length 16 or 17
    if both files are 1 GB?) and then look up each newly read nucleotide of file 2 (and thus
    each 15-substring) in it.
    This is almost as fast as reading the two files from disk into memory.

    But just checking for 15-substring matches is not enough: too short, too many random matches.
    So I check for 30-substrings each of whose 16 15-substrings is marked in the binary table.

    I found that this works very well in practice and was wondering what the method is called
    and whether there are implementations or papers about it, but couldn't find any.
    Instead I found lots of information about blast and other methods, which apparently are much
    slower and more complicated, with a lot of effort put into them.
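    The bit-table scheme described above can be sketched in Python; this is a toy illustration with my own names and a small k, not the poster's actual code (the post uses k=15 and windows of 30):

```python
# Mark every k-mer of sequence 1 in a bit table of 4^k bits, then count
# windows of length 2k in sequence 2 whose k+1 overlapping k-mers are
# all marked.
ENC = {"A": 0, "C": 1, "G": 2, "T": 3}

def mark_kmers(seq, k):
    """Build the bit table: one bit per possible k-mer, ~4^k bits total."""
    table = bytearray(4 ** k // 8)
    mask = 4 ** k - 1
    code = 0
    for i, base in enumerate(seq):
        code = ((code << 2) | ENC[base]) & mask  # rolling 2-bit encoding
        if i >= k - 1:                           # a full k-mer is in `code`
            table[code >> 3] |= 1 << (code & 7)
    return table

def count_matching_windows(seq, table, k):
    """Count length-2k windows whose k+1 overlapping k-mers are all marked."""
    mask = 4 ** k - 1
    hits = 0
    for start in range(len(seq) - 2 * k + 1):
        window = seq[start:start + 2 * k]
        code = 0
        marked = True
        for i, base in enumerate(window):
            code = ((code << 2) | ENC[base]) & mask
            if i >= k - 1 and not table[code >> 3] & (1 << (code & 7)):
                marked = False
                break
        hits += marked
    return hits

# tiny demo with k=2, i.e. windows of length 4:
table = mark_kmers("ACGTACGT", k=2)
print(count_matching_windows("ACGT", table, k=2))  # 1: AC, CG, GT all marked
print(count_matching_windows("AACG", table, k=2))  # 0: AA is not marked
```

    The lookup per nucleotide is O(1), which is why a scan of file 2 runs at close to I/O speed, as the post claims.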



  • xied75
    replied
    Full-string exact matching is easy; how do you want to compare substrings, then?



  • gsgs
    replied
    yes, 1000*1000 bp = 1 MB as a fasta file; sorry, I miscalculated.
    So let's say 1,000,000 sequences of 1000 bp.

    string matching, or whatever is suitable for finding genetic relatives

    yes, I meant substring instead of subsequence

    Wouldn't Smith-Waterman be too slow?



  • xied75
    replied
    1000 sequences of 1000 bp is 1 MB?

    When you say match, do you mean string match?
    When you say subsequence match, do you mean substring?
    Or do you mean the best aligned pair? (Smith-Waterman)



  • gsgs
    started a topic comparing large sets of sequences

    comparing large sets of sequences


    suppose you have 2 large sequences, or sets of sequences, that you want to compare
    for matching entries.
    E.g. you sequenced some ancient bone and want to check it for bacterial contamination.

    For simplicity, assume you have 2 sets of 1000 nucleotide sequences of length 1000,
    1 GB per set, that you want to compare against each other to find the best pairs of
    matching sequences or subsequences.
    Sounds like a standard problem, doesn't it?

    How is it done? What is the best, fastest method?
