Subsampling using 'head -n #'?

  • gringer
    replied
I do this often (mostly to make sure that the output data format is correct) and often find that it's a really bad representation of the data due to strange biases, low mapping quality, and odd mappings with unequal distribution, to name a few issues. SOLiD and Illumina output files have a pile of rubbish at the start of the file from bad sequencing reads (the edges of chips / cells seem to be more prone to error), which heavily influences the results gleaned from the first few reads.

    If you've already been doing this, I'm somewhat surprised you haven't run into these problems yourself.

    If you can afford the time and want something more representative while still keeping to quick command-line stuff, use 'shuf -n' instead of 'head -n'. You could also try dumping the first million or so reads (or more), then taking the next million or so (but there'll still be some bias there as well).
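    One caveat on the 'shuf -n' suggestion: a FASTQ record spans four lines, so shuffling raw lines would scramble records. A minimal record-aware sketch (assuming GNU coreutils; the file names and demo data below are made up):

    Code:
    ```shell
    # Demo input: 1000 tiny synthetic reads standing in for a real file.fastq.gz.
    for i in $(seq 1000); do printf '@read%d\nACGT\n+\nIIII\n' "$i"; done \
      | gzip > file.fastq.gz

    # Linearize each 4-line record onto one tab-separated line, draw a
    # random sample of whole records with shuf, then restore the line breaks.
    zcat file.fastq.gz \
      | paste - - - - \
      | shuf -n 250 \
      | tr '\t' '\n' > subsample.fastq
    ```
    'paste - - - -' consumes four input lines per output line, so each unit that shuf sees is one complete record.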
    Last edited by gringer; 12-28-2011, 11:18 AM.



  • jbrwn
    replied
    It's definitely not very rigorous. HTSeq has a fastq reader in it which could help you subsample randomly.
    htseq tour
    previous discussion on subsampling paired-end data:
    http://seqanswers.com/forums/showthread.php?t=12070
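    For paired-end data, the linearize-and-shuffle trick can keep mates in sync by pasting the two streams side by side before sampling. A sketch assuming GNU coreutils; the file names and demo data are hypothetical:

    Code:
    ```shell
    # Demo input: 100 synthetic read pairs.
    for i in $(seq 100); do printf '@pair%d\nAAAA\n+\nIIII\n' "$i"; done | gzip > r1.fastq.gz
    for i in $(seq 100); do printf '@pair%d\nTTTT\n+\nIIII\n' "$i"; done | gzip > r2.fastq.gz

    # Linearize each file (4 lines -> 1), glue mate records together,
    # sample whole pairs, then split back into two FASTQ files.
    zcat r1.fastq.gz | paste - - - - > r1.lin
    zcat r2.fastq.gz | paste - - - - > r2.lin
    paste r1.lin r2.lin \
      | shuf -n 25 \
      | awk -F'\t' '{ print $1"\n"$2"\n"$3"\n"$4 > "sub_r1.fastq";
                      print $5"\n"$6"\n"$7"\n"$8 > "sub_r2.fastq" }'
    ```
    Because each combined line holds both mates, sampled pairs come out in the same order in both output files.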



  • kga1978
    started a topic Subsampling using 'head -n #'?


    Hi all,

    I often subsample my fastq files using the unix 'head' command, rather than pulling reads from random positions in the file. My setup is as follows:

    Casava output
    Code:
    file1.fastq.gz
    file2.fastq.gz
    .
    .
    file#.fastq.gz
    I concatenate these files using the following:

    Code:
    gunzip -c *.gz | gzip -9 > file.fastq.gz
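    As an aside, gzip streams concatenate directly, so the decompress/recompress round-trip can be skipped entirely. A sketch with throwaway demo files (the real inputs would be the Casava files above):

    Code:
    ```shell
    # Two tiny gzipped FASTQ fragments as demo input.
    printf '@a\nACGT\n+\nIIII\n' | gzip > file1.fastq.gz
    printf '@b\nTTTT\n+\nIIII\n' | gzip > file2.fastq.gz

    # Concatenating the compressed files yields a valid multi-member
    # gzip stream; no gunzip | gzip -9 pass is needed.
    cat file1.fastq.gz file2.fastq.gz > file.fastq.gz
    zcat file.fastq.gz > combined.fastq
    ```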
    In this way, the first part of this file will contain the reads from the original 'file1.fastq.gz'. I then subsample 25,000 reads (100,000 lines, at four lines per read) from this file using the following command:

    Code:
    zcat file.fastq.gz | head -n 100000 | <downstream analysis, e.g. blastn>
    My worry is that by doing that, I will get some sort of bias in my analysis as I am only taking the 'head' of the first part of all my reads. The question is - is this a valid concern? I.e. are the reads in the first part of, say, 'file1.fastq.gz' somehow different than, say, the middle part of 'file4.fastq.gz'?

    Thanks very much in advance