  • Subsampling using 'head -n #'?

    Hi all,

    I often subsample my fastq files using the unix 'head' command, rather than sampling reads from random positions in my fastq file. My setup is as follows:

    Casava output
    Code:
    file1.fastq.gz
    file2.fastq.gz
    .
    .
    file#.fastq.gz
    I concatenate these files using the following:

    Code:
    gunzip -c *.gz | gzip -9 > file.fastq.gz
    In this way, the first part of this file will contain the reads from the original 'file1.fastq.gz'. I then subsample 25,000 reads from this file using the following command:

    Code:
    head -n 100000 file.fastq.gz | <downstream analysis, e.g. blastn>
    My worry is that by doing that, I will get some sort of bias in my analysis as I am only taking the 'head' of the first part of all my reads. The question is - is this a valid concern? I.e. are the reads in the first part of, say, 'file1.fastq.gz' somehow different from, say, the middle part of 'file4.fastq.gz'?

    Thanks very much in advance

  • #2
    It's definitely not very rigorous. HTSeq has a FASTQ reader in it which could help you subsample randomly.
    HTSeq tour
    Previous discussion on subsampling paired-end data:
    http://seqanswers.com/forums/showthread.php?t=12070
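    As a rough, untested sketch of subsampling with HTSeq's fastq reader (assuming HTSeq is installed; the file name, sample size and the reservoir-sampling approach below are illustrative, not from this thread):

    Code:
    #!/usr/bin/env python
    #untested sketch: reservoir-sample k reads using HTSeq's FastqReader
    #the input file name and k are placeholder command-line arguments
    import random
    import sys
    import HTSeq

    infile = sys.argv[1]
    k = int(sys.argv[2])

    reservoir = []
    for i, read in enumerate(HTSeq.FastqReader(infile)):
      if i < k:
        #fill the reservoir with the first k reads
        reservoir.append(read)
      else:
        #afterwards, replace a stored read with probability k/(i+1)
        j = random.randint(0, i)
        if j < k:
          reservoir[j] = read

    for read in reservoir:
      read.write_to_fastq_file(sys.stdout)
    This keeps only k reads in memory and makes a single pass over the file, at the cost of losing the original read order.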

    • #3
      I do this often (mostly to make sure that the output data format is correct) and often find that it's a really bad representation of the data, due to strange biases, low mapping quality, and odd mappings with unequal distribution, to name a few issues. SOLiD and Illumina output files have a pile of rubbish at the start of the file from bad sequencing reads (the edges of chips / flow cells seem to be more prone to error), which heavily influences the results gleaned from the first few reads.

      If you've already been doing this, I'm somewhat surprised you haven't discovered this yourself.

      If you can afford the time and want something more representative while keeping within quick command-line stuff, use 'shuf -n' instead of 'head -n'. You could also try dumping the first million or so reads (or more), then taking the next million or so (but there'll still be some bias there as well).
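      For that last suggestion, an untested sketch of the 'skip a chunk, take the next chunk' idea on a decompressed fastq stream (the counts below are placeholders; one million reads is four million fastq lines):

      Code:
      #!/usr/bin/env python
      #untested sketch: skip the first block of a fastq stream, keep the next
      #counts are placeholders (1 million reads = 4 million fastq lines)
      import itertools
      import sys

      skip = 4000000
      take = 4000000
      for line in itertools.islice(sys.stdin, skip, skip + take):
        sys.stdout.write(line)
      Reads would still come from one contiguous stretch of the run, so some positional bias remains, as noted above.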
      Last edited by gringer; 12-28-2011, 11:18 AM.

      • #4
        Thanks for the replies - that's very helpful.

        I have only been running this over the last couple of days (but we have loads of files where I need to do it) and didn't notice anything majorly wrong, other than some weird QC scores for a couple of files (gringer, I guess that is what you have seen as well).

        We have a script to subsample correctly, so I will probably just use that - the main problem being that it needs to read the entire file before being able to output a subsample. In that sense 'head' was a lot faster.

        gringer, as for shuf, I don't think that will work? Reason being that each read spans 2 (fasta) or 4 (fastq) lines? Otherwise, this is exactly what I would be looking for: given a file of n lines, randomly take a whole read starting at line k (so 4 lines, always starting at a line number divisible by 4). But I haven't been able to find a script or command that will do that.
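        One possible approach along those lines, as an untested sketch (not from this thread): index the byte offset of every fourth line on a first pass, then seek straight to a random sample of those offsets. The file name, sample size and output name below are placeholders.

        Code:
        #!/usr/bin/env python
        #untested sketch: record the byte offset of every 4-line fastq record,
        #then seek to a random sample of those offsets and copy the records out
        import random
        import sys

        fname = sys.argv[1]
        k = int(sys.argv[2])

        #first pass: byte offset of every record start (every 4th line)
        offsets = []
        pos = 0
        with open(fname, "rb") as fh:
          for n, line in enumerate(fh):
            if n % 4 == 0:
              offsets.append(pos)
            pos += len(line)

        #second pass: jump straight to k randomly chosen record starts
        with open(fname, "rb") as fh, open("subsample.fastq", "wb") as out:
          for off in sorted(random.sample(offsets, k)):
            fh.seek(off)
            for j in range(4):
              out.write(fh.readline())
        Only the offsets are held in memory, but this needs a seekable (uncompressed) file rather than a pipe.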

        • #5
          Originally posted by kga1978 View Post
          I then subsample 25,000 reads from this file using the following command:

          Code:
          head -n 100000 file.fastq.gz | <downstream analysis, e.g. blastn>
          I assume that is a typo, since you must unzip the gzipped FASTQ before taking the first 100,000 lines.
          Originally posted by kga1978 View Post
          My worry is that by doing that, I will get some sort of bias in my analysis as I am only taking the 'head' of the first part of all my reads. The question is - is this a valid concern?
          Yes: the read order produced by Illumina reflects the slide positions, so the first and last reads will be from the edges and generally of poorer quality (so I've read).

          • #6
            Originally posted by kga1978 View Post
            gringer, as for shuf, I don't think that will work? Reason being that each read spans 2 (fasta) or 4 (fastq) lines?
            You're correct... it won't work because of the serial nature of the files. Yet another case of me not thinking enough before posting.

            The only ways I can think of at the moment are fairly complex piped shell commands, or small programs.

            I guess one "simple" way to do it would be to seek to a random file position, then read forward until you find a start of sequence marker, but that's a lot of work for too little benefit. FWIW, doing a strict random seek on a fastq file is not possible because both @ and + can appear in the quality strings, and you can get multi-line fastq files.

            Edit: I've just thought of another way that would work for a single pass (i.e. it could work for piped input), assuming you knew the approximate number of records in the file. You decide in advance which records will be selected (e.g. generate 1M numbers between 1 and 100M), sort the list, then run through the file and print only the records that appear in the list.
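            An untested sketch of that single-pass idea (the total record count and sample size below are made-up placeholders; it assumes plain 4-line fastq records on stdin):

            Code:
            #!/usr/bin/env python
            #untested sketch: pre-select which records to keep, then stream once
            #total_records and k are placeholders; expects 4-line fastq on stdin
            import random
            import sys

            total_records = 100000000
            k = 1000000
            #under Python 2, use xrange() here to avoid building a huge list
            wanted = sorted(random.sample(range(total_records), k))

            rec_no = 0
            next_idx = 0
            record = []
            for line in sys.stdin:
              record.append(line)
              if len(record) == 4:
                if next_idx < k and rec_no == wanted[next_idx]:
                  sys.stdout.writelines(record)
                  next_idx += 1
                rec_no += 1
                record = []
            If the guessed total is too low, records beyond it can never be picked; if it is too high, you get slightly fewer than k records out.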
            Last edited by gringer; 12-28-2011, 04:39 PM.

            • #7
              I assume that is a typo
              Sorry, yes, that was a typo - there's a gunzip in front of that.

              So, a talented student wrote up a quick script in Python for me and it works really well. It appears to be about as fast as writing to disk, so overall it's what I was looking for. It takes the following arguments:

              Code:
              python subsampler.py <filename> <number of sequences to extract>
              The script is as follows:

              Code:
              #!/usr/bin/env python
              import random
              import sys
              
              #name of the input file (fasta or fastq)
              #assumes input file is standard fasta/fastq format
              fileName = sys.argv[1]
              #number of sequences to subsample
              numSeq = int(sys.argv[2])
              increment = 0
              
              #if it's a fasta file
              if (fileName.find(".fasta") != -1):
                increment = 2
              #else if it's a fastq file
              elif (fileName.find(".fastq") != -1):
                increment = 4
              #quit if neither
              else:
                sys.stdout.write("not a fasta/fastq file\n")
                sys.exit()
              
              FILE = open(fileName, 'r')
              reads = list()
              index = 0
              #put the entire file in a list
              for line in FILE:
                if(index % increment == 0):
                  reads.append(line)
                else:
                  reads[index // increment] += line
                index += 1
              FILE.close()
              
              #figure out total number of reads, error if applicable
              totalReads = index // increment
              if(totalReads < numSeq):
                sys.stdout.write("You only have "+str(totalReads)+" reads!\n")
                sys.exit()
              
              #shuffle the reads!
              random.shuffle(reads)
              #output the reads to stdout
              for i in range(0, numSeq):
                sys.stdout.write(reads[i])

              • #8
                Originally posted by kga1978 View Post
                So, a talented student wrote up a quick script in Python for me and it works really well. It appears to be about as fast as writing to disk, so overall it's what I was looking for.
                This code is fine, but bear in mind that while this process will work on piped input (possibly with minor modifications to file opening), it requires your entire input file to be stored in memory (in the 'reads' list). If you don't care about piped input and can afford a second sweep through the file, you can reduce the memory footprint substantially by only using the first sweep to count records, then the second sweep to print out the randomly selected sequences.

                Edit: Just noticed that it won't work with multi-line FASTA or FASTQ files, but for most NGS things you don't need to worry about that.

                • #9
                  kga1978, thanks for posting the script. I was just working on a Perl script to do the same, but being a beginner at scripting, it was going to take me an annoying amount of time. So thanks, and now I can do something more productive today!!!
                  --------------
                  Ethan

                  • #10
                    Hey gringer,

                    Thanks for your comments - very helpful and I'll make sure to pass them on. For now though, I don't really mind holding everything in memory - should make it faster and memory isn't a problem (for now....).

                    And actually, it won't work on stdin at the moment, as it requires the .fasta or .fastq extension to work (this could be easily fixed by specifying the file type, though...).

                    • #11
                      Just to follow up on the first part of my question, I now ran a couple of subsample experiments and compared them to the entire dataset as well as to 'head -n' and 'tail -n'. As noted by other posters, there are definitely biases when doing 'head' and 'tail'.

                      I subsampled 1 million reads from a total of 50 million and ran FastQC (to calculate avg. Q-score) and aligned the reads to hg19 using BWA. This is what I got:

                      • #12
                        Also, following gringer's suggestion, here's a less memory-hungry version of the downsampling script:

                        Code:
                        #!/usr/bin/env python
                        import random
                        import sys
                        
                        random.seed()
                        #name of the input file (fasta or fastq)
                        #assumes input file is standard fasta/fastq format
                        fileName = sys.argv[1]
                        #number of sequences to subsample
                        numSeq = int(sys.argv[2])
                        increment = 0
                        
                        #if it's a fasta file
                        if (fileName.find(".fasta") != -1):
                          increment = 2
                        #else if it's a fastq file
                        elif (fileName.find(".fastq") != -1):
                          increment = 4
                        #quit if neither
                        else:
                          sys.stdout.write("not a fasta/fastq file\n")
                          sys.exit()
                        
                        FILE = open(fileName, 'r')
                        totalReads = list()
                        index = 0
                        for line in FILE:
                          if(index % increment == 0):
                            totalReads.append(index // increment)
                          index += 1
                        FILE.close()
                        if(len(totalReads) < numSeq):
                          sys.stdout.write("You only have "+str(len(totalReads))+" reads!\n")
                          sys.exit()
                        
                        ttl = len(totalReads)
                        random.shuffle(totalReads)
                        totalReads = totalReads[0: numSeq]
                        totalReads.sort()
                        
                        FILE = open(fileName, 'r')
                        listIndex = 0
                        
                        for i in range(0, ttl):
                          curRead = ""
                          for j in range(0, increment):
                            curRead += FILE.readline()
                          if (i == totalReads[listIndex]):
                            sys.stdout.write(curRead)
                            listIndex += 1
                            if(listIndex == numSeq):
                              break
                        FILE.close()

                        • #13
                          Nobody asked for it, but I wanted it so maybe somebody else can use it, too.

                          Here's a rewrite of the Python script that can take two paired files as input and maintain pairing in the subsets. It can also still take just a single-end file as input. It's not pretty, but it does the job.

                          Changes: you now have to enter the number of reads desired first, then the input file name(s). Instead of going to stdout, the reads are written to out_1.fastq (or .fasta) and out_2.fastq (or .fasta). It would be relatively simple to put in an output prefix parameter, but this was good enough for my purposes.

                          Code:
                          #!/usr/bin/env python
                          import random
                          import sys
                          
                          random.seed()
                          #number of sequences to subsample
                          numSeq = int(sys.argv[1])
                          #name of the input file (fasta or fastq)
                          #assumes input file is standard fasta/fastq format
                          fileName1 = sys.argv[2]
                          fileName2 = sys.argv[3] if len(sys.argv) > 3 else None  #second (paired) file is optional
                          increment = 0
                          
                          #if it's a fasta file
                          if (fileName1.find(".fasta") != -1):
                            increment = 2
                          #else if it's a fastq file
                          elif (fileName1.find(".fastq") != -1):
                            increment = 4
                          #quit if neither
                          else:
                            sys.stdout.write("not a fasta/fastq file\n")
                            sys.exit()
                          
                          FILE1 = open(fileName1, 'r')
                          totalReads1 = list()
                          index = 0
                          for line in FILE1:
                            if(index % increment == 0):
                              totalReads1.append(index // increment)
                            index += 1
                          FILE1.close()
                          if(len(totalReads1) < numSeq):
                            sys.stdout.write("You only have "+str(len(totalReads))+" reads!\n")
                            sys.exit()
                            
                          if (fileName2):
                            FILE2 = open(fileName2, 'r')
                            totalReads2 = list()
                            index = 0
                            for line in FILE2:
                              if (index % increment == 0):
                                totalReads2.append(index // increment)
                              index += 1
                            FILE2.close()
                            if (len(totalReads1) != len(totalReads2)):
                              sys.stdout.write("read counts do not match\n")
                              sys.exit()
                          
                          ttl = len(totalReads1)
                          random.shuffle(totalReads1)
                          totalReads1 = totalReads1[0: numSeq]
                          totalReads1.sort()
                          
                          FILE1 = open(fileName1, 'r')
                          if (fileName2):
                            FILE2 = open(fileName2, 'r')
                          listIndex = 0
                          
                          if (increment == 4):
                            OUTFILE1 = open('out_1.fastq', 'w')
                            if (fileName2):
                              OUTFILE2 = open('out_2.fastq', 'w')
                          else:
                            OUTFILE1 = open('out_1.fasta', 'w')
                            if (fileName2):
                              OUTFILE2 = open('out_2.fasta', 'w')
                          
                          for i in range(0, ttl):
                            curRead1 = ""
                            curRead2 = ""
                            for j in range(0, increment):
                              curRead1 += FILE1.readline()
                              if (fileName2):
                                curRead2 += FILE2.readline()
                            if (i == totalReads1[listIndex]):
                              OUTFILE1.write(curRead1)
                              if (fileName2):
                                OUTFILE2.write(curRead2)
                              listIndex += 1
                              if(listIndex == numSeq):
                                break
                          FILE1.close()
                          if (fileName2):
                            FILE2.close()
                          OUTFILE1.close()
                          if (fileName2):
                            OUTFILE2.close()

                          • #14
                            Originally posted by kga1978 View Post
                            Sorry, yes, that was a typo - there's a gunzip in front of that.

                            [the full subsampler.py script from post #7 was quoted here]
                            Thank you kga1978. I ran this script successfully, yet the output was 1000 lines, not 1000 random sequences. I believe this is because my file is formatted like:
                            >File_Name
                            GACGAACGCTGGCGGCGTGCCTAATACATGCAAGTCGAGCGAGTCTGCCTTGAAGATCGG
                            AGTGCTTGCACTCTGTGAAACAAGATACAGGCTAGCGGCGGACGGGTGAGTAACACGTGG...
                            i.e., the sequence is not one continuous string, but rather a set of lines.
                            Do you know how to modify the code or modify my files to work with subsampler.py?

                            • #15
                              Edit: Just noticed that it won't work with multi-line FASTA or FASTQ files, but for most NGS things you don't need to worry about that.
                              So we have a situation that the script wasn't designed for. If you have fastx-toolkit handy, you can convert multi-line fasta files into single-line fasta files using the fasta-formatter tool, and then run through the script. Changing this script to work in a memory-efficient manner for multi-line fasta and fastq files would increase the complexity a reasonable amount.

                              A rough pseudo-code for this is as follows, with a rough Python sketch after the lists:

                              First pass -- counting records
                              1. Look for magic record-start sequences; if one is found, increment the record counter.
                              2. If it is a fasta file, read through and discard up to the next record start, then return to the first step.
                              3. If it is a fastq file, read up to the next '+' to work out the sequence length.
                              4. Read the next line and discard it.
                              5. Read as many quality values as were in the sequence, then return to the first step.

                              Second pass -- printing sequences
                              1. Generate n random numbers between 1 and the number of records.
                              2. Iterate through the sequences as before, but print the entire sequence record if the current record number matches one in the random list.
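                              An untested Python sketch of the two-pass idea, for multi-line fasta only (multi-line fastq is messier, as noted above, because '@' and '+' can appear in quality strings); the file name and sample size are placeholders:

                              Code:
                              #!/usr/bin/env python
                              #untested sketch: two-pass random subsampling of a multi-line fasta file
                              #pass 1 counts '>' headers; pass 2 prints only the chosen records
                              import random
                              import sys

                              fname = sys.argv[1]
                              k = int(sys.argv[2])

                              #first pass: count records by their '>' header lines
                              with open(fname) as fh:
                                total = sum(1 for line in fh if line.startswith(">"))

                              wanted = set(random.sample(range(total), k))

                              #second pass: print every record whose index was drawn
                              with open(fname) as fh:
                                rec = -1
                                for line in fh:
                                  if line.startswith(">"):
                                    rec += 1
                                  if rec in wanted:
                                    sys.stdout.write(line)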
