
  • lwebs
    replied
    Thank you! You have been a tremendous help!



  • Brian Bushnell
    replied
    Don't cat paired and unpaired reads. For Megahit, you need to use the -r flag, like this:

    Code:
    megahit --12 paired.fq -r singletons.fq
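If the unmerged reads are in separate R1/R2 files rather than interleaved, the equivalent call should look something like this, with the merged reads passed as the single-end input (filenames here are placeholders):

Code:
megahit -1 unmerged_R1.fq -2 unmerged_R2.fq -r merged.fq -o megahit_out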



  • lwebs
    replied
I am also looking for programs/scripts that would allow me to combine both the merged and orphaned PE reads into one file to use for assembly via Megahit. Any suggestions?

    I tried to cat the files together and megahit rejected the file with the output 'number of paired-end files not match!'.



  • lwebs
    replied
Thanks! Found them!



  • Brian Bushnell
    replied
    The output files should be in your working directory, the same directory as the input files. What do you get when you run "ls *.f*"?



  • GenoMax
    replied
The result files should have gone to the directory you ran the command from, unless there was an error (e.g., you don't have write permission to the directory the original data is in).
    Last edited by GenoMax; 05-03-2017, 09:59 AM.



  • lwebs
    replied
Thank you for the advice, Brian. I am trying out bbtools (bbduk and bbmerge).

I just got bbduk to run, but now I can't find the output files on my system... do I have to have existing directories to accept these files?

Below is the command I just ran:

Code:
bbduk.sh in1=1_ATGAGGCCAC_L007_R1_001.fastq in2=1_ATGAGGCCAC_L007_R2_001.fastq out1=1_cleanR1.fq out2=1_cleanR2.fq ref=/data/laura/Extracted_Metagenomes/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 tpe tbo



  • Brian Bushnell
    replied
Originally posted by lwebs:
I am using the illumina-utils program to quality-filter reads before de novo assembly with the iu-filter-quality-minoche script (see here for more info: https://github.com/merenlab/illumina-utils).

So far, approximately 68% of both R1 and R2 pass the QC parameters while 32% fail (94% of failures due to R2).

Here are my questions: Is this error rate and magnitude for read 2 normal?
That's extremely high. Either you have a failed sequencing run, or your threshold is much too strict. It would be useful to post a quality-score boxplot, though. Anyway, quality-trimming is generally better than filtering, as it both allows you to retain more useful data and remove more bad data.

    Consulting your link:

    C33: less than 2/3 of bases were Q30 or higher in the first half of the read following the B-tail trimming
That sounds like too aggressive a threshold for an optimal metagenome assembly; it will result in low genome recovery and, likely, higher fragmentation (though I encourage you to verify this yourself). I'd suggest something more like Q10 trimming of the right end (which you can do with the BBDuk flags qtrim=r trimq=10), but the exact value depends on the dataset. Also, since adapter-trimming is universally beneficial while quality-trimming is only conditionally beneficial, I encourage you to adapter-trim the data before doing anything else.
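For example, a single BBDuk pass along these lines should handle both adapter-trimming and Q10 right-end trimming (filenames are placeholders; as far as I know, BBDuk applies kmer-trimming before quality-trimming within one run):

Code:
bbduk.sh in1=raw_R1.fastq in2=raw_R2.fastq out1=trimmed_R1.fq out2=trimmed_R2.fq ref=adapters.fa ktrim=r k=23 mink=11 hdist=1 tpe tbo qtrim=r trimq=10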

    Should I quality filter the reads prior to merging some of the reads (if only about 20% can be merged)?
    I recommend trimming rather than filtering, but I don't recommend either prior to merging. BBMerge, incidentally, can do iterative quality-trimming only for reads that fail to merge without trimming, which improves the merge rate. Blanket quality-trimming all reads prior to merging can increase false-positive merges and reduce the merge rate due to fewer overlapping pairs.
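If I recall the flags correctly, that behavior is enabled with qtrim2 (quality-trim only when the initial merge fails) plus a comma-delimited trimq list for the iterative retries; filenames here are placeholders:

Code:
bbmerge.sh in1=trimmed_R1.fq in2=trimmed_R2.fq out=merged.fq outu1=unmerged_R1.fq outu2=unmerged_R2.fq qtrim2=r trimq=10,15,20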

    Also, BBMerge can merge non-overlapping reads, if you have high enough coverage; this is useful in this kind of scenario where only 20% of the reads overlap due to a large average insert size.
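Merging non-overlapping pairs uses BBMerge's kmer-based read extension, which needs considerably more memory than plain overlap merging; a starting point would be something like this (filenames are placeholders, and k=62 / extend2=50 are illustrative rather than tuned values; rem restricts merging to pairs whose extensions actually overlap, and ecct error-corrects reads with Tadpole first):

Code:
bbmerge-auto.sh in1=trimmed_R1.fq in2=trimmed_R2.fq out=merged.fq outu1=unmerged_R1.fq outu2=unmerged_R2.fq rem k=62 extend2=50 ecct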

    Can I use both merged reads and unmerged R1 and R2 for de novo assembly using Megahit?
You should always use both merged and unmerged reads for assembly. But in my testing, while merging improves metagenomic assemblies from SPAdes and Ray, it does not improve them for Megahit, so I don't recommend it as a preprocessing step for Megahit.



  • Advice on quality-filtering shotgun metagenomic sequences from environmental samples

    Hello all!

I am analyzing Illumina HiSeq 4000-generated paired-end shotgun metagenomic sequences obtained from environmental samples. I am new to shotgun metagenomic data, but I have experience analyzing 16S data.

The reads are 150 nt in length, and in most samples the fragment sizes range from 280 to 700 bp. A few samples have fragment sizes ranging from 80 to 600 bp.

I am using the illumina-utils program to quality-filter reads before de novo assembly with the iu-filter-quality-minoche script (see here for more info: https://github.com/merenlab/illumina-utils).

So far, approximately 68% of both R1 and R2 pass the QC parameters while 32% fail (94% of failures due to R2).

Here are my questions:
Is this error rate and magnitude for read 2 normal?
Should I quality filter the reads prior to merging some of the reads (if only about 20% can be merged)?
Can I use both merged reads and unmerged R1 and R2 for de novo assembly using Megahit?

    Thanks for the help!
    Any guidance would be appreciated!
