
  • aarthi.talla
    replied
    Yes, it seems like it.
    So I just wanted to ask whether appending the shotgun files is a must.

    Have you done the appending and performed the assembly?

    If not, what software would you suggest for a 454 de novo assembly?
    Since you converted to FASTQ, I assume you used an Illumina assembler?

  • nathanhaigh
    replied
    It looks like you don't have enough memory to do this assembly. As mentioned before, try the miramem command to estimate the memory requirement for this assembly; that way you'll know whether you're in the ballpark of what MIRA requires.
    Last edited by nathanhaigh; 08-17-2011, 04:48 PM. Reason: typo

  • aarthi.talla
    replied
    The log file at the end showed this:

    Code:
    Dynamic allocs: 0
    Align allocs: 0
    Out of memory detected, exception message is: std::bad_alloc

    You are running a 32 bit executable. Please note that the maximum
    theoretical memory a 32 bit programm can use (be it in Linux, Windows or
    other) is 4 GiB, in practice less: between 2.7 and 3.3 GiB. This is valid
    even if your machine has hundreds of GiB.
    Should your machine have more that 4 GiB, use a 64 bit OS and a 64 bit
    version of MIRA.

    So I downloaded the 64-bit version of MIRA, and after the command

    Code:
    mira --project=3kb_norton --job=denovo,genome,accurate,454 -SK:mnr=yes:nrr=10 >&3kb_log_assembly.txt

    it said:

    Code:
    tcmalloc: large alloc 2323759104 bytes == 0x1f5b9000 @

    so the number of bytes increased.
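    Given the bad_alloc above, a quick sanity check before re-running is to confirm the executable really is the 64-bit build and how much RAM the machine has. This is a sketch assuming a Linux box; the mira path is only probed if the binary is on PATH:

```shell
# Confirm the MIRA binary is 64-bit and check available RAM (Linux).
mira_bin=$(command -v mira || true)
if [ -n "$mira_bin" ]; then
    file "$mira_bin"    # should report "ELF 64-bit" for the 64-bit build
fi
free -g                 # total and available memory in GiB
```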

  • aarthi.talla
    replied
    Extracting the files from the SFFs:

    Code:
    sff_extract -c -l linker.fasta "insert_size:2500,insert_stdev:500" file1.sff file2.sff -o 3kb_norton

    This worked perfectly and gave me the FASTA, QUAL and XML files.

    Appending the shotgun files:

    Code:
    sff_extract -a shotgun1.sff shotgun2.sff shotgun3.sff -o 3kb_norton

    This also appended the SFFs, and the initial FASTA, QUAL and XML files grew much larger to accommodate the shotgun sequences.

    Assembly:

    Code:
    mira --project=3kb_norton --job=denovo,genome,accurate,454 -SK:mnr=yes:nrr=10 >&3kb_log_assembly.txt

    This showed no error at first; it started running, and after a while it printed:

    Code:
    tcmalloc: large alloc 1482399744 bytes == 0x867e000 @

    and a while later it said:

    Code:
    Aborted

  • nathanhaigh
    replied
    Originally posted by aarthi.talla
    Thank you

    Do we have to append the shotgun files to the extracted paired-end FASTA files, or can we assemble the files extracted by sff_extract directly?

    Because when I appended the shotgun files to the extracted paired-end FASTA files and performed the assembly, it showed a memory allocation problem. My Linux machine has 7 GB of memory; isn't that enough? How much memory does MIRA require to perform the assembly?

    May I know how you performed your assembly? Did you append the shotgun files, or just assemble the extracted paired-end FASTA files?

    Thanks
    I'm not sure exactly what you're trying to do. It would be helpful if you posted the MIRA commands you have tried and the exact errors returned; that way there are no misinterpretations in communicating your questions.

    MIRA is an Overlap/Layout/Consensus (OLC) assembler, and OLC assemblers inherently require lots of memory for all but the smallest genomes. Try using the miramem command to estimate what the memory requirement is likely to be.

  • aarthi.talla
    replied
    Have you used MIRA to perform the assembly? If not, which assembler would you suggest? Since you converted the SFFs to FASTQ, I assume you used an Illumina de novo assembler?

  • aarthi.talla
    replied
    Thank you

    Do we have to append the shotgun files to the extracted paired-end FASTA files, or can we assemble the files extracted by sff_extract directly?

    Because when I appended the shotgun files to the extracted paired-end FASTA files and performed the assembly, it showed a memory allocation problem. My Linux machine has 7 GB of memory; isn't that enough? How much memory does MIRA require to perform the assembly?

    May I know how you performed your assembly? Did you append the shotgun files, or just assemble the extracted paired-end FASTA files?

    Thanks

  • nathanhaigh
    replied
    Originally posted by aarthi.talla
    Thank you very much! That was really helpful.

    I am sorry to bother you with all the questions.
    Can I ask you one last question?
    Yep, no worries!

    Originally posted by aarthi.talla
    For the scaffolding with BAMBUS, is it necessary that we provide the mates file?
    I have no experience with BAMBUS so can't really comment. However, I can point you to the online manual.

    Originally posted by aarthi.talla
    If yes, since we cannot read the SFF file, do all the SFFs follow the format that I mentioned, (.*)\.f (.*)\.r (with an '.f' and an '.r' appended)? And can I just blindly assume this?

    May I know if you have provided the mates and the conf file?

    Thanks!
    I think there is some confusion about the SFF file. The Standard Flowgram Format (SFF) is a binary file containing the raw basecall information and quality values for reads. Generally speaking, you have to extract the sequence data and associated quality values from the SFF file into a plain-text format such as a FASTQ file or FASTA+QUAL files; the latter formats are more of a standard. In doing so, you'll also want to split each read into a pair wherever you find the linker sequence. The tool sff_extract does all this for you and generates individual sequences for the paired ends, appending .f and .r to the end of each sequence name so that other software can easily identify which sequences are paired.

    e.g. you would have a simplified workflow something like this:
    Code:
    file.sff ----> sff_extract ----> file.fastq ----> chosen_assembly_tool ----> assembly_output
    However, as I said, I don't know about BAMBUS specifically.
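    As a toy illustration of that .f/.r naming convention (the read names below are made up; only the suffixes matter), the reads that belong to a pair can be picked out with a simple pattern:

```shell
# Made-up read names showing sff_extract's .f/.r pair-naming convention;
# grep counts the two reads that belong to a pair.
printf '>GXY001.f\n>GXY001.r\n>GXY002\n' |
    grep -c '\.[fr]$'    # prints 2
```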

    Here's some more resources you may find useful:

  • aarthi.talla
    replied
    Thank you very much! That was really helpful.

    I am sorry to bother you with all the questions.
    Can I ask you one last question?

    For the scaffolding with BAMBUS, is it necessary that we provide the mates file?
    If yes, since we cannot read the SFF file, do all the SFFs follow the format that I mentioned, (.*)\.f (.*)\.r (with an '.f' and an '.r' appended)? And can I just blindly assume this?

    May I know if you have provided the mates and the conf file?

    Thanks!

  • nathanhaigh
    replied
    Originally posted by aarthi.talla
    Also, for the mates file I am required to provide the minimum insert size (the mean insert size minus the stddev) and the maximum insert size (the mean insert size plus the stddev). I hope I got these right?

    So, for example, for the mean insert size of the 3kb run, would it just be 3000, or would it be the average of the numbers 2247.3 and 2254.9 from the 2 SFF files that I mentioned earlier?
    Does the same apply to the standard deviations; do I again take the average, (561.8+563.7)/2?

    Have you set up a mates file yet for scaffolding? If yes, may I know how you set it up with respect to the naming convention?
    The size SDs are not actually calculated but are simply 0.25 * the average insert size. Sorry, I can't be much help with this. However, you may find the following links useful:
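    Taking that 0.25 rule at face value, the default SDs for the three libraries in this thread would work out as follows (a sketch of the arithmetic only):

```shell
# SD ~= 0.25 * average insert size, for the 3kb, 8kb and 20kb libraries.
for ins in 3000 8000 20000; do
    awk -v i="$ins" 'BEGIN { printf "insert=%d sd=%d\n", i, i * 0.25 }'
done
# prints:
# insert=3000 sd=750
# insert=8000 sd=2000
# insert=20000 sd=5000
```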

  • nathanhaigh
    replied
    Originally posted by aarthi.talla
    And I would like to add something about using shotgun SFF files in the BAMBUS scaffolding step.
    Please let me know if I got this right.

    When the sheared DNA fragments are circularized with an adaptor/linker, they are fragmented again. Some of these fragments will have the adaptor flanked by reads of approx. 150 bp on each side, and there will obviously be other fragments with no adaptor in between. So are these fragments with no adaptor the shotgun sequences?
    And is that why you provide the shotgun SFF files to BAMBUS, so that it does not miss out on that data?
    Did I get it correct?
    Almost, but there is some misunderstanding. There are different library preps for creating shotgun and paired-end libraries. Have a look at the 454 documentation on the creation of paired-end libraries and you'll find there is a step to enrich for biotinylated DNA fragments containing linkers using streptavidin beads. However, you will still get DNA fragments that contain no linker. A read that doesn't contain your linker is not part of a pair, but it can still be used as a single-end read in an assembly. Some assembly software may handle these reads in the same FASTQ file as the pairs, but others may need you to pull them out into a separate FASTQ file; look at the documentation for your chosen assembler.

    Some links you might find useful:

  • nathanhaigh
    replied
    Originally posted by aarthi.talla
    But they had done an initial assembly with Newbler for us, and in the Newbler metrics, in the paired read status section, a 'pairDistanceAvg' is given. So is that the insert size?
    Once Newbler does the assembly and generates contigs, it calculates the average distance between reads in a pair to derive these statistics. NOTE: this can only be done for pairs that map to the same contig, so it is an estimate of the actual distance separating read pairs in that library. It should be similar to the size of the library that was being prepared.

    Originally posted by aarthi.talla
    For the 3kb library, for SFF file 1 pairDistanceAvg = 2247.3 and pairDistDev = 561.8, and for SFF file 2 pairDistanceAvg = 2254.9 and pairDistDev = 563.7.
    Why are these not 3kb? And to enter the stddev, do I sum them or take the average?
    I don't have first-hand experience with Newbler, but I'd think your pairDistanceAvg and pairDistDev values are in the ballpark for a 3kb library; maybe someone else will correct me?

    Originally posted by aarthi.talla
    And since there are 2 SFF files per run, for sff_extract can I give the 2 SFF files as input along with the command you mentioned above, and will it output the fasta, qual and xml files into just one file?
    Experiment with sff_extract and its different command-line arguments, and run sff_extract -h to view the list of options in your version. In the command I provided, the -Q option specifies that you want the sequences and qualities in a single FASTQ file; by default this info goes into 2 files, sequences into a FASTA file and qualities into a QUAL file. I'd probably use your pairDistanceAvg and pairDistDev values for each library as the insert_size and insert_stdev parts of the -i option to sff_extract. This will add an estimate of your library's insert size and SD to the traceinfo XML file, which is used by some assemblers.
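    Putting that together, a hedged sketch of what such a command might look like for the 3kb library. The file names are placeholders, and the insert values are rounded from the Newbler estimates quoted in this thread; check sff_extract -h for the exact option spelling in your version before relying on this:

```shell
# Sketch only: both 3kb-library SFF files in one run, FASTQ output via -Q,
# insert estimate (rounded pairDistanceAvg/pairDistDev) passed via -i.
sff_extract -c -Q -l linker.fasta \
    -i "insert_size:2251,insert_stdev:563" \
    -o 3kb_norton file1.sff file2.sff
```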

    Some links you, or readers of this post, might find useful:

  • aarthi.talla
    replied
    Also, for the mates file I am required to provide the minimum insert size (the mean insert size minus the stddev) and the maximum insert size (the mean insert size plus the stddev). I hope I got these right?

    So, for example, for the mean insert size of the 3kb run, would it just be 3000, or would it be the average of the numbers 2247.3 and 2254.9 from the 2 SFF files that I mentioned earlier?
    Does the same apply to the standard deviations; do I again take the average, (561.8+563.7)/2?

    Have you set up a mates file yet for scaffolding? If yes, may I know how you set it up with respect to the naming convention?
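    If one did average the two files' estimates and take min/max as mean minus/plus SD, the arithmetic would look like this (a sketch of the numbers only, not a statement about the format BAMBUS expects):

```shell
# Average the two Newbler estimates for the 3kb library,
# then min/max insert = mean -/+ SD.
awk 'BEGIN {
    mean = (2247.3 + 2254.9) / 2   # average pairDistanceAvg
    sd   = (561.8  + 563.7)  / 2   # average pairDistDev
    printf "mean=%.1f sd=%.1f min=%.1f max=%.1f\n", mean, sd, mean - sd, mean + sd
}'
```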

  • aarthi.talla
    replied
    And I would like to add something about using shotgun SFF files in the BAMBUS scaffolding step.
    Please let me know if I got this right.

    When the sheared DNA fragments are circularized with an adaptor/linker, they are fragmented again. Some of these fragments will have the adaptor flanked by reads of approx. 150 bp on each side, and there will obviously be other fragments with no adaptor in between. So are these fragments with no adaptor the shotgun sequences?
    And is that why you provide the shotgun SFF files to BAMBUS, so that it does not miss out on that data?
    Did I get it correct?

  • aarthi.talla
    replied
    Thank you so much.

    A company called 'SeqWright' sequenced the data for us and provided 3kb, 8kb and 20kb libraries. So now I understand that the insert sizes are 3kb, 8kb and 20kb respectively.

    But they had done an initial assembly with Newbler for us, and in the Newbler metrics, in the paired read status section, a 'pairDistanceAvg' is given. So is that the insert size?

    They have given 2 SFF files per library, since they say the 'reason you have two files per run is because it's sequenced on the DNA chip with two regions'.

    e.g. for the 3kb library, for SFF file 1 pairDistanceAvg = 2247.3 and pairDistDev = 561.8, and for SFF file 2 pairDistanceAvg = 2254.9 and pairDistDev = 563.7.
    Why are these not 3kb? And to enter the stddev, do I sum them or take the average?

    And since there are 2 SFF files per run, for sff_extract can I give the 2 SFF files as input along with the command you mentioned above, and will it output the fasta, qual and xml files into just one file?
