
  • colindaven
    replied
    Nice job Simon, we like this tool very much. The wiki is great.

  • Alex Renwick
    replied
    I write a lot of shell scripts and Makefiles, and this Bpipe looks like it will make my life much easier.

    I have a problem using the torque queue. The process does not return after successfully executing one statement. For example, if I give it...

    Code:
    echo = {
      exec "echo this"
      exec "echo that"
    }
    Bpipe.run { echo }
    ...it would print "this" and then wait forever.

    This only happens when using Torque. Incidentally, in order to get the queue to run at all, I had to export the QUEUE shell variable, set to the name of the PBS queue.

    Any ideas about how to get it to work?

    Edit:

    I figured out how to get it to work. I configured the queue to keep completed jobs for a minute. It had been removing jobs from the queue immediately after completion, so the job status was never reported as "completed". Now it's fixed.
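    For anyone hitting the same thing: the retention window is controlled (if I remember the attribute name right; check your Torque docs) by the server's keep_completed setting, a value in seconds:

```
# keep finished jobs visible in qstat for 60 seconds, so Bpipe
# can observe the "completed" status before the job disappears
qmgr -c "set server keep_completed = 60"
```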
    Last edited by Alex Renwick; 04-20-2012, 07:58 AM.

  • linusvanpelt
    replied
    Hi Simon,

    Thanks for adding this issue to your development queue. I'll keep an eye on it.

    Tobias

  • simonzmmmmm
    replied
    Hi Tobias,
    Originally posted by linusvanpelt View Post
    Hi Simon,

    I am trying to do some parallelization with bpipe and hope you can help me out on a problem. Like in this example from the Wiki

    Code:
    Bpipe.run {
      chr(1..5) * [ hello ]
    }
    I would like to use the concept more generally, for a parallelization task like this:
    ...

    Knowing that this does not currently work, I was wondering whether it could be implemented or worked around somehow?
    I was thinking exactly the same thought while I was implementing this feature - the ways of splitting things up to parallelize are quite arbitrary (by gene, by exon, by any arbitrary genomic coordinates, by anything at all ...). In the interest of expediency I made the first implementation specific to chromosome just to try out the idea, but I will definitely pursue a more generalized form of it. I've added an enhancement issue to track this so that you can get notified when progress is made on it:



    Thanks for the feedback!

    Simon

  • linusvanpelt
    replied
    Hi Simon,

    I am trying to do some parallelization with bpipe and hope you can help me out on a problem. Like in this example from the Wiki

    Code:
    Bpipe.run {
      chr(1..5) * [ hello ]
    }
    I would like to use the concept more generally, for a parallelization task like this:

    Code:
    @Transform("sam")
    align_stampy = {
            exec """
               python $STAMPY_HOME/stampy.py  
               --bwaoptions="-q10 $REFERENCE" 
               -g $STAMPY_GENOME_INDEX
               -h $STAMPY_HASH_FILE
               -M $input1,$input2
               -o $output 
               --readgroup=ID:$rg_id,LB:$rg_lb,PL:$rg_pl,PU:$rg_pu,SM:$rg_sm
               --processpart=$part
               """
    }
    
    Bpipe.run {
        part("1/3", "2/3", "3/3") * [align_stampy]
    }
    Knowing that this does not currently work, I was wondering whether it could be implemented or worked around somehow?

    Thanks,
    Tobias

  • simonzmmmmm
    replied
    Hi brentp,
    Originally posted by brentp View Post
    Looks pretty useful, could you explain more about:


    where, for example you have:

    Code:
    @Transform("bai")
    index = {
            exec "samtools index $input"
            return input
    }
    how does it know which $input to use? Does each step use the $output
    from the previous?
    This is the default; if you do nothing else, the output from a previous stage becomes the $input variable for the next stage.
    what if a given step needs multiple previous $outputs?
    Bpipe gives you a sort of "query language" to easily get back to any of the previous outputs. You can think of it as querying the tree of outputs in reverse using a very simple syntax (though it is so simple that this is more of a mental model than a reality). So suppose you need the VCF file from a previous stage and a BAM file, and they are not already the default input. You can get at them like this:
    Code:
    exec "somecommand $input.vcf $input.bam"
    Which will expand to:
    Code:
    exec "somecommand most_recent_vcf.vcf most_recent_bam.bam"
    If you want all the BAMs from the most recent pipeline stage that produced a BAM file:
    Code:
    exec "somecommand $inputs.bam"
    (Notice that "input" has become "inputs"). The above will search backward through pipeline stages until it finds a stage that produced one or more BAM files. Then it will expand to:
    Code:
    exec "somecommand file1.bam file2.bam file3.bam ..."
    You can use all the normal bash constructs inside your commands too - so if you want to index every BAM file:
    Code:
    exec "for i in $inputs.bam; do samtools index $i; done"
    (You'd probably want to do this a bit smarter and run them in parallel, but just for the sake of example).
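    For what it's worth, the quick parallel version can stay inside a single exec using plain bash job control. A sketch, with echo standing in for samtools index so it runs anywhere:

```shell
# launch one background job per input, then wait for all of them
for i in file1.bam file2.bam file3.bam; do
    echo "indexing $i" &   # replace with: samtools index "$i"
done
wait
```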

    Cheers,

    Simon

  • simonzmmmmm
    replied
    Hi maubp,
    Originally posted by maubp View Post
    That's why you should use Unix pipes where possible (ideally without compressing the intermediate BAM files, use -u in samtools).

    Does Bpipe support this? Perhaps it could using named pipes?
    Bpipe is file oriented, so it does expect to see a file at the output of each stage. In my usage, a single pipeline "stage" will often be several parts of the process piped together, with the output arriving as a BAM file that acts as a sort of "checkpoint". That lets you restart or rerun parts of the analysis from there. So you're not storing a BAM file for every single part of the process, but having them at several points in between is useful nonetheless. There is an open issue sort of related to this.

    Named pipes are a really interesting idea. I think at the moment they would be problematic because Bpipe expects the process for a pipeline stage to terminate before it will initiate the next stage. But with some tweaks that could be relaxed to allow this mode of operation.
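    The mechanism itself is easy to sketch with stand-in commands (mkfifo is the standard tool; note the writer and reader have to run concurrently, which is exactly the constraint on the current stage model):

```shell
# a named pipe streams data between two "stages" with nothing on disk
mkfifo stream.sam                       # stand-in for an aligner's SAM output
printf 'read1\nread2\n' > stream.sam &  # writer: the upstream stage
wc -l < stream.sam                      # reader: the downstream stage
wait
rm stream.sam
```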

    I'll definitely put more thought into this - thanks for the discussion / ideas!

  • adaptivegenome
    replied
    Originally posted by brentp View Post
    only if you don't care about keeping around intermediate files, in case the pipeline adds a step in there or you change some parameters in a later step and don't want to re-run the entire pipeline.
    Yes I agree. I guess I was wondering if there was a rationale for a more complex plugin framework. Seems like for the most part there is not one.

  • brentp
    replied
    Originally posted by genericforms View Post
    If so, then this would negate the need for plug-ins.
    only if you don't care about keeping around intermediate files, in case the pipeline adds a step in there or you change some parameters in a later step and don't want to re-run the entire pipeline.

  • adaptivegenome
    replied
    Originally posted by maubp View Post
    That's why you should use Unix pipes where possible (ideally without compressing the intermediate BAM files, use -u in samtools). Does Bpipe support this? Perhaps it could using named pipes?
    So, thinking about the process from streaming the output SAM from the mapper all the way to the final step, in which a recalibrated and realigned BAM is ready for mutation calling: is it simply sufficient to pipe all the intermediary steps together?

    If so, then this would negate the need for plug-ins.

  • brentp
    replied
    Looks pretty useful, could you explain more about:


    where, for example you have:

    Code:
    @Transform("bai")
    index = {
            exec "samtools index $input"
            return input
    }
    how does it know which $input to use? Does each step use the $output
    from the previous?
    what if a given step needs multiple previous $outputs?
    I guess it's not clear to me what's going on with the $input and $output names.

  • maubp
    replied
    Originally posted by genericforms View Post
    So much time is wasted on writing a BAM file over and over.
    That's why you should use Unix pipes where possible (ideally without compressing the intermediate BAM files, use -u in samtools). Does Bpipe support this? Perhaps it could using named pipes?
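    Setting the bioinformatics tools aside (samtools' -u just skips BAM compression between steps), the saving being described is ordinary Unix piping. With generic stand-in commands:

```shell
# intermediate-file version: data touches the disk between steps
seq 1 100 > tmp.txt
grep -c '5' tmp.txt   # counts the lines containing a "5"
rm tmp.txt

# piped version: same answer, no intermediate file written
seq 1 100 | grep -c '5'
```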

  • adaptivegenome
    replied
    I think this is a great tool for building pipelines. Thanks for sharing. I think it would be awesome if we could eventually adopt a similar approach in the analysis framework, where, if tools are written as plugins, a BAM file could be opened once, operated on by several plugins, and written once.

    So much time is wasted on writing a BAM file over and over.

  • Bpipe: a new tool for running analysis pipelines

    Hello all,

    I would like to let everyone know about Bpipe, a new tool we have created to help run bioinformatics pipelines.

    Many people will be familiar with tools like Galaxy and Taverna that help you run pipelines, give a graphical view of the pipeline and its inputs and outputs, and offer many other features to make analysis pipelines more robust and manageable. Bpipe is similar in many ways but aimed at users who are command-line oriented. It lets you write your pipelines almost the way you write a shell script, but it automatically adds features such as:
    • Transactional management of tasks - commands that fail get their outputs cleaned up, log files saved, and the pipeline cleanly aborted.
    • Automatic connection of pipeline stages - Bpipe manages the file names for the input and output of each stage in a systematic way so that you don't need to think about it.
    • Easy stopping or restarting - when a job fails it is easy to cleanly restart from the point of failure.
    • Audit trail - Bpipe keeps a journal of exactly which commands executed and what their inputs and outputs were.
    • Modularity - it's easy to make a library of pipeline stages (or commands) that you frequently use and mix and match them in different pipelines.
    • Parallelism - easily run many samples/files at the same time, or split one sample and run analysis on many parts of it in parallel.
    • Integration with cluster resource managers - Bpipe supports PBS/Torque, and more systems can be added easily.
    • Notifications - Bpipe can send you alerts by email or instant message to tell you when your pipeline finishes, or even as each stage completes.
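    To give a flavour of the syntax, here is a minimal two-stage pipeline (a sketch based on the examples in this thread; see the documentation at bpipe.org for the full details):

```groovy
// two trivial stages; "+" joins them so they run in sequence,
// with each stage's output becoming the next stage's $input
hello = {
    exec "echo hello"
}
world = {
    exec "echo world"
}
Bpipe.run { hello + world }
```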

    Bpipe is BSD licensed and available, along with documentation and examples, at http://bpipe.org. We also have a publication accepted in Bioinformatics which may be of interest.

    Bpipe is very young and I hope to make many improvements, so I would love to have feedback from anybody here about it.

    Thanks!

    Simon
