See Decorators for more decorators

Pipeline functions

There are only four functions for Ruffus pipelines:

pipeline_run

pipeline_run ( target_tasks = [], forcedtorun_tasks = [], multiprocess = 1, logger = stderr_logger, gnu_make_maximal_rebuild_mode = True, verbose = 1, runtime_data = None, one_second_per_job = True, touch_files_only = False, exceptions_terminate_immediately = None, log_exceptions = None, history_file = None, checksum_level = None, multithread = 0, verbose_abbreviated_path = None)

Purpose:

Runs all specified pipelined functions if they or any antecedent tasks are incomplete or out-of-date.

Example:

#
#   Run task2 whatever its state, and also task1 and antecedents if they are incomplete
#   Do not log pipeline progress messages to stderr
#
pipeline_run([task1, task2], forcedtorun_tasks = [task2], logger = blackhole_logger)

Parameters:

  • target_tasks

    Pipeline functions and any necessary antecedents (specified implicitly or with @follows) which should be invoked with the appropriate parameters if they are incomplete or out-of-date.

  • forcedtorun_tasks

    Optional. These pipeline functions will be invoked regardless of their state. Any antecedent tasks will also be executed if they are out-of-date or incomplete.

  • multiprocess

    Optional. The number of processes dedicated to running independent tasks, and jobs within each task, in parallel. If multiprocess is set to 1, the pipeline will execute in the main process.

  • multithread

    Optional. The number of threads dedicated to running independent tasks, and jobs within each task, in parallel. Should be used only with drmaa; otherwise the CPython global interpreter lock (GIL) will slow down your pipeline.

  • logger

    For logging messages indicating the progress of the pipeline in terms of tasks and jobs. Defaults to outputting to sys.stderr. Setting logger=blackhole_logger will prevent any logging output.
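For example, a standard library logging.Logger can be passed as logger. This is a minimal sketch under the assumption that any object with the usual debug / info / warning / error methods is accepted; the logger name "my_pipeline" is illustrative, not part of the Ruffus API.

```python
import logging
import sys

# A standard library logger with the usual debug / info / warning /
# error methods; the name "my_pipeline" is purely illustrative.
logger = logging.getLogger("my_pipeline")
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(logging.Formatter("%(asctime)s - %(message)s"))
logger.addHandler(handler)

# Then pass it in place of the default stderr_logger:
# pipeline_run([task1], logger=logger)
```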

  • gnu_make_maximal_rebuild_mode

    Warning

    This is a dangerous option. Use rarely and with caution.

    Optional parameter governing how Ruffus determines which parts of the pipeline are out of date and need to be re-run. If set to False, Ruffus will work backwards from the target_tasks and only execute the pipeline from the first up-to-date tasks that it encounters onwards. For example, given five tasks:

    #
    #   task1 -> task2 -> task3 -> task4 -> task5
    #
    target_tasks = [task5]
    

    If task3() is up-to-date, then only task4() and task5() will be run. This will be the case even if task2() and task1() are incomplete.

    This allows you to remove all intermediate results produced by task1 -> task3.

  • verbose

    Optional parameter indicating the verbosity of the messages sent to logger. Defaults to level 1 if unspecified:

    • level 0 : nothing
    • level 1 : Out-of-date Task names
    • level 2 : All Tasks (including any task function docstrings)
    • level 3 : Out-of-date Jobs in Out-of-date Tasks, no explanation
    • level 4 : Out-of-date Jobs in Out-of-date Tasks, with explanations and warnings
    • level 5 : All Jobs in Out-of-date Tasks (up-to-date tasks are listed by name only)
    • level 6 : All Jobs in All Tasks, whether out of date or not
    • level 10: logs messages useful only for debugging the Ruffus pipeline code

    Verbosity levels >= 10 are intended for debugging Ruffus by the developers, and the details are liable to change from release to release.

  • runtime_data

    Experimental feature for passing data to tasks at run time

  • one_second_per_job

    To work around poor file timestamp resolution on some file systems. Defaults to True if checksum_level is 0, forcing each task to take a minimum of 1 second to complete. If your file system has coarse-grained time stamps, you can turn on this delay by setting one_second_per_job to True.
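To see why this matters, the following stdlib-only snippet (an illustration of the problem, not Ruffus internals) shows how two files created back-to-back can be indistinguishable by modification time on a file system with coarse timestamps:

```python
import os
import tempfile

# Two files created back-to-back. On a file system with one-second
# timestamp resolution, their mtimes can be identical, so "output newer
# than input" tests become unreliable; one_second_per_job=True works
# around this by making each job take at least one second.
with tempfile.TemporaryDirectory() as d:
    older = os.path.join(d, "input.txt")
    newer = os.path.join(d, "output.txt")
    open(older, "w").close()
    open(newer, "w").close()
    delta = os.stat(newer).st_mtime - os.stat(older).st_mtime

print(delta >= 0.0)
```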

  • touch_files_only

    Create or update output files only to simulate the running of the pipeline. Does not invoke real task functions to run jobs. This is most useful to force a pipeline to acknowledge that a particular part is now up-to-date.

    This will not work properly if the identities of some files are not known beforehand, and depend on run time information. In other words, it is not recommended if @split or custom parameter generators are being used.

  • exceptions_terminate_immediately

    Exceptions cause immediate termination of the pipeline.

  • log_exceptions

    Print exceptions to the logger as soon as they occur.

  • history_file

    The database file which stores checksums and file timestamps for input/output files. Defaults to .ruffus_history.sqlite if unspecified

  • checksum_level

    Several options for checking up-to-dateness are available; the default is level 1:

    • level 0 : Use only file timestamps
    • level 1 : above, plus timestamp of successful job completion
    • level 2 : above, plus a checksum of the pipeline function body
    • level 3 : above, plus a checksum of the pipeline function default arguments and the additional arguments passed in by task decorators
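The idea behind levels 2 and 3 can be sketched with the standard library. This is only an illustration of the concept, not Ruffus's actual checksum code; the fingerprint function below is a hypothetical helper.

```python
import hashlib

# A sketch of the idea behind checksum_level 2 (not Ruffus's algorithm):
# fingerprint a task function's compiled body so that editing the
# function marks its downstream results as out of date.
def fingerprint(fn):
    code = fn.__code__
    payload = (code.co_code
               + repr(code.co_consts).encode("utf8")
               + repr(code.co_names).encode("utf8"))
    return hashlib.md5(payload).hexdigest()

def task(input_file, output_file):
    return input_file.upper()

before = fingerprint(task)

def task(input_file, output_file):      # an "edited" version of the same task
    return input_file.lower()

after = fingerprint(task)
print(before != after)
```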

  • verbose_abbreviated_path

    Whether input and output paths are abbreviated. Defaults to 2 if unspecified

    • level 0: The full (expanded, abspath) input or output path
    • level > 1: The number of subdirectories to include. Abbreviated paths are prefixed with [,,,]/
    • level < 0: Input / output parameters are truncated to MMM letters, where verbose_abbreviated_path == -MMM. Subdirectories are removed first, to see whether this allows the paths to fit within the specified limit; otherwise abbreviated paths are prefixed by <???>
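One plausible reading of these rules, as a toy helper. This is an assumption for illustration only; Ruffus's actual abbreviation logic may differ in corner cases.

```python
import os

# Toy illustration of the three abbreviation regimes described above;
# not Ruffus's implementation.
def abbreviate(path, level):
    if level == 0:                       # full, expanded path
        return os.path.abspath(path)
    if level > 0:                        # keep `level` trailing subdirectories
        parts = path.split("/")
        keep = parts[-(level + 1):]      # subdirectories plus the file name
        if len(keep) == len(parts):
            return path
        return "[,,,]/" + "/".join(keep)
    mmm = -level                         # negative: truncate to MMM letters
    if len(path) <= mmm:
        return path
    return "<???>" + path[-mmm:]

print(abbreviate("/a/b/c/d/file.txt", 1))   # -> [,,,]/d/file.txt
```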

pipeline_printout

pipeline_printout (output_stream = sys.stdout, target_tasks = [], forcedtorun_tasks = [], verbose = 1, indent = 4, gnu_make_maximal_rebuild_mode = True, wrap_width = 100, runtime_data = None, checksum_level = None, history_file = None, verbose_abbreviated_path = None)

Purpose:

Prints out all the pipelined functions which will be invoked given the specified target_tasks, without actually running the pipeline. Because this is a simulation, some of the job parameters may be incorrect. For example, the results of a @split operation are not predetermined and will only be known after the pipelined function has split up the original data. The parameters of all downstream pipelined functions will change depending on this initial operation.
Example:

#
#   Simulate running task2 whatever its state, and also task1 and antecedents
#     if they are incomplete
#   Print out results to STDOUT
#
pipeline_printout(sys.stdout, [task1, task2], forcedtorun_tasks = [task2], verbose = 1)

Parameters:

  • output_stream

    Where to printout the results of simulating the running of the pipeline.

  • target_tasks

    As in pipeline_run: Pipeline functions and any necessary antecedents (specified implicitly or with @follows) which should be invoked with the appropriate parameters if they are incomplete or out-of-date.

  • forcedtorun_tasks

    As in pipeline_run: these pipeline functions will be invoked regardless of their state. Any antecedent tasks will also be executed if they are out-of-date or incomplete.

  • verbose

    Optional parameter indicating the verbosity of the messages sent to output_stream. Defaults to level 4 if unspecified:

    • level 0 : nothing
    • level 1 : Out-of-date Task names
    • level 2 : All Tasks (including any task function docstrings)
    • level 3 : Out-of-date Jobs in Out-of-date Tasks, no explanation
    • level 4 : Out-of-date Jobs in Out-of-date Tasks, with explanations and warnings
    • level 5 : All Jobs in Out-of-date Tasks (up-to-date tasks are listed by name only)
    • level 6 : All Jobs in All Tasks, whether out of date or not
    • level 10: logs messages useful only for debugging the Ruffus pipeline code

    Verbosity levels >= 10 are intended for debugging Ruffus by the developers, and the details are liable to change from release to release.

  • indent

    Optional parameter governing the indentation when printing out the component job parameters of each task function.

  • gnu_make_maximal_rebuild_mode

    Warning

    This is a dangerous option. Use rarely and with caution.

    See explanation in pipeline_run.

  • wrap_width

    Optional parameter governing the length of each line before it starts wrapping around.

  • runtime_data

    Experimental feature for passing data to tasks at run time

  • history_file

    The database file which stores checksums and file timestamps for input/output files. Defaults to .ruffus_history.sqlite if unspecified

  • checksum_level

    Several options for checking up-to-dateness are available; the default is level 1:

    • level 0 : Use only file timestamps
    • level 1 : above, plus timestamp of successful job completion
    • level 2 : above, plus a checksum of the pipeline function body
    • level 3 : above, plus a checksum of the pipeline function default arguments and the additional arguments passed in by task decorators

  • verbose_abbreviated_path

    Whether input and output paths are abbreviated. Defaults to 2 if unspecified

    • level 0: The full (expanded, abspath) input or output path
    • level > 1: The number of subdirectories to include. Abbreviated paths are prefixed with [,,,]/
    • level < 0: Input / output parameters are truncated to MMM letters, where verbose_abbreviated_path == -MMM. Subdirectories are removed first, to see whether this allows the paths to fit within the specified limit; otherwise abbreviated paths are prefixed by <???>

pipeline_printout_graph

pipeline_printout_graph (stream, output_format = None, target_tasks = [], forcedtorun_tasks = [], ignore_upstream_of_target = False, skip_uptodate_tasks = False, gnu_make_maximal_rebuild_mode = True, test_all_task_for_update = True, no_key_legend = False, minimal_key_legend = True, user_colour_scheme = None, pipeline_name = "Pipeline", size = (11,8), dpi = 120, runtime_data = None, checksum_level = None, history_file = None)

Purpose:

Prints out flowchart of all the pipelined functions which will be invoked given specified target_tasks without actually running the pipeline.

See Flowchart colours

Example:

pipeline_printout_graph("flowchart.jpg", "jpg", [task1, task16],
                            forcedtorun_tasks = [task2],
                            no_key_legend = True)

Customising appearance:

The user_colour_scheme parameter can be used to change the flowchart colours. This allows the default Colour Schemes to be set. An example of customising the flowchart appearance is available (see code).

Parameters:

  • stream

    The file or file-like object to which the flowchart should be printed. If a string is provided, it is assumed that this is the name of the output file which will be opened automatically.

  • output_format

    If missing, defaults to the extension of the stream file name (e.g. jpg for a.jpg)

    If the program dot can be found on the execution path, this can be any of the many output formats supported by Graphviz, including, for example, jpg, png, pdf, svg, etc.

    Otherwise, Ruffus will only output without error in the dot format, which is a plain-text graph description language.

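A stdlib-only way to check for dot before choosing a richer output format (a small convenience sketch, not part of the Ruffus API):

```python
import shutil

# If Graphviz's `dot` executable is not on the PATH, fall back to the
# plain-text "dot" graph description format, which always works.
have_dot = shutil.which("dot") is not None
output_format = "svg" if have_dot else "dot"

# pipeline_printout_graph("flowchart." + output_format, output_format, [task2])
print(output_format in ("svg", "dot"))
```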
  • target_tasks

    As in pipeline_run: Pipeline functions and any necessary antecedents (specified implicitly or with @follows) which should be invoked with the appropriate parameters if they are incomplete or out-of-date.

  • forcedtorun_tasks

    As in pipeline_run: these pipeline functions will be invoked regardless of their state. Any antecedent tasks will also be executed if they are out-of-date or incomplete.

  • draw_vertically

    Draw flowchart in vertical orientation

  • ignore_upstream_of_target

    Start drawing flowchart from specified target tasks. Do not draw tasks which are downstream (subsequent) to the targets.

  • skip_uptodate_tasks

    Do not draw up-to-date / completed tasks in the flowchart unless they lie on the execution path of the pipeline.

  • gnu_make_maximal_rebuild_mode

    Warning

    This is a dangerous option. Use rarely and with caution.

    See explanation in pipeline_run.

  • test_all_task_for_update

    Indicates whether all tasks, including intermediate ones, should be checked to see if they are out of date. Normally, Ruffus will stop checking dependent tasks for completion, or whether they are out-of-date, once it has discovered the maximal extent of the pipeline which has to be run. For displaying the flow of the pipeline, stopping early is not very informative.
  • no_key_legend

    Do not include key legend explaining the colour scheme of the flowchart.

  • minimal_key_legend

    Do not include unused task types in key legend.

  • user_colour_scheme

    Dictionary specifying colour scheme for flowchart

    See complete list of Colour Schemes.

    Colours can be names, e.g. "black", or quoted hex, e.g. '"#F6F4F4"' (note the extra quotes).
    Default values will be used unless specified.

    Keys and subkeys:

    • 'colour_scheme_index'

      Index of the default colour scheme, 0-7; defaults to 0 unless specified.

    • 'Final target', 'Explicitly specified task', 'Task to run', 'Down stream', 'Up-to-date Final target', 'Up-to-date task forced to rerun', 'Up-to-date task', 'Vicious cycle'

      Subkeys 'fillcolor', 'fontcolor', 'color', 'dashed' = 0/1 : colours / attributes for each task type.

    • 'Vicious cycle', 'Task to run', 'Up-to-date'

      Subkey 'linecolor' : colours for the arrows between tasks.

    • 'Pipeline'

      Subkey 'fontcolor' : flowchart title colour.

    • 'Key'

      Subkeys 'fontcolor', 'fillcolor' : legend colours.

    Example:

    Use colour scheme index = 1

    pipeline_printout_graph ("flowchart.svg", "svg", [final_task],
                             user_colour_scheme = {
                                                    "colour_scheme_index" :1,
                                                    "Pipeline"      :{"fontcolor" : '"#FF3232"' },
                                                    "Key"           :{"fontcolor" : "Red",
                                                                      "fillcolor" : '"#F6F4F4"' },
                                                    "Task to run"   :{"linecolor" : '"#0044A0"' },
                                                    "Final target"  :{"fillcolor" : '"#EFA03B"',
                                                                      "fontcolor" : "black",
                                                                      "dashed"    : 0           }
                                                   })
    
  • pipeline_name

    Specify title for flowchart

  • size

    Size in inches for flowchart

  • dpi

    Resolution in dots per inch. Ignored for svg output

  • runtime_data

    Experimental feature for passing data to tasks at run time

  • history_file

    The database file which stores checksums and file timestamps for input/output files. Defaults to .ruffus_history.sqlite if unspecified

  • checksum_level

    Several options for checking up-to-dateness are available; the default is level 1:

    • level 0 : Use only file timestamps
    • level 1 : above, plus timestamp of successful job completion
    • level 2 : above, plus a checksum of the pipeline function body
    • level 3 : above, plus a checksum of the pipeline function default arguments and the additional arguments passed in by task decorators

pipeline_get_task_names

pipeline_get_task_names ()

Purpose:

Returns a list of all task names in the pipeline, without running the pipeline or checking whether the tasks are connected correctly.

Example:

Given:

from ruffus import *

@originate([])
def create_data(output_files):
    pass

@transform(create_data, suffix(".txt"), ".task1")
def task1(input_files, output_files):
    pass

@transform(task1, suffix(".task1"), ".task2")
def task2(input_files, output_files):
    pass

Produces a list of three task names:

>>> pipeline_get_task_names ()
['create_data', 'task1', 'task2']