ruffus.proxy_logger
Create proxies for logging for use with multiprocessing.
These can be safely sent (marshalled) across process boundaries.
Example 1
Set up a logger from a configuration file:

```python
from proxy_logger import *

args = {}
args["config_file"] = "/my/config/file"

(logger_proxy, logging_mutex) = make_shared_logger_and_proxy(
    setup_std_shared_logger, "my_logger", args)
```
Example 2
Log to the file "/my/lg.log" in the specified format (time / log name / event type / message).
Delay file creation until the first log message.
Set the logging threshold to logging.DEBUG, so that messages of all severities are logged.
Other alternatives for the logging threshold (args["level"]) include:
- logging.DEBUG
- logging.INFO
- logging.WARNING
- logging.ERROR
- logging.CRITICAL
```python
import logging
from proxy_logger import *

args = {}
args["file_name"] = "/my/lg.log"
args["formatter"] = "%(asctime)s - %(name)s - %(levelname)6s - %(message)s"
args["delay"] = True
args["level"] = logging.DEBUG

(logger_proxy, logging_mutex) = make_shared_logger_and_proxy(
    setup_std_shared_logger, "my_logger", args)
```
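A stdlib-only sketch (using the standard logging module directly, not ruffus) of how the "level" threshold behaves once the logger is configured; the logger name and messages are illustrative:

```python
# Stdlib-only sketch: how the "level" threshold filters messages.
import io
import logging

def capture_at_level(level):
    # Build a throwaway logger whose output we can inspect in memory.
    logger = logging.getLogger("level_demo_%s" % level)
    logger.setLevel(level)
    logger.propagate = False
    stream = io.StringIO()
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter("%(levelname)s - %(message)s"))
    logger.addHandler(handler)

    logger.debug("noise")      # below WARNING: dropped at that threshold
    logger.warning("trouble")  # at or above either threshold: kept
    return stream.getvalue()

# At logging.WARNING, only the warning survives.
print(capture_at_level(logging.WARNING))
# At logging.DEBUG, both messages are logged.
print(capture_at_level(logging.DEBUG))
```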
Example 3
Rotate log files every 20,000 bytes, keeping up to 10 backups.

```python
from proxy_logger import *

args = {}
args["file_name"] = "/my/lg.log"
args["rotating"] = True
args["maxBytes"] = 20000
args["backupCount"] = 10

(logger_proxy, logging_mutex) = make_shared_logger_and_proxy(
    setup_std_shared_logger, "my_logger", args)
```
To use:

```python
import logging
from proxy_logger import *

(logger_proxy, logging_mutex) = make_shared_logger_and_proxy(
    setup_std_shared_logger, "my_logger", args)

with logging_mutex:
    logger_proxy.debug('This is a debug message')
    logger_proxy.info('This is an info message')
    logger_proxy.warning('This is a warning message')
    logger_proxy.error('This is an error message')
    logger_proxy.critical('This is a critical error message')
    logger_proxy.log(logging.DEBUG, 'This is a debug message')
```

Note that the logging function exception() is not included, because Python stack trace information is not well marshalled (pickled) across processes.
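The reason exception() is omitted can be demonstrated with the stdlib alone: traceback objects refuse to pickle, so exception details cannot be marshalled to another process. The helper name below is illustrative:

```python
# Sketch: traceback objects cannot be pickled, which is why a
# cross-process logging proxy cannot offer exception().
import pickle
import sys

def traceback_pickles():
    try:
        1 / 0
    except ZeroDivisionError:
        tb = sys.exc_info()[2]
    try:
        pickle.dumps(tb)
        return True
    except Exception:
        # pickle rejects the traceback object
        return False

print(traceback_pickles())  # False: the traceback cannot be marshalled
```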
Proxies for a log:
make_shared_logger_and_proxy(logger_factory, logger_name, args)
Make a logging object called "logger_name" by calling logger_factory(args).
This function returns a proxy to the shared logger, which can be copied to jobs in other processes, as well as a mutex which can be used to prevent simultaneous logging.

Parameters:
- logger_factory – a function which creates and returns an object with the logging interface. setup_std_shared_logger() is one example of a logger factory.
- logger_name – name of the log
- args – parameters passed (as a single argument) to logger_factory

Returns:
- a proxy to the shared logger, which can be copied to jobs in other processes
- a mutex which can be used to prevent simultaneous logging
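The proxy-plus-mutex pattern can be sketched with the stdlib alone. This is a hypothetical stand-in, not ruffus's implementation: the workers, file path, and function names are invented for illustration, and a plain file write stands in for the shared logger.

```python
# Stdlib-only sketch of the proxy + mutex pattern: worker processes
# share one log destination, and a mutex keeps writes from interleaving.
import multiprocessing
import os
import tempfile

LOG_PATH = os.path.join(tempfile.mkdtemp(), "shared.log")

def job(job_number, mutex):
    # Each job writes under the mutex, as with logging_mutex above.
    with mutex:
        with open(LOG_PATH, "a") as log:
            log.write("job %d done\n" % job_number)

def run_jobs():
    open(LOG_PATH, "w").close()  # start with an empty log
    # "fork" keeps the sketch self-contained (no importable main module).
    ctx = multiprocessing.get_context("fork")
    mutex = ctx.Lock()
    workers = [ctx.Process(target=job, args=(n, mutex)) for n in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    with open(LOG_PATH) as log:
        return log.readlines()

print(len(run_jobs()))  # 4 complete, un-interleaved lines
```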
Create a logging object
setup_std_shared_logger(logger_name, args)
This function is a thin wrapper around the Python logging module.
This logger_factory example creates logging objects which can then be managed by proxy via ruffus.proxy_logger.make_shared_logger_and_proxy().
The logging object can be:
- a disk log file
- an automatically backed-up (rotating) log
- any log specified in a configuration file
These are specified in the args dictionary forwarded by make_shared_logger_and_proxy().
Parameters:
- logger_name – name of the log
- args – a dictionary of parameters forwarded from make_shared_logger_and_proxy(). Valid entries include:
- "level"
Sets the threshold for the logger.
- "config_file"
The logging object is configured from this configuration file.
- "file_name"
Sets disk log file name.
- "rotating"
Chooses an automatically backed-up (rotating) log.
- "maxBytes"
Allows the file to roll over at a predetermined size.
- "backupCount"
If backupCount is non-zero, the system will save old log files by appending the extensions .1, .2, .3 etc., to the filename.
- "delay"
Defer file creation until the log is written to.
- "formatter"
Converts the message to a logged entry string. For example,
"%(asctime)s - %(name)s - %(levelname)6s - %(message)s"
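As a sketch of what a logger factory does with these entries (a hypothetical illustration using the stdlib logging module, not ruffus's actual setup_std_shared_logger; the function name, logger name, and file path are invented):

```python
# Hypothetical logger factory: build a stdlib logger from the same
# dictionary entries documented above.
import logging
import logging.handlers
import os
import tempfile

def make_logger_from_args(logger_name, args):
    logger = logging.getLogger(logger_name)
    logger.propagate = False
    logger.setLevel(args.get("level", logging.DEBUG))
    if args.get("rotating"):
        # "maxBytes" / "backupCount" map onto RotatingFileHandler.
        handler = logging.handlers.RotatingFileHandler(
            args["file_name"],
            maxBytes=args.get("maxBytes", 0),
            backupCount=args.get("backupCount", 0),
            delay=args.get("delay", False))
    else:
        # "delay" defers file creation until the first write.
        handler = logging.FileHandler(args["file_name"],
                                      delay=args.get("delay", False))
    handler.setFormatter(logging.Formatter(
        args.get("formatter", "%(levelname)s - %(message)s")))
    logger.addHandler(handler)
    return logger

log_file = os.path.join(tempfile.mkdtemp(), "lg.log")
my_log = make_logger_from_args("demo_logger",
                               {"file_name": log_file,
                                "level": logging.INFO,
                                "delay": True})
my_log.info("hello")   # written to the file
my_log.debug("quiet")  # below the INFO threshold: dropped
```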