Logging¤

ASAB logging is built on top of the standard Python logging module. It provides standard logging to stderr output when running on a console, as well as file and syslog output (both RFC 5424 and RFC 3164) for background mode of operation.

Log timestamps are captured with sub-second precision (depending on the system capabilities) and displayed including the microsecond part.

We recommend creating a logger L in every module that captures all necessary logging output. Alternative logging strategies are also supported.

Example

The following code

myapp/mymodule.py
import logging

L = logging.getLogger(__name__)

...

L.warning("Hello world!")

will produce the following output to the console:

25-Mar-2018 23:33:58.044595 WARNING myapp.mymodule Hello world!

Logging Levels¤

ASAB uses the standard Python logging levels with the addition of the LOG_NOTICE level. LOG_NOTICE is similar to logging.INFO, but it is visible even in non-verbose mode.

Level Numeric value Syslog Severity level
CRITICAL 50 Critical / crit / 2
ERROR 40 Error / err / 3
WARNING 30 Warning / warning / 4
LOG_NOTICE 25 Notice / notice / 5
INFO 20 Informational / info / 6
DEBUG 10 Debug / debug / 7
NOTSET 0

Usage of the LOG_NOTICE level:

L.log(asab.LOG_NOTICE, "This message will be visible regardless of the verbose configuration.")

Setting the level for the entire application:

[logging]
level = DEBUG

Setting levels for particular modules:

[logging]
levels=
    myApp.module1 DEBUG
    myApp.module2 WARNING

Each logger must be on a separate line. The logger name and the corresponding logging level have to be separated by a space.
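
For illustration, setting levels this way is roughly equivalent to calling setLevel() on each named logger yourself; a minimal sketch in plain Python, using the module names from the example above:

import logging

# Roughly what the '[logging] levels' option does: look up each named logger
# and set its level individually.
logging.getLogger("myApp.module1").setLevel(logging.DEBUG)
logging.getLogger("myApp.module2").setLevel(logging.WARNING)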

Verbose mode¤

By default, only logs with a level greater than 20 are visible, so DEBUG and INFO messages are not displayed. To see these levels, the application must be started in verbose mode. This is done with the command-line argument -v.

Example

There are three ways to display DEBUG and INFO log messages.

  1. By enabling the verbose mode with the command-line argument:

    python3 myapp.py -v
    
  2. By setting the asab.Config["logging"]["verbose"] boolean option:

    [logging]
    verbose = true
    

    The string true is interpreted as a boolean value by ConfigParser's getboolean() method.

  3. By setting the logging level directly:

    [logging]
    level = DEBUG
    

Note

The verbose mode also enables asyncio and Zookeeper debug logging.

Structured data¤

ASAB supports adding structured data to a log entry. Structured data is passed to the log message as a dictionary that has to be JSON-serializable.

Example

The following warning message with structured data

myapp/mymodule.py
L.warning("Hello world!", struct_data={'key1':'value1', 'key2':2})

will produce the following output to the console:

25-Mar-2018 23:33:58.044595 WARNING myapp.mymodule [sd key1="value1" key2="2"] Hello world!

Info

ASAB structured data logging follows RFC 5424.

Tip

Use logging with structured data to provide better error messages:

myapp/mymodule.py
a, b = 24, 0

try:
    print(a / b)
except ZeroDivisionError:
    L.error("Division by zero.", struct_data={"a": a, "b": b})

Output:

27-Jul-2023 16:54:22.311522 ERROR myapp.mymodule [sd a="24" b="0"] Division by zero.

Logging to console¤

The ASAB application will log to the console only if it detects that it is running on a terminal (using the os.isatty() function) or if the environment variable ASABFORCECONSOLE is set to 1. The latter is a useful setup for, e.g., PyCharm.
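
A minimal sketch of that detection, mirroring the condition used in the Logging class shown at the end of this page:

import os
import sys

# Console logging is used when stdout is a terminal or when the
# ASABFORCECONSOLE environment variable is set to a non-zero value.
force_console = os.environ.get('ASABFORCECONSOLE', '0') != '0'
if os.isatty(sys.stdout.fileno()) or force_console:
    print("Console logging would be enabled.")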

Logging to a file¤

The command-line argument -l enables logging to a file. A non-empty path option in the [logging:file] section of the configuration file also enables logging to a file.

Example of the configuration section:

[logging:file]
path=/var/log/asab.log
format="%(asctime)s %(levelname)s %(name)s %(struct_data)s%(message)s",
datefmt="%d-%b-%Y %H:%M:%S.%f"
backup_count=3
rotate_every=1d

Tip

When a deployment expects multiple instances of the same application to log into the same location, it is recommended to use the ${HOSTNAME} variable in the file path:

[logging:file]
path=/var/log/${HOSTNAME}/asab.log

In this way, the application instances will log to separate files in different folders. This is the intended behavior, since race conditions may occur when different application instances log into the same file.

Log rotation¤

ASAB supports log rotation. The log rotation is triggered by the UNIX signal SIGHUP, which can be used, e.g., to integrate with the logrotate utility. It is implemented using logging.handlers.RotatingFileHandler from the Python standard library. Time-based log rotation can also be configured using the rotate_every option.

Option Meaning
backup_count The number of old log files to keep before they are removed. The system saves old log files by appending the extensions '.1', '.2', etc. to the filename.
rotate_every Time interval of the log rotation. The default value is an empty string, which means that time-based log rotation is disabled. The interval is specified by an integer value and a unit, e.g. 1d (1 day) or 30M (30 minutes). Known units are H for hours, M for minutes, d for days and s for seconds.
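
For illustration, a rotate_every value can be converted into seconds as sketched below; the helper name rotate_every_to_seconds is ours, not part of ASAB, and merely mirrors the number-plus-unit pattern described above:

import re

def rotate_every_to_seconds(value):
    # Accepts values such as '1d', '30M', '12H' or '45s' and returns the
    # interval in seconds; returns None if the value does not match.
    match = re.match(r"^([0-9]+)([dMHs])$", value)
    if match is None:
        return None
    amount, unit = int(match.group(1)), match.group(2)
    return amount * {'s': 1, 'M': 60, 'H': 3600, 'd': 86400}[unit]

print(rotate_every_to_seconds('1d'))   # 86400
print(rotate_every_to_seconds('30M'))  # 1800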

Logging to syslog¤

The command-line argument -s enables logging to syslog.

A configuration section [logging:syslog] can be used to specify details about desired syslog logging.

Example of the configuration file section:

[logging:syslog]
enabled=true
format=5
address=tcp://syslog.server.lan:1554/
  • enabled is equivalent to the command-line switch -s and enables the syslog logging target.

  • format specifies which logging format will be used.

Option Format Note
5 syslog format RFC 5424
5micro syslog format RFC 5424 with microseconds
3 the old BSD syslog format RFC 3164 It is typically used by /dev/log.
m Mac OSX syslog flavour It is based on the BSD syslog format but is not fully compatible.

The default value is 3 on Linux and m on Mac OSX.

  • address specifies the location of the syslog server. It can be a UNIX path such as /dev/log or a URL. Possible URL values:

  • tcp://syslog.server.lan:1554/ for Syslog over TCP

  • udp://syslog.server.lan:1554/ for Syslog over UDP
  • unix-connect:///path/to/syslog.socket for Syslog over UNIX socket (stream)
  • unix-sendto:///path/to/syslog.socket for Syslog over UNIX socket (datagram), equivalent to specifying /path/to/syslog.socket directly; this is what /dev/log uses.

The default value is /dev/log on Linux or /var/run/syslog on Mac OSX.
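
For illustration, the address value can be classified as sketched below; the helper describe_syslog_address is ours, not part of ASAB, and 514 (the standard syslog UDP port) is used here only as a fallback when no port is given:

import urllib.parse

def describe_syslog_address(address):
    # A plain path starting with '/' means a UNIX datagram socket, e.g. /dev/log.
    if address.startswith('/'):
        return ('unix-datagram', address)
    url = urllib.parse.urlparse(address)
    if url.scheme == 'tcp':
        return ('tcp', url.hostname or 'localhost', url.port or 514)
    elif url.scheme == 'udp':
        return ('udp', url.hostname or 'localhost', url.port or 514)
    elif url.scheme == 'unix-connect':
        return ('unix-stream', url.path)
    elif url.scheme == 'unix-sendto':
        return ('unix-datagram', url.path)
    return None

print(describe_syslog_address('tcp://syslog.server.lan:1554/'))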

Logging of obsolete features¤

It has proved essential to inform operators about features that are going to become obsolete. ASAB offers the unified asab.LogObsolete logger, which can indicate through logs that a particular feature is marked as obsolete. Such log messages can then be grepped from logs uniformly.

Example

asab.LogObsolete.warning("Use of the obsolete function", struct_data={'eol':'2022-12-31'})

produces the output:

21-Jul-2022 14:32:40.983884 WARNING OBSOLETE [eol="2022-12-31"] Use of the obsolete function

We suggest adding an eol attribute to the struct_data of the log with a YYYY-MM-DD date of the planned end of life of the feature.

Tip

We suggest automating the detection of obsolete warnings in logs so that operators are informed well ahead of the actual removal of the feature. The string to search for in logs is "OBSOLETE".

Reference¤

Command-line arguments¤

Argument Action
-v Enables verbose logging so that DEBUG and INFO levels are visible.
-l Enables logging to a file.
-s Enables logging to syslog.

asab.log.LOG_NOTICE = 25 module-attribute ¤

Info log level that is visible in non-verbose mode. It should not be used for warnings and errors.

asab.log.Logging ¤

Bases: object

Source code in asab/log.py
class Logging(object):


	def __init__(self, app):
		self.RootLogger = logging.getLogger()

		self.ConsoleHandler = None
		self.FileHandler = None
		self.SyslogHandler = None

		if not self.RootLogger.hasHandlers():

			# Add console logger if needed
			if os.isatty(sys.stdout.fileno()) or os.environ.get('ASABFORCECONSOLE', '0') != '0' or Config.getboolean("logging:console", "enabled"):
				self._configure_console_logging()

			# Initialize file handler
			file_path = Config["logging:file"]["path"]

			if len(file_path) > 0:

				# Ensure file path
				directory = os.path.dirname(file_path)
				if not os.path.exists(directory):
					os.makedirs(directory)

				self.FileHandler = logging.handlers.RotatingFileHandler(
					file_path,
					backupCount=Config.getint("logging:file", "backup_count"),
					maxBytes=Config.getint("logging:file", "backup_max_bytes"),
				)
				self.FileHandler.setLevel(logging.DEBUG)
				self.FileHandler.setFormatter(StructuredDataFormatter(
					fmt=Config["logging:file"]["format"],
					datefmt=Config["logging:file"]["datefmt"],
					sd_id=Config["logging"]["sd_id"],
				))
				self.RootLogger.addHandler(self.FileHandler)

				rotate_every = Config.get("logging:file", "rotate_every")
				if rotate_every != '':
					rotate_every = re.match(r"^([0-9]+)([dMHs])$", rotate_every)
					if rotate_every is not None:
						i, u = rotate_every.groups()
						i = int(i)
						if i <= 0:
							self.RootLogger.error("Invalid 'rotate_every' configuration value.")
						else:
							if u == 'H':
								i = i * 60 * 60
							elif u == 'M':
								i = i * 60
							elif u == 'd':
								i = i * 60 * 60 * 24
							elif u == 's':
								pass

							# PubSub is not ready at this moment, we need to create timer in a future
							async def schedule(app, interval):
								self.LogRotatingTime = Timer(app, self._on_tick_rotate_check, autorestart=True)
								self.LogRotatingTime.start(i)
							asyncio.ensure_future(schedule(app, i))

					else:
						self.RootLogger.error("Invalid 'rotate_every' configuration value.")

			# Initialize syslog
			if Config["logging:syslog"].getboolean("enabled"):

				address = Config["logging:syslog"]["address"]

				if address[:1] == '/':
					self.SyslogHandler = AsyncIOHandler(app.Loop, socket.AF_UNIX, socket.SOCK_DGRAM, address)

				else:
					url = urllib.parse.urlparse(address)

					if url.scheme == 'tcp':
						self.SyslogHandler = AsyncIOHandler(app.Loop, socket.AF_INET, socket.SOCK_STREAM, (
							url.hostname if url.hostname is not None else 'localhost',
							url.port if url.port is not None else logging.handlers.SYSLOG_UDP_PORT
						))

					elif url.scheme == 'udp':
						self.SyslogHandler = FormatingDatagramHandler(
							url.hostname if url.hostname is not None else 'localhost',
							url.port if url.port is not None else logging.handlers.SYSLOG_UDP_PORT
						)

					elif url.scheme == 'unix-connect':
						self.SyslogHandler = AsyncIOHandler(app.Loop, socket.AF_UNIX, socket.SOCK_STREAM, url.path)

					elif url.scheme == 'unix-sendto':
						self.SyslogHandler = AsyncIOHandler(app.Loop, socket.AF_UNIX, socket.SOCK_DGRAM, url.path)

					else:
						self.RootLogger.warning("Invalid logging:syslog address '{}'".format(address))
						address = None

				if self.SyslogHandler is not None:
					self.SyslogHandler.setLevel(logging.DEBUG)
					format = Config["logging:syslog"]["format"]
					if format == 'm':
						self.SyslogHandler.setFormatter(MacOSXSyslogFormatter(sd_id=Config["logging"]["sd_id"]))
					elif format == '5':
						self.SyslogHandler.setFormatter(SyslogRFC5424Formatter(sd_id=Config["logging"]["sd_id"]))
					elif format == '5micro':
						self.SyslogHandler.setFormatter(SyslogRFC5424microFormatter(sd_id=Config["logging"]["sd_id"]))
					elif format == 'json':
						self.SyslogHandler.setFormatter(JSONFormatter())
					else:
						self.SyslogHandler.setFormatter(SyslogRFC3164Formatter(sd_id=Config["logging"]["sd_id"]))
					self.RootLogger.addHandler(self.SyslogHandler)

			# No logging is configured
			if self.ConsoleHandler is None and self.FileHandler is None and self.SyslogHandler is None:
				# Let's check if we run in Docker and if so, then log on stderr
				if running_in_container():
					self._configure_console_logging()

		else:
			self.RootLogger.warning("Logging seems to be already configured. Proceed with caution.")

		if Config["logging"].getboolean("verbose"):
			self.RootLogger.setLevel(logging.DEBUG)
		else:
			level_name = Config["logging"]["level"].upper()
			try:
				self.RootLogger.setLevel(_NAME_TO_LEVEL.get(level_name, level_name))
			except ValueError:
				L.error("Cannot detect logging level '{}'".format(level_name))

		# Fine-grained log level configurations
		levels = Config["logging"].get('levels')
		for level_line in levels.split('\n'):
			level_line = level_line.strip()
			if len(level_line) == 0 or level_line.startswith('#') or level_line.startswith(';'):
				# line starts with a comment
				continue
			try:
				logger_name, level_name = level_line.split(' ', 1)
			except ValueError:
				L.error("Cannot read line '{}' in '[logging] levels' section, expected format: 'logger_name level_name'.".format(level_line))
				continue
			level = _NAME_TO_LEVEL.get(level_name.upper(), level_name.upper())
			try:
				logging.getLogger(logger_name).setLevel(level)
			except ValueError:
				L.error("Cannot detect logging level '{}' for {} logger".format(level_name, logger_name))

	def rotate(self):
		if self.FileHandler is not None:
			self.RootLogger.log(LOG_NOTICE, "Rotating logs")
			self.FileHandler.doRollover()


	async def _on_tick_rotate_check(self):
		if self.FileHandler is not None:
			if self.FileHandler.stream.tell() > 1000:
				self.rotate()


	def _configure_console_logging(self):
		self.ConsoleHandler = logging.StreamHandler(stream=sys.stderr)

		# Disable colors when running in container
		if running_in_container():
			self.ConsoleHandler.setFormatter(StructuredDataFormatter(
				fmt=Config["logging:console"]["format"],
				datefmt=Config["logging:console"]["datefmt"],
				sd_id=Config["logging"]["sd_id"],
				use_color=False
			))
		else:
			self.ConsoleHandler.setFormatter(StructuredDataFormatter(
				fmt=Config["logging:console"]["format"],
				datefmt=Config["logging:console"]["datefmt"],
				sd_id=Config["logging"]["sd_id"],
				use_color=True
			))

		self.ConsoleHandler.setLevel(logging.DEBUG)
		self.RootLogger.addHandler(self.ConsoleHandler)