This module defines functions and classes which implement a flexible event logging system for applications and libraries.

Handler.emit(record): Do whatever it takes to actually log the specified logging record. This version is intended to be implemented by subclasses and so raises a NotImplementedError.

logging.shutdown(): Informs the logging system to perform an orderly shutdown by flushing and closing all handlers. This should be called at application exit, and no further use of the logging system should be made after this call.

Handler.flush(): Ensure all logging output has been flushed. This version does nothing and is intended to be implemented by subclasses.
>>> import logging
>>> logging.warning('Watch out!')
WARNING:root:Watch out!
FORMAT = '%(asctime)s %(clientip)-15s %(user)-8s %(message)s'
logging.basicConfig(format=FORMAT)
d = {'clientip': '192.168.0.1', 'user': 'fbloggs'}
logger = logging.getLogger('tcpserver')
logger.warning('Protocol problem: %s', 'connection reset', extra=d)

2006-02-08 22:20:02,165 192.168.0.1     fbloggs  Protocol problem: connection reset
old_factory = logging.getLogRecordFactory()

def record_factory(*args, **kwargs):
    record = old_factory(*args, **kwargs)
    record.custom_attribute = 0xdecafbad
    return record

logging.setLogRecordFactory(record_factory)
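Once such a factory is installed, the injected attribute behaves like any standard LogRecord field and can be referenced from a format string. A minimal self-contained sketch (the `%(custom_attribute)x` placeholder renders the value as hex):

```python
import logging

old_factory = logging.getLogRecordFactory()

def record_factory(*args, **kwargs):
    # delegate to the previous factory, then attach the extra field
    record = old_factory(*args, **kwargs)
    record.custom_attribute = 0xdecafbad
    return record

logging.setLogRecordFactory(record_factory)

# every record now carries custom_attribute, so formatters can use it
logging.basicConfig(format='%(custom_attribute)x %(message)s')
logging.warning('Watch out!')  # emits "decafbad Watch out!" (given the basicConfig above)
```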
class MyLogger(logging.getLoggerClass()):
    # ... override behaviour here
    pass
Best practice is, in each module, to have a logger defined like this:
import logging
logger = logging.getLogger(__name__)
near the top of the module, and then in other code in the module do e.g.
logger.debug('My message with %s', 'variable data')
If you need to subdivide logging activity inside a module, use e.g.
loggerA = logging.getLogger(__name__ + '.A')
loggerB = logging.getLogger(__name__ + '.B')
and log to loggerA and loggerB as appropriate. In your main program, do e.g.:
def main():
    import logging.config
    logging.config.fileConfig('/path/to/logging.conf')
    # your program code

if __name__ == '__main__':
    main()
using basicConfig:
# package/__main__.py
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
using fileConfig:
# package/__main__.py
import logging
import logging.config

logging.config.fileConfig('logging.conf')
and then create every logger using:
# package/submodule.py
# or
# package/subpackage/submodule.py
import logging

log = logging.getLogger(__name__)
log.info("Hello logging!")
base_logger.py:

import logging

logger = logging
logger.basicConfig(format='%(asctime)s - %(message)s', level=logging.INFO)
Other files:

from base_logger import logger

if __name__ == '__main__':
    logger.info("This is an info message")
Use a single Python file to configure logging as a singleton; name it log_conf.py:

# -*- coding: utf-8 -*-
import logging.config

def singleton(cls):
    instances = {}

    def get_instance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]

    return get_instance()
@singleton
class Logger():
    def __init__(self):
        logging.config.fileConfig('logging.conf')
        self.logr = logging.getLogger('root')
In another module, just import the config:

from log_conf import Logger

Logger.logr.info("Hello World")
Several of these answers suggest that at the top of a module you do
import logging
logger = logging.getLogger(__name__)
It is my understanding that this is considered very bad practice. The reason is that fileConfig() will disable all existing loggers by default (unless you pass disable_existing_loggers=False). E.g.
# my_module.py
import logging

logger = logging.getLogger(__name__)

def foo():
    logger.info('Hi, foo')

class Bar(object):
    def bar(self):
        logger.info('Hi, bar')
And in your main module:

# main.py
import logging
import logging.config

import my_module  # loading my_module creates its module-level logger

# this call now disables the logger in my_module by default (see the docs)
logging.config.fileConfig('logging.ini')

my_module.foo()
bar = my_module.Bar()
bar.bar()
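If you do keep module-level loggers, the usual workaround is to tell the config machinery not to disable them: both fileConfig() and dictConfig() accept disable_existing_loggers. A minimal sketch, using dictConfig() so the snippet is self-contained (the logger name and config dict are illustrative):

```python
import logging
import logging.config

# a module-level logger created *before* configuration runs,
# exactly like the one in my_module above
logger = logging.getLogger('my_module')

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,  # keep pre-existing loggers alive
    'root': {'level': 'INFO', 'handlers': []},
})

assert not logger.disabled  # still usable after configuration
```

With the default (disable_existing_loggers left at True), the same logger would have been silently disabled by the configuration call.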
In my module's __init__.py I have something like:
# mymodule/__init__.py
import logging

def get_module_logger(mod_name):
    logger = logging.getLogger(mod_name)
    handler = logging.StreamHandler()
    formatter = logging.Formatter(
        '%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    return logger
Then, in each module that needs a logger, I do:

# mymodule/foo.py
from [modname] import get_module_logger

logger = get_module_logger(__name__)
Logging is a module in the Python standard library that provides a richly formatted log with flexible filtering and the ability to redirect logs to other sinks such as syslog or email. By setting it up correctly, a log message can carry a lot of useful information about when and where it was fired, as well as its context, such as the running process or thread. As applications become more complex, having good logs can be very useful, not only when debugging but also to provide insight into application issues and performance. The standard library's logging module provides most of the basic logging features and is very handy, but it contains some quirks that can cause hours of headaches.
Please note that all code snippets in the post suppose that you have already imported the logging module:
import logging
For example, when a log “hello world” is sent through a log formatter:
"%(asctime)s — %(name)s — %(levelname)s — %(funcName)s:%(lineno)d — %(message)s"
it will become
2018-02-07 19:47:41,864 - a.b.c - WARNING - <module>:1 - hello world
Logger is probably the one that will be used directly the most often in the code and which is also the most complicated. A new logger can be obtained by:
toto_logger = logging.getLogger("toto")
A logger is unique by name: once a logger named “toto” has been created, subsequent calls to logging.getLogger("toto") will return the same object:
assert id(logging.getLogger("toto")) == id(logging.getLogger("toto"))
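Logger names also form a dot-separated hierarchy, which is why names like a.b.c appear in the formatted output above. A small sketch of the relationship (the names here are illustrative):

```python
import logging

parent = logging.getLogger("toto")
child = logging.getLogger("toto.tata")

# dotted names build a hierarchy: records logged to "toto.tata"
# propagate up to "toto" and then on to the root logger
assert child.parent is parent

# lookups by the same name always return the same cached object
assert child is logging.getLogger("toto.tata")
```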
That is all there is to it. In Python, __name__ contains the full name of the current module, so this will simply work in any module.

Python comes with a logging module in the standard library that provides a flexible framework for emitting log messages from Python programs. This module is widely used by libraries and is the first go-to point for most developers when it comes to logging. This article covers the basics of using the standard logging module that ships with all Python distributions. After reading this, you should be able to easily integrate logging into your Python application. Applications should configure logging as early as possible, preferably as the first thing in the application, so that log messages do not get lost during startup.
To emit a log message, a caller first requests a named logger. The name can be used by the application to configure different rules for different loggers. This logger can then be used to emit simply-formatted messages at different log levels (DEBUG, INFO, ERROR, etc.), which again can be used by the application to handle messages of higher priority differently from those of lower priority. While it might sound complicated, it can be as simple as this:
import logging
log = logging.getLogger("my-logger")
log.info("Hello, world")
The only responsibility modules have is to make it easy for the application to route their log messages. For this reason, it is a convention for each module to simply use a logger named like the module itself. This makes it easy for the application to route different modules differently, while also keeping log code in the module simple. The module just needs two lines to set up logging, and then use the named logger:
import logging

log = logging.getLogger(__name__)

def do_something():
    log.debug("Doing something!")
In the case of running Python in containers like Docker, logging to standard output is also often the easiest move as this output can be directly and easily managed by the container itself.
import logging
import os

logging.basicConfig(level=os.environ.get("LOGLEVEL", "INFO"))
exit(main())
The alternative is to send it directly to syslog. This is great for older operating systems that don’t have systemd. In an ideal world this should be simple, but sadly, Python requires a bit more elaborate configuration to be able to send unicode log messages. Here is a sample implementation.
import logging
import logging.handlers
import os

class SyslogBOMFormatter(logging.Formatter):
    def format(self, record):
        result = super().format(record)
        return "\ufeff" + result

handler = logging.handlers.SysLogHandler('/dev/log')
formatter = SyslogBOMFormatter(logging.BASIC_FORMAT)
handler.setFormatter(formatter)
root = logging.getLogger()
root.setLevel(os.environ.get("LOGLEVEL", "INFO"))
root.addHandler(handler)

try:
    exit(main())
except Exception:
    logging.exception("Exception in main()")
    exit(1)
Here’s a sample implementation.
import logging
import logging.handlers
import os

handler = logging.handlers.WatchedFileHandler(
    os.environ.get("LOGFILE", "/var/log/yourapp.log"))
formatter = logging.Formatter(logging.BASIC_FORMAT)
handler.setFormatter(formatter)
root = logging.getLogger()
root.setLevel(os.environ.get("LOGLEVEL", "INFO"))
root.addHandler(handler)

try:
    exit(main())
except Exception:
    logging.exception("Exception in main()")
    exit(1)
March 3, 2019
Then, instead of print(), you call logging.{level}(message) to show the message in the console.
import logging
logging.basicConfig(level=logging.INFO)

def hypotenuse(a, b):
    """Compute the hypotenuse"""
    return (a**2 + b**2)**0.5

logging.info("Hypotenuse of {a}, {b} is {c}".format(a=3, b=4, c=hypotenuse(3, 4)))
#> INFO:root:Hypotenuse of 3, 4 is 5.0
Had I set the level to logging.ERROR instead, only messages from logging.error and logging.critical would be logged. Clear?
import logging
logging.basicConfig(level=logging.WARNING)

def hypotenuse(a, b):
    """Compute the hypotenuse"""
    return (a**2 + b**2)**0.5

kwargs = {'a': 3, 'b': 4, 'c': hypotenuse(3, 4)}

logging.debug("a = {a}, b = {b}".format(**kwargs))
logging.info("Hypotenuse of {a}, {b} is {c}".format(**kwargs))
logging.warning("a={a} and b={b} are equal".format(**kwargs))
logging.error("a={a} and b={b} cannot be negative".format(**kwargs))
logging.critical("Hypotenuse of {a}, {b} is {c}".format(**kwargs))

#> WARNING:root:a=3 and b=4 are equal
#> ERROR:root:a=3 and b=4 cannot be negative
#> CRITICAL:root:Hypotenuse of 3, 4 is 5.0
To send the log messages to a file from the root logger, you need to set the filename argument in logging.basicConfig():

import logging
logging.basicConfig(level=logging.INFO, filename='sample.log')
Let's look at the code below:

# 1. code inside myprojectmodule.py
import logging
logging.basicConfig(filename='module.log')

#-----------------------------------

# 2. code inside main.py (imports the code from myprojectmodule.py)
import logging
import myprojectmodule  # this runs the code in myprojectmodule.py

logging.basicConfig(filename='main.log')  # no effect: the root logger is already configured
While you can give pretty much any name to the logger, the convention is to use the __name__
variable like this:
logger = logging.getLogger(__name__)
logger.info('my logging message')
import logging

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s :: %(levelname)s :: Module %(module)s :: Line No %(lineno)s :: %(message)s')
logging.info("This is root logger's logging message!")
In the next section, we'll show you how easy it is to customize basicConfig() to log lower-priority messages and direct them to a file on disk. Thanks to the new basicConfig() configuration, DEBUG-level logs are no longer being filtered out, and logs follow a custom format that includes the following attributes:
import logging

def word_count(myfile):
    logging.basicConfig(level=logging.DEBUG, filename='myapp.log',
                        format='%(asctime)s %(levelname)s:%(message)s')
    try:
        # count the number of words in a file and log the result
        with open(myfile, 'r') as f:
            file_data = f.read()
        words = file_data.split(" ")
        num_words = len(words)
        logging.debug("this file has %d words", num_words)
        return num_words
    except OSError as e:
        logging.error("error reading the file")
        [...]
2019-03-27 10:49:00,979 DEBUG:this file has 44 words
2019-03-27 10:49:00,979 ERROR:error reading the file
logger = logging.getLogger(__name__)
# lowermodule.py
import logging

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(name)s %(levelname)s:%(message)s')
logger = logging.getLogger(__name__)

def word_count(myfile):
    try:
        with open(myfile, 'r') as f:
            file_data = f.read()
        words = file_data.split(" ")
        final_word_count = len(words)
        logger.info("this file has %d words", final_word_count)
        return final_word_count
    except OSError as e:
        logger.error("error reading the file")
        [...]

# uppermodule.py
import logging
import lowermodule

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(name)s %(levelname)s:%(message)s')
logger = logging.getLogger(__name__)

def record_word_count(myfile):
    logger.info("starting the function")
    try:
        word_count = lowermodule.word_count(myfile)
        with open('wordcountarchive.csv', 'a') as file:
            row = str(myfile) + ',' + str(word_count)
            file.write(row + '\n')
    except:
        logger.warning("could not write file %s to destination", myfile)
    finally:
        logger.debug("the function is done for the file %s", myfile)
2019-03-27 21:16:41,200 __main__ INFO:starting the function
2019-03-27 21:16:41,200 lowermodule INFO:this file has 44 words
2019-03-27 21:16:41,201 __main__ DEBUG:the function is done for the file myfile.txt
2019-03-27 21:16:41,201 __main__ INFO:starting the function
2019-03-27 21:16:41,202 lowermodule ERROR:[Errno 2] No such file or directory: 'nonexistentfile.txt'
2019-03-27 21:16:41,202 __main__ DEBUG:the function is done for the file nonexistentfile.txt
[loggers]
keys = root
[handlers]
keys = fileHandler
[formatters]
keys = simpleFormatter
[logger_root]
level = DEBUG
handlers = fileHandler
[handler_fileHandler]
class = FileHandler
level = DEBUG
formatter = simpleFormatter
args = ("/path/to/log/file.log",)
[formatter_simpleFormatter]
format = %(asctime)s %(name)s - %(levelname)s:%(message)s
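To use a file like this, point fileConfig() at it during startup. The sketch below writes the same INI to a temporary location so it runs as-is; in a real project you would simply ship a logging.conf file and call logging.config.fileConfig('logging.conf') (the 'demo' logger name and temp paths here are illustrative):

```python
import logging
import logging.config
import os
import tempfile
import textwrap

# write the INI shown above to a temp file so this sketch is self-contained;
# normally the file would live at a fixed path such as logging.conf
logdir = tempfile.mkdtemp()
logfile = os.path.join(logdir, 'file.log')
conf = textwrap.dedent("""\
    [loggers]
    keys=root

    [handlers]
    keys=fileHandler

    [formatters]
    keys=simpleFormatter

    [logger_root]
    level=DEBUG
    handlers=fileHandler

    [handler_fileHandler]
    class=FileHandler
    level=DEBUG
    formatter=simpleFormatter
    args=({logfile!r},)

    [formatter_simpleFormatter]
    format=%(asctime)s %(name)s - %(levelname)s:%(message)s
    """).format(logfile=logfile)

confpath = os.path.join(logdir, 'logging.conf')
with open(confpath, 'w') as f:
    f.write(conf)

# load the config; disable_existing_loggers=False keeps any loggers
# that modules created before this point
logging.config.fileConfig(confpath, disable_existing_loggers=False)

# this record ends up in file.log, formatted by simpleFormatter
logging.getLogger('demo').debug('configured from file')
```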