multiple processes logging to the same rotated file


Suggestion : 1

Just for the heck of it, here is a complete solution example which uses Python's StreamHandler, uWSGI "daemonized file logging", and the logrotate daemon to log to file with rotation.

Your Django settings.py

LOGGING = {
   'version': 1,
   'disable_existing_loggers': False,
   'formatters': {
      'default': {
         'format': '%(asctime)s - %(process)s - %(levelname)s - %(name)s : %(message)s',
      },
   },
   'handlers': {
      'console': {
         'level': 'DEBUG',
         'class': 'logging.StreamHandler',
         'formatter': 'default',
      },
   },
   'root': {
      'handlers': ['console'],
      'level': 'DEBUG',
   },
}

Somewhere in your code

import logging

log = logging.getLogger(__name__)
log.info("test log!")

Run uWSGI with some logging params

$ uwsgi --http :9090 --chdir=`pwd -P` --wsgi-file=wsgi.py \
    --daemonize=test.log \   # daemonize AND set log file
    --log-maxsize=10000 \    # a 10k file rotate
    --workers=4              # start 4 workers

In the same dir, after a while:

-rw-r----- 1 big staff 1.0K Oct 12 09:56 test.log
-rw-r----- 1 big staff  11K Oct 12 09:55 test.log.1444636554

Alternatively, to handle rotating the files yourself, omit the --log-maxsize parameter and use a logrotate config file (/etc/logrotate.d/uwsgi-test-app):

/home/demo/test_django/*log {
    rotate 10
    size 10k
    daily
    compress
    delaycompress
}

Snippet for the rollover method (a patched version of doRollover in logging.handlers.RotatingFileHandler):

def doRollover(self):
    self.stream.close()
    if self.backupCount > 0:
        for i in range(self.backupCount - 1, 0, -1):
            sfn = "%s.%d" % (self.baseFilename, i)
            dfn = "%s.%d" % (self.baseFilename, i + 1)
            if os.path.exists(sfn):
                if os.path.exists(dfn):
                    os.remove(dfn)
                os.rename(sfn, dfn)
        dfn = self.baseFilename + ".1"
        if os.path.exists(dfn):
            os.remove(dfn)
        # os.rename(self.baseFilename, dfn)  # Instead of this...
        # ...copy and truncate, so the file descriptors other worker
        # processes hold open keep pointing at the same inode:
        shutil.copyfile(self.baseFilename, dfn)
        open(self.baseFilename, 'w').close()
    if self.encoding:
        self.stream = codecs.open(self.baseFilename, "w", self.encoding)
    else:
        self.stream = open(self.baseFilename, "w")
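
Rather than editing the standard library in place, the same copy-and-truncate rollover can live in a subclass. A minimal sketch, assuming a modern logging module (the class name CopyTruncateRotatingFileHandler is made up here):

import logging.handlers
import os
import shutil

class CopyTruncateRotatingFileHandler(logging.handlers.RotatingFileHandler):
    # Rotate by copying and truncating instead of renaming, so file
    # descriptors held open by other worker processes keep pointing
    # at the same inode.
    def doRollover(self):
        if self.stream:
            self.stream.close()
            self.stream = None
        if self.backupCount > 0:
            for i in range(self.backupCount - 1, 0, -1):
                sfn = "%s.%d" % (self.baseFilename, i)
                dfn = "%s.%d" % (self.baseFilename, i + 1)
                if os.path.exists(sfn):
                    if os.path.exists(dfn):
                        os.remove(dfn)
                    os.rename(sfn, dfn)
            dfn = self.baseFilename + ".1"
            if os.path.exists(dfn):
                os.remove(dfn)
            shutil.copyfile(self.baseFilename, dfn)  # copy instead of rename
            open(self.baseFilename, 'w').close()     # truncate in place
        if not self.delay:
            self.stream = self._open()  # reopen with the handler's mode/encoding

Note there is still a race window between the copy and the truncate, so records written in that instant can be duplicated or lost; this only avoids workers continuing to write into a renamed file.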

Then you can create your logger like this:

import logging
import time
from logging.handlers import RotatingFileHandler

# logfile_name, logfile_folder, maxBytes and format are assumed to be
# defined by the caller.
logger = logging.getLogger(logfile_name)
logfile = '{}/{}.log'.format(logfile_folder, logfile_name)
handler = RotatingFileHandler(
    logfile, maxBytes=maxBytes, backupCount=10
)
formatter = logging.Formatter(format, "%Y-%m-%d_%H:%M:%S")
formatter.converter = time.gmtime  # timestamps in UTC
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.isEnabledFor = lambda level: True  # force every level through
logger.propagate = 0  # don't also emit through the root logger

logger.warning("This is a log")

Suggestion : 2

Within my site, I use RotatingFileHandler to log special entries, but when uWSGI runs with multiple worker processes, I have found that several of the rotated log files are being modified at the same time. For example, here is a directory listing:

[root@... logs]# ls -lth
total 18M
-rw-rw-rw- 1 root root 2.1M  Sep 14 19:44 backend.log.7
-rw-rw-rw- 1 root root 1.3M  Sep 14 19:43 backend.log.6
-rw-rw-rw- 1 root root 738K  Sep 14 19:43 backend.log.3
-rw-rw-rw- 1 root root 554K  Sep 14 19:43 backend.log.1
-rw-rw-rw- 1 root root 1013K Sep 14 19:42 backend.log.4
-rw-rw-rw- 1 root root 837K  Sep 14 19:41 backend.log.5
-rw-rw-rw- 1 root root 650K  Sep 14 19:40 backend.log.2
-rw-rw-rw- 1 root root 656K  Sep 14 19:40 backend.log
-rw-r--r-- 1 root root 10M   Sep 13 10:11 backend.log.8
-rw-r--r-- 1 root root 0     Aug 21 15:53 general.log
[root@... logs]#

Or does anyone have a better idea for logging multiple uWSGI instances to the same rotated file?

Your Django settings.py

LOGGING = {
   'version': 1,
   'disable_existing_loggers': False,
   'formatters': {
      'default': {
         'format': '%(asctime)s - %(process)s - %(levelname)s - %(name)s : %(message)s',
      },
   },
   'handlers': {
      'console': {
         'level': 'DEBUG',
         'class': 'logging.StreamHandler',
      },
   },
   'root': {
      'handlers': ['console'],
      'level': 'DEBUG',
   },
}

Somewhere in your code

log = logging.getLogger(__name__)
log.info("test log!")

Run uWSGI with some logging params

$ uwsgi--http: 9090--chdir = `pwd -P`--wsgi - file = wsgi.py\
   --daemonize = test.log\ # daemonize AND set log file
   --log - maxsize = 10000\ # a 10 k file rotate
   --workers = 4 # start 4 workers

In the same dir, after a while:

-rw - r-- -- - 1 big staff 1.0 K Oct 12 09: 56 test.log -
   rw - r-- -- - 1 big staff 11 K Oct 12 09: 55 test.log .1444636554

Alternatively, to handle rotating the files yourself, omit the --log-maxsize parameter and use a logrotate config file (/etc/logrotate.d/uwsgi-test-app):

/home/demo / test_django
/*log {
    rotate 10
    size 10k
    daily
    compress
    delaycompress
}

Suggestion : 3

From the logging.handlers documentation:

• The RotatingFileHandler class, located in the logging.handlers module, supports rotation of disk log files.
• The TimedRotatingFileHandler class, located in the logging.handlers module, supports rotation of disk log files at certain timed intervals.
• The DatagramHandler class, located in the logging.handlers module, inherits from SocketHandler to support sending logging messages over UDP sockets.
• The SysLogHandler class, located in the logging.handlers module, supports sending logging messages to a remote or local Unix syslog.

The socket-based handlers pickle each record's attribute dictionary and prefix it with its length before sending:

data = pickle.dumps(record_attr_dict, 1)
datalen = struct.pack('>L', len(data))
return datalen + data
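
Of these, SysLogHandler sidesteps the multi-process rotation problem entirely: every worker ships records to the syslog daemon, which is then the only process writing to (and rotating) the file. A minimal sketch, assuming a local Linux syslog socket at /dev/log:

import logging
import logging.handlers
import os

log = logging.getLogger("myapp")
log.setLevel(logging.INFO)

# Each uWSGI worker can create its own handler; the syslog daemon
# serializes all writes, so there is exactly one rotator.
handler = logging.handlers.SysLogHandler(address="/dev/log")
handler.setFormatter(logging.Formatter(
    "myapp: %(process)d %(levelname)s %(message)s"))
log.addHandler(handler)

log.info("logged via syslog from worker %d", os.getpid())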

Suggestion : 4

This is the piped-logger approach used by the Apache HTTP Server family: the server writes every log line to a single rotatelogs child process, which alone owns and rotates the file. From the "FAQ: rotatelogs and piped loggers": it is possible that a more sophisticated, non-"rotatelogs" piped logger could coordinate its writes across processes, or arrange to close the rotated file earlier.

CustomLog "|grep -v \b200\b | /opt/IHS/bin/rotatelogs /opt/IHS/logs/grepped 86400" common
CustomLog "|/usr/HTTPServer/bin/rotatelogs -l /www/logs/access_%Y-%m-%d 86400" custom
date -u --date=@$(($((`date +%s` / 604800)) * 604800))
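
The date one-liner above rounds the current epoch time down to a 604800-second (one-week) boundary, which is how rotatelogs picks the start of each rotation interval (in UTC) when given a plain interval. A minimal sketch of the same arithmetic in Python:

import time

WEEK = 604800  # seconds in 7 days
now = int(time.time())
week_start = now - (now % WEEK)  # round down to the interval boundary
print(time.strftime("%Y-%m-%d %H:%M:%S UTC", time.gmtime(week_start)))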

Suggestion : 5


# see "man logrotate" for details
# rotate log files weekly
weekly
# use the syslog group by default, since this is the owning group
# of /var/log/syslog.
su root adm
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create

[...]
/var/log/apache2/*.log {
    weekly
    maxsize 1G
}

/var/log/apache2/* {
    [...]
    compress
    delaycompress
}

/var/log/apache2/*.log {
    [...]
    sharedscripts
    postrotate
        rsync -avzr /var/log/apache2/*.log-* REMOTE-HOST:/path/to/directory/
    endscript
}

/var/log/mysql.log /var/log/mysql/*log {
    [...]
    create 640 mysql adm
    [...]
}

setfacl -m u:dd-agent:rx /var/log/apache2
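
On the Python side, when logrotate owns the rotation as above, the standard library's logging.handlers.WatchedFileHandler pairs well with it: it stats the file before each emit and reopens it if the inode changed, i.e. after logrotate moved the file aside. A minimal sketch (the path is a placeholder; POSIX only):

import logging
import logging.handlers

handler = logging.handlers.WatchedFileHandler("/var/log/myapp/app.log")
handler.setFormatter(logging.Formatter(
    "%(asctime)s - %(process)s - %(levelname)s - %(message)s"))

log = logging.getLogger("myapp")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("safe to rotate with logrotate's create directive")

Concurrent appends from multiple workers still rely on small writes to an append-mode file being atomic, so keep individual log lines short.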

Suggestion : 6

This package provides an additional log handler for Python's standard logging package (PEP 282). This handler will write log events to a log file which is rotated when the log file reaches a certain size. Multiple processes can safely write to the same log file concurrently. Rotated logs can be gzipped if desired. Both Windows and POSIX systems are supported. An optional threaded queue logging handler is provided to perform logging in the background.

Concurrent Log Handler (CLH) is designed to allow multiple processes to write to the same logfile in a concurrent manner. It is important that each process involved MUST follow these requirements:

• A separate handler instance is needed for each individual log file. For instance, if your app writes to two different logs you will need to set up two CLH instances per process.

pip install concurrent-log-handler

# Or, to build from source:
python setup.py install
python setup.py clean --all bdist_wheel
# Copy the .whl file from under the "dist" folder

from logging import getLogger, INFO
from concurrent_log_handler import ConcurrentRotatingFileHandler
import os

log = getLogger(__name__)
# Use an absolute path to prevent file rotation trouble.
logfile = os.path.abspath("mylogfile.log")
# Rotate log after reaching 512K, keep 5 old copies.
rotateHandler = ConcurrentRotatingFileHandler(logfile, "a", 512 * 1024, 5)
log.addHandler(rotateHandler)
log.setLevel(INFO)

log.info("Here is a very exciting log message, just for you")
To configure the same handler from a logging config file (fileConfig format) instead:

[handler_hand01]
class = handlers.ConcurrentRotatingFileHandler
level = NOTSET
formatter = form01
args = ("rotating.log", "a")
kwargs = {'backupCount': 5, 'maxBytes': 1048576, 'use_gzip': True}

To control line endings explicitly (for example, Windows-style CRLF with no newline translation), CLH also accepts:

kwargs = {'newline': '', 'terminator': '\r\n'}