Configure all threads as daemons, e.g.:
```python
thread = threading.Thread(target=thread_msg, args=args, kwargs=kwargs)
thread.daemon = True
thread.start()
```
You can use signal.pthread_kill to send a signal to a specific thread:
```python
from signal import pthread_kill, SIGTSTP
from threading import Thread
from itertools import count
from time import sleep

def target():
    for num in count():
        print(num)
        sleep(1)

thread = Thread(target=target)
thread.start()
sleep(5)
pthread_kill(thread.ident, SIGTSTP)
```

Output:

```
0
1
2
3
4
+ Stopped
```
Comments on this answer: "It does not work when there are many threads at the same time." / "Thanks for sharing your code. I've had trouble stopping threads using Ctrl-C and cleaning up processes. This code works perfectly for me."
Here is another example of a simple HTTP server that shows how to handle the signals SIGINT and SIGTERM. The server shuts down in an orderly way when the main thread receives a signal, which causes an exception to be raised. The except clause calls cancel(), which in turn calls shutdown() and server_close().
```python
import os
import signal
import socketserver
import threading
import time
from http.server import SimpleHTTPRequestHandler

class MyHandler(SimpleHTTPRequestHandler):
    path_to_image = 'plot.png'
    img_size = os.stat(path_to_image).st_size

    def do_HEAD(self):
        self.send_response(200)
        self.send_header("Content-type", "image/png")
        self.send_header("Content-length", self.img_size)
        self.end_headers()

    def do_GET(self):
        self.do_HEAD()
        with open(self.path_to_image, 'rb') as f:
            self.wfile.write(f.read())

class MyServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    def __init__(self, server_address, RequestHandlerClass):
        self.allow_reuse_address = True
        socketserver.TCPServer.__init__(self, server_address,
                                        RequestHandlerClass, False)

class WorkerThread(threading.Thread):
    def __init__(self, target, sh, cl):
        threading.Thread.__init__(self)
        self.handler = target
        self.shut = sh
        self.close = cl

    def run(self):
        print('Running thread', threading.current_thread().name)
        self.handler()

    def cancel(self):
        print('Cancel called')
        self.shut()
        self.close()

class ServerExit(Exception):
    pass

def service_shutdown(signum, frame):
    print('Caught signal %d' % signum)
    raise ServerExit

if __name__ == "__main__":
    signal.signal(signal.SIGTERM, service_shutdown)
    signal.signal(signal.SIGINT, service_shutdown)
    server = MyServer(("localhost", 7777), MyHandler)
    server.server_bind()
    server.server_activate()
    server_thread = WorkerThread(server.serve_forever,
                                 server.shutdown, server.server_close)
    try:
        server_thread.start()
        while True:
            time.sleep(1.0)
    except ServerExit:
        print('Caught exception')
        server_thread.cancel()
```
I'm experiencing that the following short program:

```python
import threading
event = threading.Event()
event.wait()
```

cannot be interrupted with Ctrl-C on Python 2.7.15 or 3.7.1 on Windows 10 (using the Anaconda Python distribution). However, if the wait is given a timeout:

```python
import threading
event = threading.Event()
while True:
    if event.wait(10000):
        break
```

then this is interruptible on Python 2.7.15, but is still uninterruptible on Python 3.7.1.
If I add:

```python
import signal
signal.signal(signal.SIGINT, signal.SIG_DFL)
```

before the wait() call, then the call is interruptible on both Python versions without needing to add a timeout.
I'm not sure it's quite as simple as calling sys.exit, but it would be a great project to bring universal cancellation support to all (regularly) blocking functions. Asyncio has suffered from this as well. Part of the problem is that POSIX APIs often don't support cancellation, and so things have been designed in ways that prevent use of Windows's cancellation support (via APCs or kernel events). Given that we would have to emulate a lot of things on all platforms to make it consistent, this is certainly a PEP and long-term project. (And probably a lot of arguments with people who don't like new abstractions :( ) But on this particular issue, making the unconditional wait interruptible by signals shouldn't be impossible. It's been done elsewhere, so probably just this one got missed.
@ericsun: Windows calls the console event handler in a separate thread. The console event handler receives CTRL_C_EVENT, CTRL_BREAK_EVENT, console close, logoff, and system shutdown events.

Originally, Windows devised an APC mechanism to simulate asynchronous delivery of POSIX signals to threads. Those APCs are invoked during alertable wait functions. Delivery of an APC also aborts the wait with a WAIT_IO_COMPLETION return code. An APC can be queued with the QueueUserAPC function, and an APC queue can be processed at any time by calling an alertable wait function with a zero timeout, for example SleepEx(0, TRUE).

If you need an APC to break a wait for asynchronous input (like a console or serial port), use overlapped I/O with the GetOverlappedResultEx function. To cancel the I/O request, use the CancelIo function on the thread that issued the request. Note that you still need to wait for the cancelled request to complete the cancellation with GetOverlappedResult.
Wed 19 April 2017, posted by Nathaniel J. Smith
```python
lock.acquire()
try:
    do_stuff()           # <-
    do_something_else()  # <- control-C anywhere here is safe
    and_some_more()      # <-
finally:
    lock.release()
```
```python
lock.acquire()   # <- control-C could happen here
try:
    ...
finally:
    # <- or here
    lock.release()
```
The problem with KeyboardInterrupt is that it can happen anywhere. If we want to make this manageable, then we need to somehow trim down the number of places where we need to think about control-C. The general strategy here is to register a custom handler for SIGINT that does nothing except set some kind of flag to record that the signal happened. This way we can be pretty confident that the signal handler itself won't interfere with whatever the program was doing when the signal handler ran. And then we have to make sure that our program checks this flag on a regular basis at places where we know how to safely clean up and exit. The best way to think about this is that we set up a "chain of custody" where responsibility for handling the signal gets handed along from tricky low-level code up to higher-level code whose execution context is better-defined:
```
custom signal handler   ->   our program's main loop
      sets flag                    checks flag
```
It's hard to say more than this, though, because the implementation is going to depend a lot on the way each particular program is put together. That's the downside to this approach: making it work at all requires insight into our program's structure and careful attention to detail. If we mess up and don't check the flag for a few seconds (perhaps because we're busy doing something else, or the program is sleeping while waiting for I/O to arrive, or ...), then oops, it takes a few seconds to respond to control-C. To avoid this we may need to invent some kind of mechanism to not just set the flag, but also prod the main loop into checking it in a timely fashion:
```
custom signal handler         ->   our program's main loop
      sets flag                      gets woken up by being poked with a stick
      & pokes main loop              & checks flag
        with a stick
```
For example, a simple web server might have a task tree that looks like:
```
parent task supervising the other tasks
│
├─ task listening for new connections on port 80
│
├─ task talking to client 1
│
├─ task talking to client 2
│
├─ task talking to client 3
┊
```
```python
next_task = run_queue.pop()
```
```python
# On Windows this ignores control-C, so be prepared to kill it somehow...
import asyncio

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.sleep(99999))
```
This module provides mechanisms to use signal handlers in Python. Some general rules for working with signals and their handlers:

- When a signal arrives during an I/O operation, it is possible that the I/O operation raises an exception after the signal handler returns. This is dependent on the underlying Unix system's semantics regarding interrupted system calls.
- signal.SIG_IGN is another standard signal handler, which will simply ignore the given signal.
- Although Python signal handlers are called asynchronously as far as the Python user is concerned, they can only occur between the "atomic" instructions of the Python interpreter. This means that signals arriving during long calculations implemented purely in C (such as regular expression matches on large bodies of text) may be delayed for an arbitrary amount of time.
```python
import signal, os

def handler(signum, frame):
    print('Signal handler called with signal', signum)
    raise IOError("Couldn't open device!")

# Set the signal handler and a 5-second alarm
signal.signal(signal.SIGALRM, handler)
signal.alarm(5)

# This open() may hang indefinitely
fd = os.open('/dev/ttyS0', os.O_RDWR)

signal.alarm(0)  # Disable the alarm
```