python not closing file descriptors

It seems that the extra com.open() call is causing the issue. According to the docs, serial.Serial() returns the port already open, so you don't need to open it again. On Linux (on all POSIX systems, as far as I know) open just increments a reference counter and close just decrements it. There is a deleted answer here by @wheaties advising the use of with, which I would recommend as well:

with serial.Serial(port_name, 9600, ...) as com:
    com.flushInput()
    com.flushOutput()
    ...

Use finally to ensure the port is closed.

try:
    ...
except serial.SerialException:
    ...
finally:
    com.close()
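
For completeness, here is a minimal sketch that combines both ideas, assuming the pyserial API and a placeholder port name; reset_input_buffer()/reset_output_buffer() are the pyserial 3.x names for flushInput()/flushOutput():

import serial

port_name = "/dev/ttyUSB0"  # placeholder; adjust for your system

try:
    # serial.Serial() returns the port already open; the with block
    # guarantees it is closed even if an exception is raised inside it.
    with serial.Serial(port_name, 9600, timeout=1) as com:
        com.reset_input_buffer()   # flushInput() on older pyserial
        com.reset_output_buffer()  # flushOutput() on older pyserial
        com.write(b"ping\n")
        reply = com.readline()
except serial.SerialException as exc:
    print("serial error:", exc)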

Suggestion : 2


> Using close_fds=False, subprocess can use posix_spawn() which is safer and faster than fork+exec. For example, on Linux, the glibc implements it as a function using vfork, which is faster than fork if the parent allocated a lot of memory. On macOS, posix_spawn() is even a syscall.

On Linux, unless you care specifically about users running Python 3.10+ on older kernels, implementing support for the closerange() syscall in subprocess would provide a better net benefit. This is because (a) the performance advantage of posix_spawn() is no longer relevant on Linux after bpo-35823, and (b) supporting closerange() would benefit even those users who still need close_fds=True.
Note that vfork() support was merged for 3.10 via bpo-35823, so posix_spawn() is less of a performance carrot than it used to be on Linux. vfork() exists on macOS as well; that code could likely be enabled there after some investigation and testing.
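
To make the trade-off concrete, here is a hedged sketch of the relevant subprocess options; the child command is just a placeholder, and the posix_spawn() remark simply restates the quoted discussion above:

import os
import subprocess

# Default since 3.2: close_fds=True, so every fd above 2 is closed in the child.
subprocess.run(["true"], close_fds=True)

# close_fds=False leaves inheritable descriptors open and, per the discussion
# above, is what currently allows the posix_spawn() fast path.
subprocess.run(["true"], close_fds=False)

# pass_fds keeps the close_fds=True behaviour but whitelists specific
# descriptors that the child should inherit.
r, w = os.pipe()
subprocess.run(["true"], pass_fds=(r,))
os.close(r)
os.close(w)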

Regardless, changing this default sounds difficult due to the variety of things depending on the existing behavior, potentially for security reasons as you've noted, when running in a process with other file descriptors potentially not managed by Python (i.e. extension modules) that don't explicitly use CLOEXEC.

The subprocess APIs are effectively evolving to become lower level over time, as we continually find warts in them that need addressing but find defaults that cannot change due to existing uses. A higher level "best practices for launching child processes" module, with APIs reflecting explicit intents (performance vs security vs simplicity) rather than requiring users to understand subprocess platform-specific details, may be a good idea at this point (on PyPI, I assume).

We changed the posix close_fds default to True in 3.2 when Jeffrey and I wrote _posixsubprocess, to better match the behavior most users actually want; undoing that doesn't feel right.
> Python 3.7 added support for the PROC_THREAD_ATTRIBUTE_HANDLE_LIST in the subprocess.STARTUPINFO lpAttributeList['handle_list'] parameter.

The motivating reason to add support for the WinAPI handle list was to allow changing the default to close_fds=True regardless of the need to inherit standard handles. However, even when using the handle list, one still has to make each handle in the list inheritable. Thus concurrent calls to os.system() and os.spawn*() -- which are not implemented via subprocess.Popen(), but should be -- may leak the handles in the list. If the default is changed to close_fds=False, then by default concurrent Popen() calls may also leak temporarily inheritable handles when a handle list isn't used to constrain inheritance.
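
A hedged, Windows-only sketch of the handle-list mechanism described above; the child command and file name are placeholders. Note the os.set_handle_inheritable() step, which is exactly the temporarily-inheritable window the comment warns about:

import msvcrt
import os
import subprocess

fd = os.open("log.txt", os.O_WRONLY | os.O_CREAT)
handle = msvcrt.get_osfhandle(fd)          # OS handle backing the fd
os.set_handle_inheritable(handle, True)    # still required, even with a handle list

# Constrain inheritance to just this handle via PROC_THREAD_ATTRIBUTE_HANDLE_LIST.
si = subprocess.STARTUPINFO(lpAttributeList={"handle_list": [handle]})
subprocess.run(["child.exe"], startupinfo=si)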

Background

Windows implicitly duplicates standard I/O handles from a parent process to a child process if they're both console applications and the child inherits the console session. However, subprocess.Popen() requires standard I/O inheritance to work consistently even without an inherited console session. It explicitly inherits standard handles in the STARTUPINFO record, which requires CreateProcessW to be called with bInheritHandles as TRUE. In 3.7+, Popen() also passes the standard-handle values in a PROC_THREAD_ATTRIBUTE_HANDLE_LIST attribute that constrains inheritance, but handles in the list still have to be made inheritable before calling CreateProcessW. Thus they may be leaked by concurrent CreateProcessW calls that inherit handles without a constraining handle list.
I'd like to provide another, non-performance-related use case for changing the default value of Popen's close_fds parameter back to False.

In some scenarios, a (non-Python) parent process may want its descendant processes to inherit a particular file descriptor, and for each descendant process to pass on that file descriptor to its own children. In this scenario, a Python program may just be an intermediate script that calls out to multiple subprocesses, and closing the inheritable file descriptors by default would interfere with the parent process's ability to pass on that file descriptor to descendants.

As a concrete example, we have a (non-Python) build system and task runner that orchestrates many tasks to run in parallel. Some of those tasks end up invoking Python scripts that use subprocess.run() to run other programs. Our task runner intentionally passes an inheritable file descriptor that is unique to each task as a form of keep-alive token; if the child processes continue to pass inheritable file descriptors to their children, then we can determine whether all of the processes spawned from a task have terminated by checking whether the last open handle to that file descriptor has been closed. This is particularly important when a process exits before its children, sometimes uncleanly due to being force-killed by the system or by a user.

In our use case, Python's default value of close_fds=True interferes with our tracking scheme, since it prevents Python's subprocesses from inheriting that file descriptor, even though that file descriptor has intentionally been made inheritable.

While we are able to work around the issue by explicitly setting close_fds=False in as much of our Python code as possible, it's difficult to enforce this globally since we have many small Python scripts. We also have no control over any third-party libraries that may possibly call Popen.
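
As a sketch of that workaround from the intermediate script's side (the KEEPALIVE_FD environment variable is purely hypothetical here; close_fds=False and pass_fds are the documented subprocess knobs):

import os
import subprocess

# Hypothetical convention: the task runner tells us which descriptor is
# the keep-alive token, e.g. via an environment variable.
keepalive_fd = int(os.environ["KEEPALIVE_FD"])

# Workaround 1: disable descriptor closing entirely, so the already
# inheritable keep-alive fd is passed straight through to the child.
subprocess.run(["some-build-step"], close_fds=False)

# Workaround 2: keep the default close_fds=True but explicitly whitelist
# the keep-alive descriptor for this child.
subprocess.run(["some-build-step"], pass_fds=(keepalive_fd,))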

Regarding security, PEP 446 already makes any files opened from within a Python program non-inheritable by default, which I agree is a good default. One can make the argument that it's not Python's job to enforce a security policy on file descriptors that a Python process has inherited from a parent process, since Python cannot distinguish between descriptors that were accidentally or intentionally inherited.
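
The PEP 446 behaviour referred to here is easy to observe directly; os.get_inheritable() and os.set_inheritable() are the standard APIs (file name is a placeholder):

import os

fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT)
print(os.get_inheritable(fd))   # False: descriptors created by Python are non-inheritable (PEP 446)

# A process that *wants* its children to see this descriptor must opt in.
os.set_inheritable(fd, True)
print(os.get_inheritable(fd))   # True
os.close(fd)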

Suggestion : 3

Note: The close method of the os module is used to close a given file descriptor. Every file has a non-negative integer associated with it; this integer is called the file descriptor for that particular file. File descriptors are allocated in sequential order, with the lowest unallocated value taking precedence. In the example below, the final os.close() call closes the file descriptor. (To close a whole range of descriptors at once, see the closerange() sketch after the example.)

Syntax

os.close(fd)

import os

f_name = "file.txt"
fileObject = open(f_name, "r")
fd = fileObject.fileno()  # the integer descriptor backing the file object
print("The file descriptor for %s is %s" % (f_name, fd))

os.close(fd)  # close the descriptor directly via os.close()
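
For the range-closing case mentioned in the note above, os.closerange() covers a whole span of descriptors in one call, ignoring any that are not open:

import os

# Close descriptors 3 (inclusive) through 100 (exclusive); errors from
# descriptors that are not open are ignored.
os.closerange(3, 100)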

Suggestion : 4

It is probably unwise to close file descriptors while they may be in use by system calls in other threads in the same process. Since a file descriptor may be reused, there are some obscure race conditions that may cause unintended side effects.

If fd is the last file descriptor referring to the underlying open file description (see open(2)), the resources associated with the open file description are freed; if the descriptor was the last reference to a file which has been removed using unlink(2), the file is deleted.

A successful close does not guarantee that the data has been successfully saved to disk, as the kernel defers writes. It is not common for a file system to flush the buffers when the stream is closed. If you need to be sure that the data is physically stored, use fsync(2). (It will depend on the disk hardware at this point.)

Synopsis

#include <unistd.h>

int close(int fd);
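
In Python, the fsync(2) advice from the excerpt above translates to flushing the user-space buffer and then syncing the descriptor before the file is closed; a minimal sketch with a placeholder file name:

import os

with open("important.dat", "wb") as f:
    f.write(b"payload")
    f.flush()              # push Python's internal buffer to the OS
    os.fsync(f.fileno())   # ask the kernel to write the data to the device
# the descriptor is closed when the with block exits; fsync already ran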

Suggestion : 5

After a few thousand files we had so many files open that the operating system refused to let us open any more. When that happened the worker was practically useless.

For a file that you're writing to, if you neglect to close it prior to terminating the process, any output still remaining in the buffers might not be written, leading to data loss and/or file corruption. That's a potentially much more serious problem.

But for a script that runs for a long time, like a server, it can be very serious because file descriptors are a limited resource and a process can run out of them if it neglects to close them.

...you never have to worry about remembering to close a file ever again. At the end of that block, it does it for you.

If you use

with open("filename.txt") as f:

Suggestion : 6

2020-07-05

The preferred way to manage files is using the "with" statement:

with open("hello.txt") as hello_file:
   for line in hello_file:
   print line
with open("README.md") as readme:
   long_description = readme.read()
import csv
import json

def load(filepath):
   ""
"the supposedly bad way"
""
return json.load(open(filepath))

def load(filepath):
   ""
"the supposedly good way"
""
with open(filepath) as file:
   return json.load(file)

def load(filepath):
   ""
"with a different file format"
""
with open(filepath) as file:
   return csv.reader(file)
def load(filepath):
   with open(filepath) as file:
   yield from csv.reader(file)
def load(filepath):
   with open(filepath) as file:
   contents = file.read()
return json.loads(contents)
response = requests.get(...)

with requests.Session() as session:
   response = session.get(...)
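
One subtlety in the snippets above deserves a note: returning csv.reader(file) hands back a lazy reader whose underlying file is already closed once the with block exits, so iterating it fails, while the yield from generator version keeps the file open for as long as rows are being consumed. A usage sketch, assuming the generator variant of load() and a placeholder file name:

rows = load("data.csv")   # generator version: nothing is opened yet
for row in rows:          # the file is open only while rows are consumed
    print(row)

# By contrast, iterating the result of the "return csv.reader(file)" variant
# raises ValueError: I/O operation on closed file, because the with block
# closed the file before any row was read.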

Suggestion : 7

Not exactly... close_file_descriptor_if_open always closes the file descriptor and catches the exception. If the fd is not open, os.close raises an exception, and that is time-consuming. My proposal was to get the list of open file descriptors instead of iterating over all of them. On Linux you can just check os.listdir('/proc/' + str(os.getpid()) + '/fd'), but of course a fallback is needed because some environments may not have /proc mounted, or it may be inaccessible due to admin restrictions.

Yes, that is exactly what daemon.close_file_descriptor_if_open is intended to do. Because a file descriptor is merely an integer, there does not appear to be a more efficient way to do this.

It's common to have exclude_fds non-empty; for example, a log stream for the daemon will need its file to remain open. The iteration to close file descriptors must still skip the items in exclude_fds.

...and attempt to close each file descriptor within Python (most likely receiving the exception EBADF - Bad file descriptor).

$ python -c 'import resource; print(resource.getrlimit(resource.RLIMIT_NOFILE)[1])'
1048576

import os

closerange = os.closerange

def close_all_open_files(exclude=None):
    min_fd = 3
    max_fd = get_maximum_file_descriptors()

    if not exclude:
        closerange(min_fd, max_fd)
        return

    for ex_fd in sorted(exclude):
        if ex_fd < min_fd:
            continue
        if ex_fd == min_fd:
            min_fd = ex_fd + 1
            continue
        closerange(min_fd, ex_fd)
        min_fd = ex_fd + 1

    if min_fd and min_fd < max_fd:
        closerange(min_fd, max_fd)
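
The /proc-based approach mentioned in the discussion could look roughly like this sketch: list only the descriptors that are actually open, and fall back to a closerange() sweep when /proc is unavailable (the function name and the max_fd fallback value are illustrative):

import os

def close_open_fds(exclude=(), min_fd=3, max_fd=1024):
    # Prefer /proc/self/fd so only genuinely open descriptors are touched.
    try:
        fds = [int(name) for name in os.listdir("/proc/self/fd")]
    except OSError:
        # /proc not mounted or not accessible: sweep the whole range instead.
        os.closerange(min_fd, max_fd)
        return

    for fd in fds:
        if fd < min_fd or fd in exclude:
            continue
        try:
            os.close(fd)
        except OSError:
            pass  # e.g. the descriptor used by listdir() itself, already gone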