Passing a file instance as an argument to a Celery task raises "ValueError: I/O operation on closed file"


views:

from engine.tasks import s3_upload_handler

def myfunc():
    f = open('/app/uploads/pic.jpg', 'rb')
    s3_upload_handler.apply_async(kwargs={
        "uploaded_file": f,
        "file_name": "test.jpg"
    })

tasks:

def s3_upload_handler(uploaded_file, file_name):
    ...
    # some code for uploading to S3

traceback:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 437, in __protected_call__
    return self.run(*args, **kwargs)
  File "/app/photohosting/engine/tasks.py", line 34, in s3_upload_handler
    key.set_contents_from_file(uploaded_file)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1217, in set_contents_from_file
    spos = fp.tell()
ValueError: I/O operation on closed file
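
Celery serializes task arguments before handing them to a worker process, and an open file handle does not survive that trip: either serialization fails outright or, as in the traceback above, the object the task receives behaves like a closed file when boto calls fp.tell(). A common workaround is to pass something serializable instead, such as the file's path (or its raw bytes), and open the file inside the task. The sketch below reuses the names from the question, but the file_path parameter and the bucket name are assumptions for illustration, not the asker's actual code:

views:

from engine.tasks import s3_upload_handler

def myfunc():
    s3_upload_handler.apply_async(kwargs={
        "file_path": "/app/uploads/pic.jpg",  # pass the path, not the open file object
        "file_name": "test.jpg"
    })

tasks:

import boto

def s3_upload_handler(file_path, file_name):
    # open the file in the worker process, where it can actually be read
    conn = boto.connect_s3()               # assumes S3 credentials are configured
    bucket = conn.get_bucket('my-bucket')  # hypothetical bucket name
    key = bucket.new_key(file_name)
    with open(file_path, 'rb') as uploaded_file:
        key.set_contents_from_file(uploaded_file)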

Suggestion : 2

The module called os contains functions to get information on local directories, files, processes, and environment variables, and os.path contains functions to split full pathnames, directory names, and filenames into their constituent parts. The os.path.join() function constructs a pathname out of one or more partial pathnames; in the simplest case it just concatenates strings, adding the path separator (an extra slash) before the filename when one is missing. It can take any number of arguments, so combining these techniques we can easily construct pathnames for directories and files in the user's home directory.

The current working directory is a property that Python holds in memory at all times. There is always a current working directory, whether we're in the Python Shell or running our own Python script from the command line.

>>> import os
>>> print(os.getcwd())
C:\Python32
>>> os.chdir('/test')
>>> print(os.getcwd())
C:\test

os.path contains functions for manipulating filenames and directory names.

>>> import os
>>> print(os.path.join('/test/', 'myfile'))
/test/myfile
>>> print(os.path.expanduser('~'))
C:\Users\K
>>> print(os.path.join(os.path.expanduser('~'), 'dir', 'subdir', 'k.py'))
C:\Users\K\dir\subdir\k.py

Note: we need to be careful about the strings we pass to os.path.join. If a component starts with "/", Python treats it as an absolute path, and it overrides everything before it:

>>> import os
>>> print(os.path.join('/test/', '/myfile'))
/myfile

The glob module is another tool in the Python standard library. It's an easy way to get the contents of a directory programmatically, and it uses the sort of wildcards that we may already be familiar with from working on the command line.

>>> import os
>>> import glob
>>> os.chdir('/test')
>>> glob.glob('subdir/*.py')
['subdir\\tes3.py', 'subdir\\test1.py', 'subdir\\test2.py']

Every file system stores metadata about each file: creation date, last-modified date, file size, and so on. Python provides a single API to access this metadata; we don't need to open the file, all we need is the filename.

>>> import os
>>> print(os.getcwd())
C:\test
>>> os.chdir('subdir')
>>> print(os.getcwd())
C:\test\subdir
>>> metadata = os.stat('test1.py')
>>> metadata.st_mtime
1359868355.9555483
>>> import time
>>> time.localtime(metadata.st_mtime)
time.struct_time(tm_year=2013, tm_mon=2, tm_mday=2, tm_hour=21,
      tm_min=12, tm_sec=35, tm_wday=5, tm_yday=33, tm_isdst=0)
>>> metadata.st_size
1844

Suggestion : 3

There are three common reasons for the ValueError: I/O operation on closed file. First, forgetting to indent the code that uses the file inside the with statement, so it runs after the block has already closed the file. Second, reading from or writing to a file that has already been closed. Third, accidentally closing the file inside a for loop. The fix is to make sure the code that accesses the file is correctly indented under the with open() block, or that close() is only called once every read and write operation has finished. A minimal sketch of the first case follows; after it, the second case is reproduced with a small CSV example.
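
A hedged sketch of the indentation mistake (the file name data.txt is made up for the illustration): the read sits outside the with block, so by the time it runs the block has already closed the file.

with open("data.txt", "r") as fh:
    pass  # the work that belonged here was accidentally left unindented

text = fh.read()  # ValueError: I/O operation on closed file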


Suppose particles.csv contains the following data:

electron,-1,0.511
muon,-1,105.7
tau,-1,1776.9
Closing the file before iterating over the reader reproduces the error, because the csv reader only pulls rows from the underlying file when the for loop consumes it:

import csv

particles = open("particles.csv", "r")
read_file = csv.reader(particles)
particles.close()  # closed before the reader has been consumed
for p in read_file:
    print(f'Particle: {p[0]}, Charge: {p[1]}, Mass: {p[2]} MeV')

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
      7 particles.close()
      8
----> 9 for p in read_file:
     10
     11     print(f'Particle: {p[0]}, Charge: {p[1]}, Mass: {p[2]} MeV')

ValueError: I/O operation on closed file.
Moving particles.close() after the loop, so the file is only closed once every read has finished, fixes it:

import csv

particles = open("particles.csv", "r")
read_file = csv.reader(particles)
for p in read_file:
    print(f'Particle: {p[0]}, Charge: {p[1]}, Mass: {p[2]} MeV')
particles.close()

Particle: electron, Charge: -1, Mass: 0.511 MeV
Particle: muon, Charge: -1, Mass: 105.7 MeV
Particle: tau, Charge: -1, Mass: 1776.9 MeV
The same example with a with statement, which keeps the file open for exactly the duration of the indented block and closes it automatically afterwards:

import csv

with open("particles.csv", "r") as particles:
    read_file = csv.reader(particles)
    for p in read_file:
        print(f'Particle: {p[0]}, Charge: {p[1]}, Mass: {p[2]} MeV')

Suggestion : 4

In most cases, the json.loads JSONDecodeError: Expecting value: line 1 column 1 (char 0) error means that the string handed to json.loads() does not start with valid JSON, for example because the file's contents are not valid JSON or because the file path itself was passed instead of the contents, as in the examples below.

For example, the following raises the error when books.json is empty or does not contain valid JSON:
import json

file_path = "C:/Projects/Tryouts/books.json"

with open(file_path, 'r') as j:
    contents = json.loads(j.read())

print(contents)

Output

Traceback (most recent call last):
  File "c:/Projects/Tryouts/main.py", line 6, in <module>
    contents = json.loads(j.read())
  File "C:\Users\abc\AppData\Local\Programs\Python\Python37\lib\json\__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "C:\Users\abc\AppData\Local\Programs\Python\Python37\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\abc\AppData\Local\Programs\Python\Python37\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Once books.json actually contains valid JSON, the same code parses the file and prints its contents without raising the error.
Bad Practice

Another common cause is passing the file path itself to json.loads(); the parser then sees the literal string "/path/to/example.json", which is not valid JSON, and fails at the very first character:
json_file_path = "/path/to/example.json"

contents = json.loads(json_file_path)

Good Practice

json_file_path = "/path/to/example.json"

with open(json_file_path, 'r') as j:
    contents = json.loads(j.read())
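
As a side note not taken from the original answer, json.load() reads directly from the open file object and gives the same result as the json.loads(j.read()) pattern above:

import json

json_file_path = "/path/to/example.json"

with open(json_file_path, 'r') as j:
    contents = json.load(j)  # equivalent to json.loads(j.read())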