Event after closing file (close_write) #184
Indeed, detecting IN_CLOSE_WRITE events is much more reliable and should be the default solution.
@k4nar If you've found a bug, please make a new issue for it.
I don't think it's a bug, but it seems that sometimes I get several events for a single write to a file. Using pyinotify with an event handler on IN_CLOSE_WRITE, this does not happen. I'm relying on the "one event per write" scenario, as I want to ignore the next event occurring after a write to a watched file.
Are you sure you're writing only once? It should be exactly one event per flush, so if you only write once it should be only one modify event.
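The "one event per flush" point is worth unpacking: Python's buffered I/O decouples `write()` calls from the `write(2)` syscalls that inotify actually sees. A minimal stdlib sketch (file path is illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "w") as f:
    for _ in range(100):
        f.write("x")  # buffered in user space; no syscall yet
    # With default buffering, nothing has reached the kernel yet,
    # so an inotify watcher would have seen no IN_MODIFY so far.
    assert os.path.getsize(path) == 0

# Closing the file flushes the buffer: the kernel sees a single write,
# followed by the close. Many write() calls, one flush, one modification.
assert os.path.getsize(path) == 100
```

So the mapping is many `write()` calls to one kernel-level modification per flush, not one event per `write()` call.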
Yes, I'm sure of this. My code is the following:

```python
with open(filename, mode) as f:
    f.seek(offset)
    f.write(chunk)
```

Maybe Python's … EDIT: It does not seem to be a double-fwrite issue, as I have the same behavior with 1-byte-long chunks.
Please make a small runnable example. I'm not able to reproduce.
Alternatively, you could verify that the problem really is with watchdog by using …
I made an example. I think you might have to tweak the sleep value according to your system's performance (I have two RAID-0 SSDs).

```python
import time

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler


class EventHandler(FileSystemEventHandler):
    events = 0

    def on_modified(self, event):
        self.events += 1
        print "e", self.events


observer = Observer()
observer.schedule(EventHandler(), path='.', recursive=True)
observer.start()

writes = 0
chunk = "Test 12345\n"
offset = 0

open('test', 'wb').close()  # 'rb+' below requires the file to exist

for i in xrange(3000):
    # The trick is to tweak this value.
    # On my system, with a low value (0.001), I have more writes than events,
    # which is fine (I think it's due to inotify's coalescing).
    # But with a larger value (0.005 - 0.01), I get more events than writes.
    # Not a lot, around ten (in my real-world app it was more like 100 for a
    # few hundred writes).
    time.sleep(0.01)
    # This corresponds to my "write_chunk" function, which is called
    # by another thread on a remote event (zmq).
    with open('test', 'rb+') as f:
        f.seek(offset)
        f.write(chunk)
    writes += 1
    print "w", writes
    offset += len(chunk)
```
Ok, I can confirm. And looking at the code, the reason is obvious: it fires modify events on IN_CLOSE_WRITE. The reason you got only 1 event 99.99% of the time is the …
Ok, I'll try with this version, thanks. Do you know if the behavior will be different on another OS?
It's a lot better, but I still get one or two extra events from time to time.
Works fine here. Make sure you add …
@tamland: Do you know if there is a solution to write something to a watched file without being notified, without a race condition, and without stopping the whole watch?
A question for Stack Overflow, perhaps? Pretty sure you can't, though. It's the whole point of inotify.
For the moment I got around it using the mtime.
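The mtime workaround described above could be sketched like this (a stdlib-only illustration; the class and method names are mine, not part of watchdog): record the file's mtime immediately after a self-write, and have the event handler skip events whose mtime still matches that record.

```python
import os


class SelfWriteFilter:
    """Skip watcher events caused by our own writes, keyed on mtime.

    Caveat: this is racy if another process writes within the mtime
    resolution window -- which is exactly the race condition raised above.
    """

    def __init__(self):
        self._own_mtimes = {}  # path -> mtime recorded right after our write

    def record_write(self, path):
        self._own_mtimes[path] = os.stat(path).st_mtime

    def should_ignore(self, path):
        try:
            current = os.stat(path).st_mtime
        except OSError:
            return False  # file vanished; let the handler decide
        return current == self._own_mtimes.get(path)
```

In an `on_modified` handler you would call `should_ignore(event.src_path)` and return early when it is true.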
Finally implemented with 2fab7c2 (will be part of the 1.0.3 version). |
Is there a way to get an event after a file is closed, similar to pyinotify's IN_CLOSE_WRITE event? I'm trying to monitor very large files, and I really only need the close-write event rather than the "Modified file" spam.