gmi push hangs sometimes #83
**Michael Turquette** (issue author):

I have a cron job (documented in #82) that calls `gmi pull`, modifies some tags and then calls `gmi push`. For debug purposes I print out `date(1)` in between each of those steps.

Now that the unicode character issue is resolved my cron job is almost working reliably! I still get occasional hangs in `gmi push`.

Below is the log from when it failed yesterday. You can see after the first `date` that we start the `gmi pull` and receive the "done" at the end of the "receiving metadata" progress bar. The second `date` is an initial tagging script that we can ignore. The third `date` shows the call to `gmi push`. We fail to receive the "done" at the end of the metadata fetch, and then we get a lock collision:

```
Tue May 29 13:00:03 PDT 2018
pull: partial synchronization.. (hid: 40921234)
fetching changes ...done: 26 its in 00.000s
resolving changes (26) .....done: 26 its in 00.000s
receiving content (21) ...remote: could not find remote message: 163ad76a8d4e57cb!
..done: 20 its in 02.145s
current historyId: 40921998
Tue May 29 13:00:16 PDT 2018
Tue May 29 13:14:12 PDT 2018
receiving metadata (4632) ............................................................................................................

Tue May 29 13:31:19 PDT 2018
Traceback (most recent call last):
  File "/Users/mturquette/src/gmailieer/lieer/local.py", line 172, in load_repository
    fcntl.lockf (self.lckf, fcntl.LOCK_EX | fcntl.LOCK_NB)
BlockingIOError: [Errno 35] Resource temporarily unavailable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/mturquette/src/gmailieer/gmi", line 8, in <module>
    g.main ()
  File "/Users/mturquette/src/gmailieer/lieer/gmailieer.py", line 149, in main
    args.func (args)
  File "/Users/mturquette/src/gmailieer/lieer/gmailieer.py", line 314, in pull
    self.setup (args, args.dry_run, True)
  File "/Users/mturquette/src/gmailieer/lieer/gmailieer.py", line 202, in setup
    self.local.load_repository ()
  File "/Users/mturquette/src/gmailieer/lieer/local.py", line 174, in load_repository
    raise Local.RepositoryException ("failed to lock repository (probably in use by another gmi instance)")
lieer.local.RepositoryException: failed to lock repository (probably in use by another gmi instance)
```

Any thoughts on what is going wrong? After the above sequence occurs, the push process hangs forever and must be manually killed. When this happens in the evening I have hours' worth of logs showing that the whole process fails due to the lock collision. Each morning I invariably have to kill the `gmi push` process manually. Usually something like:

```
$ ps -e | grep gmi; ps -e | grep notmuch
48742 ??       0:18.09 /usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/Resources/Python.app/Contents/MacOS/Python /Users/mturquette/src/gmailieer/gmi push
51895 ttys003  0:00.00 grep --color=auto gmi
 6208 ttys002  0:00.76 tail -f /tmp/gmailieer_patchwork_notmuch.log
51912 ttys003  0:00.00 grep --color=auto notmuch
$ kill 48742
```

Any idea what causes the hang? In my old offlineimap cron script I used to have to manually kill offlineimap before invoking it. I'd like to avoid that kind of nastiness with gmi if possible :-) Thanks!
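
For context, here is a minimal sketch (not lieer's actual code; the lock-file name is hypothetical) of the non-blocking exclusive lock that `local.py` takes in `load_repository()`. A second `gmi` process asking for the same lock fails immediately with `BlockingIOError` (errno 35, `EAGAIN` on macOS) rather than waiting, which is why each later cron run aborts with the "failed to lock repository" exception while the stuck `gmi push` keeps holding the lock:

```python
import fcntl

def lock_repository(path=".lock"):
    """Take a non-blocking exclusive lock, in the spirit of local.py's load_repository()."""
    lckf = open(path, "w")
    try:
        # LOCK_NB makes this fail immediately instead of blocking if another process holds the lock
        fcntl.lockf(lckf, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        lckf.close()
        raise RuntimeError("failed to lock repository (probably in use by another gmi instance)")
    return lckf  # keep this object alive; closing the file releases the lock
```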
**Gaute Hope:**

Do you get more output if you also redirect stderr to your log?

Do you have a timeout set for the Gmail API callbacks? See `gmi set`. I don't see what else could hang without an exception; maybe the notmuch DB, but that should eventually unlock!
**Gaute Hope:**

Just noticed that you do also capture stderr. Let me know how the timeout works out.
**Michael Turquette:**

```
$ cat .gmailieer.json
{"last_historyId": 40978073, "lastmod": 15227766, "replace_slash_with_dot": false, "account": "mturquette@baylibre.com", "timeout": 0, "drop_non_existing_label": false, "ignore_tags": ["new"]}
```

I'm using the defaults on macOS: `net.inet.tcp.keepidle: 7200000`.

I guess the next step is instrumenting the code to show exactly which resource is not free. I'll probably get around to that some day when this annoys me enough.
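
For anyone scripting around this, here is a small check against the state file shown above (key names taken from that file; the check itself is not part of lieer). It just flags the `"timeout": 0` case, which at this point in the thread meant no explicit client-side timeout on Gmail API calls:

```python
import json

# Run from the lieer repository root, next to .gmailieer.json.
with open(".gmailieer.json") as f:
    state = json.load(f)

timeout = state.get("timeout", 0)
if timeout == 0:
    print("timeout is 0: Gmail API calls have no explicit client-side timeout")
else:
    print("timeout is {} seconds".format(timeout))
```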
**Gaute Hope:**

Zero timeout means forever; maybe try setting it to a minute or something?
**Gaute Hope:**

> Zero timeout means forever; maybe try setting it to a minute or something?

Hi Michael, did you get to try with a higher timeout?
**Michael Turquette:**

I have just set it to 1800, which I assume means 30 minutes (corresponding to the frequency of my cron job). According to the usage output, a value of 0 means to use the system timeout, which is not forever on macOS but two hours, IIUC. Can you clarify?

I'll update this ticket on Friday and let you know if the problem is resolved. Thanks!
**Gaute Hope:**

> I have just set it to 1800, which I assume means 30 minutes (corresponding to the frequency of my cron job). According to the usage output, a value of 0 means to use the system timeout, which is not forever on macOS but two hours, IIUC. Can you clarify?

I can see why you ask for clarification. I cannot find any complete docs for the timeout option for httplib2 anymore; if anyone discovers them, please let me know. It seems that `socket`, which I assume is the underlying layer, has some platform-dependent behavior here.

Either way, I would recommend setting the timeout to less than the period of your cron job, by at least the resolution of cron's sampling (which must be less than one minute, probably one second). So less than 29 minutes.
**Gaute Hope:**

I think it is the same as the socket timeout. Also: the [httplib2 docs](http://httplib2.readthedocs.io/en/latest/libhttplib2.html) and httplib2/httplib2#104.
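
A minimal illustration of that claim (the URL is just a public endpoint picked for the example, not anything lieer calls): httplib2 forwards its `timeout` argument to the underlying socket, so a request that stalls longer than that many seconds raises a timeout error instead of blocking indefinitely.

```python
import httplib2

# timeout is in seconds; timeout=None means no explicit client-side timeout.
http = httplib2.Http(timeout=60)
response, content = http.request("https://www.googleapis.com/discovery/v1/apis")
print(response.status, len(content))
```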
**Michael Turquette:**

Thanks for the info. I considered reducing the timeout to less than the period of my cron job, and you've convinced me to do just that. It looks like most pull/push cycles take only 1-2 minutes, so I've set the timeout to 1200 seconds (20 minutes).

Looking through my logs since 5am on June 5 I haven't had a timeout event. I'll keep watching the log this week and report back here on Friday, one way or the other.
**Gaute Hope:**

Great! Looking at the `socket` docs, a value of None can also be passed as the timeout. Perhaps 0 = forever, None = system default, and a number greater than zero = timeout in seconds.
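
For reference, the `socket` documentation defines the three modes slightly differently (the last comment in this thread reaches the same conclusion): None means blocking with no timeout, 0 means non-blocking, and a positive number is a timeout in seconds. A quick illustration:

```python
import socket

s = socket.socket()
s.settimeout(None)   # blocking mode: operations may wait indefinitely
s.settimeout(0.0)    # non-blocking mode: operations fail immediately if they would block
s.settimeout(5.0)    # timeout mode: blocking operations raise socket.timeout after 5 seconds
s.close()
```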
**Michael Turquette:**

I have not received a `BlockingIOError: [Errno 35] Resource temporarily unavailable` error in my logs since adding a timeout. I'm happy to consider this closed. Thanks for the help debugging!

I do wonder if a saner default timeout would be a good idea? That's above my pay grade though ;-)
**Gaute Hope:**

Great! I wonder if the exception is not properly passed through the Google API in this case; at any rate, gmi should crash, not freeze, on the exception. Maybe some async handling is involved. Other types of HTTP exceptions pass through correctly.
**Gaute Hope:**

The timeout parameter to httplib2 is passed as-is to the underlying socket [0]. However, we are already using 0 to mean None [3], which translates to forever, or until a system error occurs (which could be a connection timeout!) [1][2].

Actually passing 0 (non-blocking) would be harmful for httplib2 and probably the Gmail API, so that is not supported.

I've updated the help message to also mention the system error case (3012f56).

[0] httplib2/httplib2#104
[1] https://docs.python.org/3/library/socket.html#notes-on-socket-timeouts
[2] https://docs.python.org/3/library/socket.html#socket.socket.settimeout
[3] https://github.com/gauteh/gmailieer/blob/master/lieer/remote.py#L376
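
A sketch of the mapping described above (illustrative; not the exact `remote.py` code): the user-facing value 0 is translated to None before it reaches httplib2 and the socket, because a literal 0 would put the socket into non-blocking mode.

```python
def effective_timeout(configured):
    # 0  -> None: no explicit client-side timeout (system behaviour applies)
    # >0 -> that many seconds
    return None if configured == 0 else configured

assert effective_timeout(0) is None
assert effective_timeout(1800) == 1800
```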