Larger Safety Windows around Expiration Timestamp? #133
Before every request, we do a token grab, refreshing if necessary (that work is done by google-auth-library). With this large number of requests, maybe we need a larger safety window around the expiration timestamp for when the auth client makes the grab. @jmdobry does that sound possible? (Please re-tag if someone else is closer to the auth library code.)
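The safety-window idea described above can be sketched as a simple expiry check: refresh the token once "now" is within some margin of the expiration timestamp. This is an illustrative sketch only; `shouldRefresh`, the `expiryMs` field, and the 5-minute window are hypothetical names and values, not google-auth-library's actual API.

```javascript
// Sketch: decide whether a cached token should be refreshed ahead of time.
// The window value is illustrative, not the library's real default.
const SAFETY_WINDOW_MS = 5 * 60 * 1000; // refresh 5 minutes before expiry

function shouldRefresh(token, nowMs = Date.now(), windowMs = SAFETY_WINDOW_MS) {
  // Treat a missing token or missing expiry as "needs refresh".
  if (!token || typeof token.expiryMs !== 'number') return true;
  // Refresh once we are inside the safety window before expiration.
  return nowMs >= token.expiryMs - windowMs;
}
```

A larger window trades slightly more refresh traffic for fewer requests that race the expiration timestamp under heavy load.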
I came across the same issue. It's not a big deal for me because of the fault-tolerant design of my app, but I think it should have very high priority.
I might be seeing the same thing. I'm using Cloud Functions to stream records into BigQuery in chunks of about 2,000, and when I run several instances of my function (which receive their records to put in BQ via Pub/Sub) I see this sporadically (one run of the function fails like this while others don't):
I was also seeing a DNS quota rundown issue, which seems strange, as my code doesn't make any HTTP calls directly, just whatever the Pub/Sub and BigQuery libraries do:
I have upped the DNS resolution limit to try to address at least this second issue, but I don't believe my code to be the cause of it. My code uses the
I'm wondering if my DNS issue was to do with where I am
Well, moving my
and the DNS quota issue continues.
Revised code that is doing this (for some invocations of the cloud function, but not all of them) looks like this:
Hi all, thanks for reporting. Question: are any of you getting this error while using service accounts specifically? (We have a theory we are trying to disprove.)
@lukesneeringer I am using Cloud Functions, so whatever they run as... I just changed my code to slow down the async calls, limiting them to 5 concurrent calls. I no longer see the default credentials error, but I do see DNS issues reported:
I changed my code to do at most 5 parallel calls per Cloud Function invocation... there are several Cloud Function instances running concurrently, putting data into the same BQ dataset and table like this:
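A generic way to cap concurrency at 5, as described above, is a small worker-pool helper. This is a sketch of the pattern only; `runWithLimit` is a hypothetical name, not part of any Google library, and each task is assumed to be a zero-argument function returning a promise.

```javascript
// Sketch: run async tasks with at most `limit` in flight at once.
async function runWithLimit(tasks, limit = 5) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    // Each worker repeatedly claims the next unstarted task.
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  // Start up to `limit` workers; they drain the task list cooperatively.
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker);
  await Promise.all(workers);
  return results;
}
```

The same shape works for BigQuery inserts or Pub/Sub publishes: wrap each call in a thunk and pass the array of thunks to the helper.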
Any update on this, please? I can provide any debugging data you need; it happens at least a few times every five minutes in my app.
Hello, I am not 100% sure this is related, but I have a service that uses Google Pub/Sub. The worker pulling messages was in the middle of pulling a batch of around 600k (one at a time) when it suddenly failed with the same message as here. Sadly for me, my code did not tolerate that, and the rest of the messages were left waiting until they were stale and we had to remove them. Does this at least match the profile of what is being investigated here?
I'm publishing to Cloud Pub/Sub from Cloud Functions and running into this error frequently. Is there any workaround you can recommend?
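One common workaround for transient failures like this, while the underlying bug is open, is to wrap the flaky call in a retry with exponential backoff. This is not an official fix; `withRetry` below is a hypothetical helper and the attempt counts and delays are illustrative.

```javascript
// Sketch: retry a flaky async call with exponential backoff.
async function withRetry(fn, { attempts = 3, baseDelayMs = 100 } = {}) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Back off before the next attempt: 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  // All attempts failed; surface the last error to the caller.
  throw lastErr;
}
```

For example, a publish call could be wrapped as `withRetry(() => topic.publish(message))` so an occasional auth hiccup does not lose the message.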
@merlinnot @ismailbaskin Can you elaborate on what you were doing, preferably with some code?
Sure. I've implemented a syncing mechanism on top of GCF and Pub/Sub. It creates a tree structure. The first function fetches credentials from all of the servers, then publishes a Pub/Sub message with the server hostname and credentials. The second function checks which applications on each server should be updated. For each application there are multiple users to synchronize, so it publishes a "request" to sync data for a specific user. And so on, and so forth. There are seven levels in this structure, since the server's API is kinda tricky to use that way. The 6th function regularly throws this error:
It's not such a big issue for me; the data is synced next time and no one gets hurt. But I get this error at least once every five minutes, so it shouldn't be that hard to track down for a Googler with access to my project and Google's internal logs.
None of us have access to user data (Google is really protective of that stuff, for obvious reasons). That said, we will do our best to reproduce and solve. :-)
I'll give you access to my user data if it'll help :) I'm seeing this issue with Datastore and Pub/Sub too.
I am trying to replicate the issue by inserting random data into BigQuery, similar to how @simonprickett implemented his method, varying the limit. However, the error is too sporadic to diagnose (blocks of errors that come up randomly every few hours). Is anyone getting the errors more frequently? Any help replicating the issue more reliably would be great. Perhaps someone using Pub/Sub can share some code?
@bantini I got fed up with this, and for this and some other reasons (lack of support for environment variables being one) I have moved my code out of Cloud Functions and into App Engine Flex instead, where, so far, it's working OK.
@bantini I have been getting this error every five minutes for more than two months :) Maybe the issue occurs when there are multiple simultaneous invocations of a function that publishes some data to Pub/Sub?
@merlinnot What is the authentication mechanism that you are using?
@bantini The standard environment variables from GCF:
Fixed in #242
Hi guys, I am currently facing the same issue with Pub/Sub. What is the fix, and how can I use it in my code? Will just updating my "@google-cloud/pubsub" dependency do? Any help will be appreciated :)
Just stumbled upon this issue; a DNS cache (axios/axios#1878) could help mitigate issues like this.
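The DNS-cache mitigation mentioned above amounts to memoizing lookups for a short TTL so repeated calls stop consuming resolution quota. A minimal sketch, assuming `lookupFn` stands in for a promisified `dns.lookup` and the TTL value is illustrative:

```javascript
// Sketch: memoize DNS lookups for a short TTL to reduce resolution quota use.
function cachedLookup(lookupFn, ttlMs = 60 * 1000) {
  const cache = new Map(); // hostname -> { value, expiresMs }
  return async function lookup(hostname) {
    const hit = cache.get(hostname);
    // Serve from cache while the entry is still fresh.
    if (hit && hit.expiresMs > Date.now()) return hit.value;
    const value = await lookupFn(hostname);
    cache.set(hostname, { value, expiresMs: Date.now() + ttlMs });
    return value;
  };
}
```

A real implementation would also need to handle negative results and cache eviction; this only shows the core idea.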
From @MarkHerhold on March 27, 2017 0:7
Insert calls to BigQuery randomly result in auth errors.
I have a job that is responsible for sending data in bulk to BigQuery by using
.insert([ 'lots', 'of', 'items' ])
repeatedly in synchronous fashion. This process goes on for hours at a time, calling insert after the prior operation finishes. After a few hours, my calls to BigQuery abruptly fail. I'm not sure whether this issue is time-related or the call itself randomly fails; it does seem to occur after a few hours. This happens repeatedly.
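The loop described above can be sketched as follows. This is a reconstruction of the pattern, not the reporter's actual code: `insertFn` stands in for the BigQuery table's insert method, and the chunk size is illustrative.

```javascript
// Sketch of the reported pattern: send chunks one after another, waiting for
// each insert to finish before starting the next.
async function insertSequentially(rows, insertFn, chunkSize = 500) {
  for (let i = 0; i < rows.length; i += chunkSize) {
    // Each insert completes before the next chunk is sent, so only one
    // request (and one token grab) is in flight at a time.
    await insertFn(rows.slice(i, i + chunkSize));
  }
}
```

Even at this low concurrency, a run lasting hours crosses several token expirations, which is why a refresh race near the expiration timestamp would show up as sporadic auth failures.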
Error:
Environment details
@google-cloud/bigquery
version: 0.8.0
Steps to reproduce
The above description tells it all, but I'm working on a script. I'll update this if I can find a way to reproduce in a more predictable manner.
Copied from original issue: googleapis/google-cloud-node#2139