8 RESOURCE_EXHAUSTED: Bandwidth exhausted #490
I am unable to reproduce this. It seems unlikely, but can you check your quota to make sure you aren't over?
I've checked our quotas and we're within limits on all of them.
This still leads to the same error:
I attempted to add a 250ms timeout between each chunk, and reduce the chunk size to 100. That just resulted in a new error:
All of these errors stem from
@BeauAgst mind sharing your actual Cloud Function code? What jumps out at me is this:

```js
;(async () => {
  const tasks = [
    {
      country: 'GB',
      url: 'https://www.reasonably-lengthy-domain-uri.com/path/name?and=query&params=true'
    }
    /* array of tasks over 2k in length */
  ]
  const createdTasks = await Promise.all(R.map(createTask)(tasks))
})()
```

This runs in the global scope. Cloud Functions only allocate compute/memory resources while they're handling a request, so performing asynchronous work outside of that context can lead to broken behavior like the one described.
Sure. I broke that down as a quick-to-reproduce example. Here's the actual code:

```js
const R = require('ramda')
const { createTask } = require('./clients/tasks.client')

const spreadCountries = ({ countries, url }) => R.map(country => ({ country, url }))(countries)

exports.handle = async (req, res) => {
  res.header('Content-Type', 'application/json')
  res.header('Access-Control-Allow-Origin', '*')
  res.header('Access-Control-Allow-Headers', 'Content-Type')
  res.set('Access-Control-Allow-Methods', 'POST')
  if (req.method === 'OPTIONS') {
    return res.status(204).send('')
  }
  if (req.method !== 'POST') {
    return res.status(405).end()
  }
  try {
    const tasks = R.pipe(
      R.prop('body'),
      R.map(spreadCountries),
      R.flatten,
      R.map(createTask),
    )(req)
    await Promise.all(tasks)
    return res.status(200).send({ message: 'OK' })
  } catch (error) {
    return res.status(400).send({ message: 'Problem running tasks', error: error.message })
  }
}
```

And the imported `tasks.client`:

```js
const { CloudTasksClient } = require('@google-cloud/tasks')
const R = require('ramda')

const client = new CloudTasksClient()

const {
  GCLOUD_PROJECT_ID,
  GCLOUD_QUEUE_NAME,
  GCLOUD_QUEUE_LOCATION,
  TASK_HANDLER_URL,
  SERVICE_ACCOUNT_EMAIL,
} = process.env

const CALL_OPTIONS = {
  timeout: 30000
}

const createTask = function createTask(payload) {
  const parent = client.queuePath(GCLOUD_PROJECT_ID, GCLOUD_QUEUE_LOCATION, GCLOUD_QUEUE_NAME)
  const task = {
    httpRequest: {
      httpMethod: 'POST',
      url: TASK_HANDLER_URL,
      oidcToken: {
        serviceAccountEmail: SERVICE_ACCOUNT_EMAIL,
      },
      headers: {
        'content-type': 'application/json',
      },
      body: R.pipe(
        JSON.stringify,
        Buffer.from,
        buff => buff.toString('base64')
      )(payload),
    },
  }
  return client.createTask({ parent, task }, CALL_OPTIONS)
}

module.exports = { createTask }
```
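As an aside, the base64 pipeline inside `createTask` is equivalent to this standalone snippet (Cloud Tasks expects `httpRequest.body` as base64-encoded bytes):

```javascript
// JSON-serialize the payload, wrap it in a Buffer, then base64-encode it,
// mirroring the R.pipe(JSON.stringify, Buffer.from, ...) chain above.
const encodeBody = (payload) => Buffer.from(JSON.stringify(payload)).toString('base64')

// Decoding round-trips back to the original payload.
const decodeBody = (body) => JSON.parse(Buffer.from(body, 'base64').toString('utf8'))

const encoded = encodeBody({ country: 'GB', url: 'https://example.com' })
```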
@BeauAgst I wonder if it might be the case that this:

```js
const taskGroups = R.pipe(
  R.map(createTask),
  R.splitEvery(500),
)(tasks)
```

creates all the outstanding promises up front, even though you then iterate over them in chunks:

```js
for (const group of taskGroups) {
  await Promise.all(group)
}
```

Try flipping the logic on its head a bit, so that each chunk's promises are only created right before you await them:

```js
for (const group of taskGroups) {
  const tasks = // get chunk of N tasks.
  await Promise.all(tasks)
}
```
Hi @BeauAgst I am closing this issue, but please feel free to reopen if the last tip didn't work.
I have the same issue. I've checked the quotas, but they're all within limits.
Same issue here. |
Same issue. I saw some docs mentioning that it can be triggered when reaching 10 QPS (queries per second?), but the official quotas and limits page says 6,000,000 requests per minute (or 100,000 per second). I'm only trying to create about 5,000 tasks in a second, far from the 100,000, and I can't find a way around it.
Just sharing here: not ideal, but I managed to get it "working" for me by adding a delay before each request, and it's OK now.

Before:

```js
const pagesArr = [...Array(totalPages).keys()];
await Promise.all(
  pagesArr.map(async (page) => {
    const waitFor = Math.ceil(page / distribute);
    await this.schedulePage(campaign, page, waitFor);
  })
);
```

After:

```js
const pagesArr = [...Array(totalPages).keys()];
let delay = 0;
const delayIncrement = 5;
await Promise.all(
  pagesArr.map(async (page) => {
    const waitFor = Math.ceil(page / distribute);
    delay += delayIncrement;
    return new Promise((resolve) => setTimeout(resolve, delay)).then(async () => {
      await this.schedulePage(campaign, page, waitFor);
    });
  }),
);
```

So I can confirm that I'm far from the official 100,000/s quota but am probably hitting that 10 QPS limit:

nodejs-tasks/src/v2beta2/cloud_tasks_client.ts, line 1672 in 176dcee
Here's the code for `schedulePage`:

```ts
async schedulePage(campaign: Campaign, page: number, waitFor: number) {
  if (this.debugMode) return { campaign, page };
  try {
    const payload = JSON.stringify({ campaign, page });
    console.log('Log - running page', page);
    const parent = this.cloudTasksClient.queuePath(this.project, this.location, this.queueName);
    const task = this.createTaskRequest(payload, waitFor, '');
    const request = { parent, task };
    const response = await this.cloudTasksClient.createTask(request);
    // console.log('Log - response', response);
    const [taskResponse] = response as [
      google.cloud.tasks.v2.ITask,
      google.cloud.tasks.v2.ICreateTaskRequest,
      Record<string, unknown>,
    ];
    const taskId = taskResponse?.name?.split('/').pop();
    console.log(`Page ${page} scheduled with task id ${taskId}`);
    return taskId;
  } catch (error) {
    console.error('ERROR:', error);
  }
}
```
Thanks for sharing @filipecrosk! I'd also like to add another workaround that was mentioned in the sister issue; the original quote is from the #397 thread.
Closing this issue for now. Will re-open if there are further occurrences of this even when the above flag has been set. |
Is this a client library issue or a product issue?

Client library issue. It seems that if I try to create too many tasks via the `CloudTasksClient`, I get an `8 RESOURCE_EXHAUSTED: Bandwidth exhausted` error. Did someone already solve this?

This is along the same lines as the issue found here for `@google-cloud/datastore`.

Environment details

- This is happening via a Cloud Function
- `@google-cloud/tasks` version: 2.1.2

Steps to reproduce

I've created a minimal reproduction as an example.

Making sure to follow these steps will guarantee the quickest resolution possible. Thanks!