Async CosmosDB client raises: 'Got more than 8190 bytes (11994) when reading Header value is too long.' #27625
Comments
Seems related to aio-libs/aiohttp#2304
Thanks for the feedback, we'll investigate asap.
Hi @roekoe-loterij, thank you for using our SDK and opening this issue. I do believe this might be related to the issue linked above; it definitely doesn't seem like something we touch directly in the SDK. I'm wondering: have you encountered this issue at all with the sync client? I'm trying to see if this extends beyond aiohttp.
Hi, we're sending this friendly reminder because we haven't heard back from you in a while. We need more information about this issue to help address it. Please be sure to give us your input within the next 7 days. If we don't hear back from you within 14 days of this comment the issue will be automatically closed. Thank you!
Hello, I'm encountering the same issue as the author: the header size grows as the request length grows.
A bit late with my feedback, but the error only occurs when using the aiohttp library, as stated. It is directly related to the header config options aiohttp sets by default. I'm pretty sure this error still occurs. We worked around it by patching the …
Re-opening this issue, as one way to fix this on the Cosmos DB side is with the feature which allows users to set a continuation token size limit. @xiangyan99 did we make any progress on this from the aiohttp client params?
@bambriz can you please take a look at this, thanks!
We allow users/SDK developers to provide a custom transport. You can customize aiohttp and use it when creating clients.
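As a rough illustration of the custom-transport suggestion above, the sketch below hands the async CosmosClient an aiohttp session with raised header limits. Assumptions not stated in this thread: aiohttp >= 3.9 (which exposes `max_line_size`/`max_field_size` on `ClientSession`), the `transport` keyword being forwarded to the azure-core pipeline, and all account URLs, keys, and names being placeholders.

```python
# Hedged sketch: custom aiohttp transport with raised header limits.
import asyncio

# 8190 bytes is aiohttp's default per-header-value limit; raise it well
# above the ~12 KB continuation tokens reported in this issue.
MAX_HEADER_SIZE = 32 * 1024


async def query_with_large_headers(url: str, key: str) -> list:
    # Imports kept local so the sketch can be read without the SDKs installed.
    import aiohttp
    from azure.core.pipeline.transport import AioHttpTransport
    from azure.cosmos.aio import CosmosClient

    session = aiohttp.ClientSession(
        max_line_size=MAX_HEADER_SIZE,   # assumption: aiohttp >= 3.9
        max_field_size=MAX_HEADER_SIZE,
    )
    transport = AioHttpTransport(session=session, session_owner=True)
    async with CosmosClient(url, credential=key, transport=transport) as client:
        container = client.get_database_client("db").get_container_client("items")
        return [
            item
            async for item in container.query_items(
                query="SELECT * FROM c", enable_cross_partition_query=True
            )
        ]


# Requires a live Cosmos DB account, so not run here:
# asyncio.run(query_with_large_headers("https://<account>.documents.azure.com:443/", "<key>"))
```

This addresses the symptom on the client side only; the oversized `x-ms-continuation` header is still sent by the service.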
Hi @roekoe-loterij, we're sending this friendly reminder because we haven't heard back from you in 7 days. We need more information about this issue to help address it. Please be sure to give us your input. If we don't hear back from you within 14 days of this comment the issue will be automatically closed. Thank you!
There is an open PR at the moment that will fix this issue by implementing continuation token limits in the Python SDK. Setting the limit to 8 KB or under prevents this issue from occurring. You can check the progress in PR #30731.
Leaving this open until the changes are available with this month's release.
Changes are now available in version 4.4.1b1: https://pypi.org/project/azure-cosmos/4.4.1b1/
Just wanted to leave a note here that the name of the parameter for the continuation token limit has changed. The value to use at this time seems to be: … Setting it to 1 or 2 seems to work.
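For reference, current azure-cosmos releases document a `continuation_token_limit` keyword on `query_items` (size in KB). Assuming that is the renamed parameter the comment above refers to, usage might look like this sketch; the helper function is illustrative, not part of the SDK.

```python
# Hedged sketch: cap the continuation token size so the x-ms-continuation
# response header stays under aiohttp's 8190-byte default per-header limit.
# Assumption: the parameter is `continuation_token_limit` (size in KB), as
# documented in azure-cosmos >= 4.4.x.
def build_query_kwargs(query: str, limit_kb: int = 2) -> dict:
    """Collect the query arguments used in this issue, plus the token cap."""
    return {
        "query": query,
        "enable_cross_partition_query": True,
        "continuation_token_limit": limit_kb,  # 1 or 2 reportedly works
    }


# Usage against a live container client:
# items = [item async for item in
#          container.query_items(**build_query_kwargs("SELECT * FROM c"))]
```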
Describe the bug
We recently switched some of our applications to the new async CosmosDB client that was released in azure-cosmos 4.3.0. Since then we started seeing this error when iterating over large result sets:
Got more than 8190 bytes (10904) when reading Header value is too long.
The error is thrown by aiohttp while fetching/parsing the next result on the AsyncItemPaged returned by a query. The unpredictable occurrence of this error makes the new async client more or less unusable.
Exception or Stack Trace
Traceback (most recent call last):
  File "./aiohttp/client_reqrep.py", line 899, in start
    message, payload = await protocol.read()  # type: ignore[union-attr]
  File "./aiohttp/streams.py", line 616, in read
    await self._waiter
  File "./aiohttp/client_proto.py", line 213, in data_received
    messages, upgraded, tail = self._parser.feed_data(data)
  File "aiohttp/_http_parser.pyx", line 551, in aiohttp._http_parser.HttpParser.feed_data
  File "aiohttp/_http_parser.pyx", line 721, in aiohttp._http_parser.cb_on_header_value
aiohttp.http_exceptions.LineTooLong: 400, message='Got more than 8190 bytes (11994) when reading Header value is too long.'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "./azure/core/pipeline/transport/_aiohttp.py", line 229, in send
    result = await self.session.request(  # type: ignore
  File "./aiohttp/client.py", line 560, in _request
    await resp.start(conn)
  File "./aiohttp/client_reqrep.py", line 901, in start
    raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 400, message='Got more than 8190 bytes (11994) when reading Header value is too long.', url=URL('...')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/site/wwwroot/common/az_functions/events_to_aggregate_container.py", line 80, in transform_and_send
    async for event in events_for_agg:
  File "./hermes/common/aio/wrapped_iterator.py", line 13, in __anext__
    return self.mapper(await self.delegate.__anext__())
  File "./azure/core/async_paging.py", line 154, in __anext__
    return await self.__anext__()
  File "./azure/core/async_paging.py", line 157, in __anext__
    self._page = await self._page_iterator.__anext__()
  File "./azure/core/async_paging.py", line 99, in __anext__
    self._response = await self._get_next(self.continuation_token)
  File "./azure/cosmos/aio/_query_iterable_async.py", line 102, in _fetch_next
    block = await self._ex_context.fetch_next_block()
  File "./azure/cosmos/_execution_context/aio/execution_dispatcher.py", line 89, in fetch_next_block
    return await self._execution_context.fetch_next_block()
  File "./azure/cosmos/_execution_context/aio/base_execution_context.py", line 82, in fetch_next_block
    return await self._fetch_next_block()
  File "./azure/cosmos/_execution_context/aio/base_execution_context.py", line 170, in _fetch_next_block
    return await self._fetch_items_helper_with_retries(self._fetch_function)
  File "./azure/cosmos/_execution_context/aio/base_execution_context.py", line 144, in _fetch_items_helper_with_retries
    return await _retry_utility_async.ExecuteAsync(self._client, self._client._global_endpoint_manager, callback)
  File "./azure/cosmos/aio/_retry_utility_async.py", line 81, in ExecuteAsync
    result = await ExecuteFunctionAsync(function, *args, **kwargs)
  File "./azure/cosmos/aio/_retry_utility_async.py", line 138, in ExecuteFunctionAsync
    return await function(*args, **kwargs)
  File "./azure/cosmos/_execution_context/aio/base_execution_context.py", line 142, in callback
    return await self._fetch_items_helper_no_retries(fetch_function)
  File "./azure/cosmos/_execution_context/aio/base_execution_context.py", line 125, in _fetch_items_helper_no_retries
    (fetched_items, response_headers) = await fetch_function(new_options)
  File "./azure/cosmos/aio/_cosmos_client_connection_async.py", line 1722, in fetch_fn
    await self.__QueryFeed(
  File "./azure/cosmos/aio/_cosmos_client_connection_async.py", line 2291, in __QueryFeed
    result, self.last_response_headers = await self.__Post(path, request_params, query, req_headers, **kwargs)
  File "./azure/cosmos/aio/_cosmos_client_connection_async.py", line 751, in __Post
    return await asynchronous_request.AsynchronousRequest(
  File "./azure/cosmos/aio/_asynchronous_request.py", line 175, in AsynchronousRequest
    return await _retry_utility_async.ExecuteAsync(
  File "./azure/cosmos/aio/_retry_utility_async.py", line 79, in ExecuteAsync
    result = await ExecuteFunctionAsync(function, global_endpoint_manager, *args, **kwargs)
  File "./azure/cosmos/aio/_retry_utility_async.py", line 138, in ExecuteFunctionAsync
    return await function(*args, **kwargs)
  File "./azure/cosmos/aio/_asynchronous_request.py", line 100, in _Request
    response = await _PipelineRunFunction(
  File "./azure/cosmos/aio/_asynchronous_request.py", line 141, in _PipelineRunFunction
    return await pipeline_client._pipeline.run(request, **kwargs)
  File "./azure/core/pipeline/_base_async.py", line 215, in run
    return await first_node.send(pipeline_request)
  File "./azure/core/pipeline/_base_async.py", line 83, in send
    response = await self.next.send(request)  # type: ignore
  File "./azure/core/pipeline/_base_async.py", line 83, in send
    response = await self.next.send(request)  # type: ignore
  File "./azure/core/pipeline/_base_async.py", line 83, in send
    response = await self.next.send(request)  # type: ignore
  [Previous line repeated 1 more time]
  File "./azure/cosmos/aio/_retry_utility_async.py", line 194, in send
    raise err
  File "./azure/cosmos/aio/_retry_utility_async.py", line 171, in send
    response = await self.next.send(request)
  File "./azure/core/pipeline/_base_async.py", line 83, in send
    response = await self.next.send(request)  # type: ignore
  File "./azure/core/pipeline/_base_async.py", line 83, in send
    response = await self.next.send(request)  # type: ignore
  File "./azure/core/pipeline/_base_async.py", line 83, in send
    response = await self.next.send(request)  # type: ignore
  [Previous line repeated 1 more time]
  File "./azure/core/pipeline/_base_async.py", line 116, in send
    await self._sender.send(request.http_request, **request.context.options),
  File "./azure/core/pipeline/transport/_aiohttp.py", line 255, in send
    raise ServiceResponseError(err, error=err) from err
azure.core.exceptions.ServiceResponseError: 400, message='Got more than 8190 bytes (11994) when reading Header value is too long.', url=URL('...')
To Reproduce
Reproducing the error is not straightforward. We have seen cases where we can easily iterate over a result set of 100k+ items without errors, while adding or changing a single constraint in the WHERE clause can break the iteration after the first 1k+ results:
SELECT * FROM c                                                      -- runs
SELECT * FROM c WHERE c.some_field = 'SomeValue'                     -- fails
SELECT * FROM c WHERE lower(c.some_field) = 'somevalue'              -- runs
SELECT * FROM c WHERE StringEquals(c.some_field, 'SomeValue', true)  -- fails
Code Snippet
result = [item async for item in container.query_items(query="SELECT * FROM c", max_item_count=100000, enable_cross_partition_query=True)]
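As a rough, stdlib-only illustration of why the snippet above can fail unpredictably: aiohttp rejects any response header value longer than 8190 bytes by default, and Cosmos DB returns query continuation state in the `x-ms-continuation` header, whose size depends on the query shape rather than the result count. The constant mirrors aiohttp's documented default; the helper name is mine.

```python
# aiohttp's default per-header-value limit (bytes); values above it trigger
# "Got more than 8190 bytes (...) when reading Header value is too long."
AIOHTTP_DEFAULT_MAX_FIELD_SIZE = 8190


def fits_default_header_limit(continuation_token: str) -> bool:
    """True if the token would pass aiohttp's default header parser."""
    return len(continuation_token.encode("utf-8")) <= AIOHTTP_DEFAULT_MAX_FIELD_SIZE


# The token sizes reported in the tracebacks above (10904 and 11994 bytes)
# both exceed the limit, while shorter tokens pass unnoticed.
```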
Additional info
Two years ago an apparently similar issue was raised for the Java SDK:
Azure/azure-sdk-for-java#6069
Setup: