Describe the bug
We currently only send the accumulated log batch once the configured log_batch_size (number of entries) is about to be exceeded:
https://github.com/reportportal/client-Python/blob/master/reportportal_client/core/log_manager.py#L72
This method also needs to watch the current batch payload size, as the RP server refuses requests that exceed the pre-configured maximum payload size:
reportportal_client.errors.ResponseError: 5000: Unclassified error [Maximum upload size of 67108864 bytes exceeded; nested exception is org.apache.commons.fileupload.FileUploadBase$FileSizeLimitExceededException: The field json_request_part exceeds its maximum permitted size of 67108864 bytes.]
Steps to Reproduce
Either set log_batch_size to a really high value and log a lot of messages, or log a single very long message (tl;dr: generate a payload larger than the maximum payload size allowed on the server; the default is probably 67108864 bytes).
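The failure condition can be demonstrated without a running server: a minimal sketch (the batch shape and field names below are illustrative, not the client's exact wire format) showing that a single long message already pushes the serialized batch past the default limit:

```python
import json

# Default server-side limit, as reported in the error message.
MAX_PAYLOAD_SIZE = 67108864  # 64 MiB

# Illustrative batch shaped like a JSON log request; one very long message.
huge_message = "x" * (65 * 1024 * 1024)
batch = [{"level": "INFO", "message": huge_message}]

payload_size = len(json.dumps(batch).encode("utf-8"))
print(payload_size > MAX_PAYLOAD_SIZE)  # → True: the server would reject this request
```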
Expected behavior
Implement a configurable parameter, log_batch_payload_size. The _log_process function should watch it and "flush" (send) the current batch before appending another log message whenever the payload limit is about to be exceeded.
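The flush-before-append idea could look roughly like this; PayloadAwareBatcher, its parameters, and the size accounting are all hypothetical, not the client's actual API:

```python
import json

MAX_BATCH_PAYLOAD = 67108864  # hypothetical default for log_batch_payload_size


class PayloadAwareBatcher:
    """Illustrative buffer that flushes on payload size, not just entry count."""

    def __init__(self, send, max_payload=MAX_BATCH_PAYLOAD, max_entries=20):
        self._send = send              # callable that ships one batch to the server
        self._max_payload = max_payload
        self._max_entries = max_entries
        self._batch = []
        self._payload_size = 2         # bytes for the enclosing "[]"

    def log(self, entry):
        # Approximate this entry's serialized size (+1 for a separating comma).
        entry_size = len(json.dumps(entry).encode("utf-8")) + 1
        # Flush BEFORE appending if the entry would push the payload over the limit.
        if self._batch and self._payload_size + entry_size > self._max_payload:
            self.flush()
        self._batch.append(entry)
        self._payload_size += entry_size
        if len(self._batch) >= self._max_entries:
            self.flush()

    def flush(self):
        if self._batch:
            self._send(self._batch)
            self._batch = []
            self._payload_size = 2
```

With a small max_payload for demonstration, feeding entries through log() yields several batches, each of which serializes to at most the configured limit.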
Figure out a sane way of handling a single log entry that by itself exceeds the limit; perhaps a string-splitting strategy could apply, with a WARNING message being logged.
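One possible shape for that strategy, as a sketch (split_oversized and the per-entry cap are hypothetical; a real implementation would measure encoded bytes and account for JSON overhead):

```python
import logging

MAX_ENTRY_SIZE = 1024 * 1024  # hypothetical per-entry cap, well under the server limit


def split_oversized(message, limit=MAX_ENTRY_SIZE):
    """Split one oversized log message into limit-sized chunks.

    Counts characters rather than encoded bytes, for simplicity; emits a
    WARNING so users know the entry was split rather than dropped.
    """
    if len(message) <= limit:
        return [message]
    logging.warning(
        "Log entry of %d characters exceeds %d; splitting into chunks",
        len(message), limit,
    )
    return [message[i:i + limit] for i in range(0, len(message), limit)]
```

Joining the returned chunks reproduces the original message, so nothing is lost on the server side beyond the entry being spread over several log records.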
Actual behavior
The client crashes, raising a ResponseError.
Package versions
5.2.0