SQS ReceiveMessages crashes with 413 REQUEST ENTITY TOO LARGE #1038

Closed
marshally opened this issue May 25, 2022 · 2 comments · Fixed by #1044
Labels: bug (Something isn't working)

@marshally commented:
What version of OpenTelemetry are you using?

❯ grep opentelemetry package.json
    "@opentelemetry/api": "^1.1.0",
    "@opentelemetry/auto-instrumentations-node": "^0.28.0",
    "@opentelemetry/exporter-trace-otlp-grpc": "^0.28.0",
    "@opentelemetry/resources": "^1.2.0",
    "@opentelemetry/semantic-conventions": "^1.0.1",
    "@opentelemetry/sdk-node": "^0.28.0",
    "@opentelemetry/sdk-trace-base": "^1.2.0",

What version of Node are you using?

14.15.4

What did you do?

Example code/test case

https://github.com/marshally/aws_sdk_sqs_message_attributes_test/

Text description

We have an SQS consumer process with a while loop that calls ReceiveMessage over and over.

async function sqsConsumer() {
  while (true) {
    // Long-poll for the next batch of messages.
    const resp = await sqs.receiveMessage(receiveMessageConfig).promise();

    const messages = resp.Messages || [];
    console.log(messages);

    if (messages.length) {
      const received_message_ids = messages.map(({ ReceiptHandle }, k) => ({
        Id: k.toString(),
        ReceiptHandle,
      }));

      // do stuff

      // Delete the processed batch so the messages are not redelivered.
      const deletionPromise = sqs
        .deleteMessageBatch({
          QueueUrl: receiveMessageConfig.QueueUrl,
          Entries: received_message_ids,
        })
        .promise();

      await deletionPromise;
    }
  }
}
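
For completeness, a minimal sketch of the sqs client and receiveMessageConfig used above, reconstructed from the ReceiveMessage request bodies in the debug log below (the AWS SDK v2 setup and variable names are assumptions; the real values live in the linked repro repository):

const AWS = require('aws-sdk');

const sqs = new AWS.SQS({ region: 'us-east-1' });

// Assumed config; the parameter values mirror the request body in the debug log below.
const receiveMessageConfig = {
  QueueUrl: 'https://sqs.us-east-1.amazonaws.com/623766430081/MCY-API-SQS',
  AttributeNames: ['SentTimestamp'],
  MaxNumberOfMessages: 1,
  MessageAttributeNames: ['All'],
  VisibilityTimeout: 20,
  WaitTimeSeconds: 1,
};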

What did you expect to see?

I expected my loop to run indefinitely without crashing.

What did you see instead?

After some hours of running, the process crashes because AWS returns 413 REQUEST ENTITY TOO LARGE.

Additional context

Digging deeper, we found that @opentelemetry/instrumentation-aws-sdk is automatically adding new MessageAttributeNames entries to each call to ReceiveMessage. When those MessageAttributeNames grow past 256 KB, AWS returns 413.

$ node index.js
writing tracing data to NoopSpanProcessor

ReceiveMessage
HTTP BODY DEBUG LOG ------> Action=ReceiveMessage&AttributeName.1=SentTimestamp&MaxNumberOfMessages=1&MessageAttributeName.1=All&QueueUrl=https%3A%2F%2Fsqs.us-east-1.amazonaws.com%2F623766430081%2FMCY-API-SQS&Version=2012-11-05&VisibilityTimeout=20&WaitTimeSeconds=1
Tracing initialized

ReceiveMessage
HTTP BODY DEBUG LOG ------> Action=ReceiveMessage&AttributeName.1=SentTimestamp&MaxNumberOfMessages=1&MessageAttributeName.1=All&MessageAttributeName.2=traceparent&MessageAttributeName.3=tracestate&MessageAttributeName.4=baggage&QueueUrl=https%3A%2F%2Fsqs.us-east-1.amazonaws.com%2F623766430081%2FMCY-API-SQS&Version=2012-11-05&VisibilityTimeout=20&WaitTimeSeconds=1

ReceiveMessage
HTTP BODY DEBUG LOG ------> Action=ReceiveMessage&AttributeName.1=SentTimestamp&MaxNumberOfMessages=1&MessageAttributeName.1=All&MessageAttributeName.2=traceparent&MessageAttributeName.3=tracestate&MessageAttributeName.4=baggage&MessageAttributeName.5=traceparent&MessageAttributeName.6=tracestate&MessageAttributeName.7=baggage&QueueUrl=https%3A%2F%2Fsqs.us-east-1.amazonaws.com%2F623766430081%2FMCY-API-SQS&Version=2012-11-05&VisibilityTimeout=20&WaitTimeSeconds=1
marshally added the bug label on May 25, 2022.

@blumamir (Member) commented:
Digging deeper, we found that @opentelemetry/instrumentation-aws-sdk is automatically adding new MessageAttributeNames entries to each call to ReceiveMessage

This is expected behavior: MessageAttributes are how the instrumentation propagates remote context.
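
For context, the attribute names being injected come from propagation.fields(); with the default W3C trace context and baggage propagators these are exactly the names that show up in the log above. A minimal illustration, assuming those propagators are registered globally:

const { propagation } = require('@opentelemetry/api');

// With the W3C trace context and baggage propagators registered (the NodeSDK default),
// this prints ['traceparent', 'tracestate', 'baggage'], the same names that keep
// getting appended to MessageAttributeNames in the log above.
console.log(propagation.fields());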

When those MessageAttributeNames grow past 256 KB, AWS returns 413

I guess this is the problem:

          request.commandInput.MessageAttributeNames = (
            request.commandInput.MessageAttributeNames ?? []
          ).concat(propagation.fields());

This code simply concatenates the propagation fields, but if the same request object is reused, they get concatenated again and again until the request hits the AWS size limit.
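
A minimal sketch of one possible fix (my assumption, not necessarily the approach ultimately taken in #1044): only append the propagation fields that are not already present, so reusing the same command input does not grow MessageAttributeNames on every call.

// Deduplicate before appending, so repeated invocations on a reused
// request leave MessageAttributeNames unchanged after the first call.
const existing = request.commandInput.MessageAttributeNames ?? [];
const missing = propagation.fields().filter((field) => !existing.includes(field));
request.commandInput.MessageAttributeNames = existing.concat(missing);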

Thanks for finding and reporting this. Would you like to contribute a fix?

@marshally (Author) commented:

Would you like to contribute a fix?

I'll try!
