
Response content in custom RetryHandler remains compressed #2545

Closed
MichalLechowski opened this issue Jun 13, 2024 · 6 comments

Comments

@MichalLechowski

Describe the bug

When trying to read the response content inside a custom RetryHandler, the content is still compressed and has to be decompressed manually. The default compression handler runs after the retry handler, so the content gets decompressed too late.

Expected behavior

The content should already be decompressed by the time it reaches the custom RetryHandler.

How to reproduce

Graph client initialization:

HttpClientHandler httpClientHandler = new HttpClientHandler();
CustomRetryHandler customRetryHandler = new CustomRetryHandler(httpClientHandler);
var handlers = GraphClientFactory.CreateDefaultHandlers();
var defaultRetryHandler = handlers.Where(h => h is RetryHandler).FirstOrDefault();
var retryHandlerIndex = handlers.IndexOf(defaultRetryHandler);
handlers.Remove(defaultRetryHandler);
handlers[retryHandlerIndex] = customRetryHandler;
var httpClient = GraphClientFactory.Create(handlers);
_graphServiceClient = new GraphServiceClient(httpClient, onBehalfOfCredential, scopes);

Custom RetryHandler where the problem is:

public class CustomRetryHandler : DelegatingHandler
{
    private readonly int _maxRetries;
    private readonly TimeSpan _baseDelay;

    public CustomRetryHandler(HttpMessageHandler innerHandler, int maxRetries = 5, TimeSpan? baseDelay = null)
        : base(innerHandler)
    {
        _maxRetries = maxRetries;
        _baseDelay = baseDelay ?? TimeSpan.FromSeconds(4);
    }

    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var response = await base.SendAsync(request, cancellationToken);

        var content = response.Content; // COMPRESSED at this point

The content here is compressed and requires manual decompression like the code below, which I think should not be necessary:

// Workaround: manually gunzip the body before it can be deserialized.
using (var compressedStream = new MemoryStream(await response.Content.ReadAsByteArrayAsync()))
using (var zipStream = new GZipStream(compressedStream, CompressionMode.Decompress))
using (var resultStream = new MemoryStream())
{
    zipStream.CopyTo(resultStream);
    resultStream.Position = 0;
    var streamArray = resultStream.ToArray();
    var stringFromStream = System.Text.Encoding.Default.GetString(streamArray);
    var odataError = JsonConvert.DeserializeObject<ODataError>(stringFromStream);
}
        // ... retry/backoff logic omitted here (shown later in this thread) ...
    }
}

SDK Version

5.53.0

Latest version known to work for scenario above?

No response

Known Workarounds

Manual decompression

Debug output

_No response_


### Configuration

_No response_

### Other information

The problem seems to be the order in which the handlers execute: the default CompressionHandler only decompresses the response after the RetryHandler has already seen it. The default order of handlers is as follows (a small snippet to dump this order is included after the list):
CompressionHandler
UriReplacementHandler`1[Microsoft.Kiota.Http.HttpClientLibrary.Middleware.Options.UriReplacementHandlerOption]
CustomRetryHandler (this is the same place as default RetryHandler)
ParametersNameDecodingHandler
UserAgentHandler
HeadersInspectionHandler
TelemetryHandler
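A minimal sketch of how that order can be dumped, just enumerating the defaults from the same GraphClientFactory used above (the exact output format may differ):

// Sketch: print the type of each default handler to confirm the pipeline order.
var handlers = GraphClientFactory.CreateDefaultHandlers();
foreach (var handler in handlers)
{
    Console.WriteLine(handler.GetType());
}
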
@MichalLechowski MichalLechowski added status:waiting-for-triage An issue that is yet to be reviewed or assigned type:bug A broken experience labels Jun 13, 2024
@andrueastman
Member

Thanks for raising this @MichalLechowski

The default RetryHandler isn't expected to read the response content, and it is currently placed after the CompressionHandler in the pipeline, which explains the scenario you're seeing.

If you wish to get the uncompressed content, you could place the custom handler in front of the CompressionHandler so that you can read the body without the workaround.

Out of curiosity, is there a specific reason you wish to read the response body in the handler? Ideally, the status code/headers should give enough info to decide whether to retry.
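
For illustration, a minimal sketch of that reordering, assuming the CustomRetryHandler from the issue description (CompressionHandler and RetryHandler refer to the Kiota middleware types; the index lookup is just one way to do it):

// Sketch: drop the default RetryHandler and insert the custom one just before
// the CompressionHandler, so the response body it inspects is already decompressed.
var handlers = GraphClientFactory.CreateDefaultHandlers();

var defaultRetryHandler = handlers.FirstOrDefault(h => h is RetryHandler);
if (defaultRetryHandler != null)
{
    handlers.Remove(defaultRetryHandler);
}

var compressionHandler = handlers.FirstOrDefault(h => h is CompressionHandler);
var insertAt = compressionHandler != null ? handlers.IndexOf(compressionHandler) : 0;
handlers.Insert(insertAt, customRetryHandler);

var httpClient = GraphClientFactory.Create(handlers);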

@andrueastman andrueastman added status:waiting-for-author-feedback Issue that we've responded but needs author feedback to close and removed status:waiting-for-triage An issue that is yet to be reviewed or assigned labels Jun 13, 2024
@MichalLechowski
Author

MichalLechowski commented Jun 13, 2024

Actually, there is an underlying issue that I work around by reading the actual error rather than just the status code. When I create multiple MS Teams channels, with something like:

foreach (var channel in channelNames)
{
    await _graphServiceClient.Teams[teamId.ToString()]
                             .Channels
                             .PostAsync(new Channel() { DisplayName = channel },
                                 requestConfiguration => 
                                 requestConfiguration.Options.Add(new ResponseHandlerOption() { ResponseHandler = _nativeResponseHandler }));
}

I occasionally get a BadRequest response, even though the resource was successfully created.
So for instance the flow is:

  1. Call create channel "Hello"
  2. TooManyRequests happens (let's say because I created 50 channels and it throttles now)
  3. RetryHandler tries again with the set backoff delay
  4. BadRequest happens

But when I take a look at MS Teams, that channel got created successfully, so it's a misleading BadRequest and I should stop retrying to create an already existing channel. I wanted to know what was inside the error, and it turned out that after decompression and deserialization the error is "Channel name already exists", meaning the request was retried even though the channel had already been created. That confirmed my suspicion. The problem is that I never got a success code; I got a 429, then a 400, yet the channel was created.

I have no idea why it behaves this way. If you have any idea what might be wrong, please let me know.


RetryHandler logic for it:

// Runs inside the retry loop of CustomRetryHandler.SendAsync
// (attempt goes from 1 to _maxRetries).
if (response.IsSuccessStatusCode)
{
    Console.WriteLine($"Success after: {attempt} attempt(-s)");
    return response;
}
else
{
    Console.WriteLine($"Request to {request.RequestUri} failed with status code {response.StatusCode}. Attempt {attempt} of {_maxRetries}");

    if (response.StatusCode == (HttpStatusCode)429) // Too Many Requests
    {
        var retryAfter = response.Headers.RetryAfter?.Delta ?? GetExponentialBackoffDelay(attempt);
        Console.WriteLine($"Retry-After: {retryAfter.TotalSeconds} seconds. Retrying after this time... Attempt {attempt} of {_maxRetries}.");
        await Task.Delay(retryAfter, cancellationToken);
    }
    else
    {
        // Manual decompression workaround: the response body is still gzip-compressed here.
        using (var compressedStream = new MemoryStream(await response.Content.ReadAsByteArrayAsync()))
        using (var zipStream = new GZipStream(compressedStream, CompressionMode.Decompress))
        using (var resultStream = new MemoryStream())
        {
            zipStream.CopyTo(resultStream);
            resultStream.Position = 0;
            var streamArray = resultStream.ToArray();
            var stringFromStream = System.Text.Encoding.Default.GetString(streamArray);
            var odataError = JsonConvert.DeserializeObject<ODataError>(stringFromStream);
            if (odataError.Message == "Channel name already existed, please use other name")
            {
                Console.WriteLine("Channel created successfully even though 429 occurred.");
                break; // exit the retry loop; the channel already exists
            }
        }

        var delay = GetExponentialBackoffDelay(attempt);
        Console.WriteLine($"Calculated exponential backoff delay: {delay.TotalSeconds} seconds for attempt {attempt}");
        await Task.Delay(delay, cancellationToken);
    }
}
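
GetExponentialBackoffDelay isn't shown in the issue; a hypothetical implementation, assuming it simply doubles the configured base delay per attempt, might look like:

// Hypothetical helper (not taken from the issue): exponential backoff derived
// from the configured base delay, e.g. 4s, 8s, 16s, ... for attempts 1, 2, 3, ...
private TimeSpan GetExponentialBackoffDelay(int attempt)
{
    return TimeSpan.FromSeconds(_baseDelay.TotalSeconds * Math.Pow(2, attempt - 1));
}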

@microsoft-github-policy-service microsoft-github-policy-service bot added Needs: Attention 👋 and removed status:waiting-for-author-feedback Issue that we've responded but needs author feedback to close labels Jun 13, 2024
@andrueastman
Member

Apologies for the delayed response @MichalLechowski

Is there a specific reason not to do something like this? Using an exception filter you can handle the error scenario as below.

try
{
    await graphClient.Teams[teamId.ToString()]
        .Channels
        .PostAsync(new Channel() { DisplayName = channel });
}
catch (ODataError error) when (error.Message.Equals("Channel name already existed, please use other name",
                                   StringComparison.OrdinalIgnoreCase))
{
    // Do nothing, as the channel was already created during a previous (throttled) attempt
}
catch (ODataError error)
{
    await Console.Error.WriteLineAsync(error.Message);
}

Also, would you be willing to submit feedback on the API behavior returning 429 on successful call at https://aka.ms/graphfeedback?

@andrueastman andrueastman added status:waiting-for-author-feedback Issue that we've responded but needs author feedback to close and removed Needs: Attention 👋 labels Jun 25, 2024
@MichalLechowski
Author

I submitted feedback as requested.

As to your suggestion, I don't want to spread retry logic across multiple places; it's not the Graph client's responsibility to take care of it, it's the retry handler's. Not to mention that sometimes a BadRequest response does require a retry, e.g. in the case of "Folder location for this channel is not ready yet, please try again later". I suppose the real solution is to return a Created status when the channel got created, instead of errors.

I am leaving the decompression and deserialization in place for now; we don't care if it takes a few extra seconds for a background job to finish processing, but it certainly should not behave like this.

If there are any other suggestions or debugging to do, let me know, I'm willing to try anything.

@microsoft-github-policy-service microsoft-github-policy-service bot added Needs: Attention 👋 and removed status:waiting-for-author-feedback Issue that we've responded but needs author feedback to close Status: No recent activity labels Jun 30, 2024
@andrueastman
Member

andrueastman commented Jul 2, 2024

Thanks for getting back @MichalLechowski

As you correctly put it, the best thing would be for the correct status to be returned (thanks for filing the feedback).

To avoid having the decompression logic in your custom handler, all you would need to do is

 handlers.Insert(0, customRetryHandler);

instead of

handlers[retryHandlerIndex] = customRetryHandler;

That way, the default CompressionHandler gets an opportunity to decompress the response before your retry logic inspects it.
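
Put together, a sketch of the revised initialization from the original report (unchanged apart from the Insert call):

// Sketch: same setup as in the issue description, but the custom retry handler
// is inserted at index 0 (outermost), so the CompressionHandler further down the
// pipeline has already decompressed the response by the time this handler sees it.
HttpClientHandler httpClientHandler = new HttpClientHandler();
CustomRetryHandler customRetryHandler = new CustomRetryHandler(httpClientHandler);

var handlers = GraphClientFactory.CreateDefaultHandlers();
var defaultRetryHandler = handlers.FirstOrDefault(h => h is RetryHandler);
if (defaultRetryHandler != null)
{
    handlers.Remove(defaultRetryHandler);
}
handlers.Insert(0, customRetryHandler);

var httpClient = GraphClientFactory.Create(handlers);
_graphServiceClient = new GraphServiceClient(httpClient, onBehalfOfCredential, scopes);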

As there's no further action on the SDK side, we'll look to closing this issue.

@andrueastman andrueastman added status:waiting-for-author-feedback Issue that we've responded but needs author feedback to close and removed Needs: Attention 👋 labels Jul 2, 2024
@microsoft-github-policy-service bot
Contributor

This issue has been automatically marked as stale because it has been marked as requiring author feedback but has not had any activity for 4 days. It will be closed if no further activity occurs within 3 days of this comment.
