
--experimental_remote_cache_compression causes incomplete writes #14654

Closed
BalestraPatrick opened this issue Jan 27, 2022 · 20 comments
Labels
team-Remote-Exec Issues and PRs for the Execution (Remote) team untriaged

Comments

@BalestraPatrick (Member) commented Jan 27, 2022

Description of the problem / feature request:

We upgraded to Bazel 5.0.0 and are currently trying out --experimental_remote_cache_async. During our build, I see over 1000 messages like this printed to stdout, possibly caused by BulkTransferExceptions:

WARNING: Remote Cache: 3 errors during bulk transfer
com.google.devtools.build.lib.remote.common.BulkTransferException: 3 errors during bulk transfer
	at com.google.devtools.build.lib.remote.util.RxUtils$BulkTransferExceptionCollector.onResult(RxUtils.java:91)
	at io.reactivex.rxjava3.internal.operators.flowable.FlowableCollectSingle$CollectSubscriber.onNext(FlowableCollectSingle.java:94)
	at io.reactivex.rxjava3.internal.operators.flowable.FlowableFlatMapSingle$FlatMapSingleSubscriber.innerSuccess(FlowableFlatMapSingle.java:173)
	at io.reactivex.rxjava3.internal.operators.flowable.FlowableFlatMapSingle$FlatMapSingleSubscriber$InnerObserver.onSuccess(FlowableFlatMapSingle.java:342)
	at io.reactivex.rxjava3.internal.operators.single.SingleDoFinally$DoFinallyObserver.onSuccess(SingleDoFinally.java:73)
	at io.reactivex.rxjava3.internal.observers.ResumeSingleObserver.onSuccess(ResumeSingleObserver.java:46)
	at io.reactivex.rxjava3.internal.operators.single.SingleJust.subscribeActual(SingleJust.java:30)
	at io.reactivex.rxjava3.core.Single.subscribe(Single.java:4855)
	at io.reactivex.rxjava3.internal.operators.single.SingleResumeNext$ResumeMainSingleObserver.onError(SingleResumeNext.java:80)
	at io.reactivex.rxjava3.internal.operators.completable.CompletableToSingle$ToSingle.onError(CompletableToSingle.java:73)
	at io.reactivex.rxjava3.internal.operators.completable.CompletableCreate$Emitter.tryOnError(CompletableCreate.java:91)
	at io.reactivex.rxjava3.internal.operators.completable.CompletableCreate$Emitter.onError(CompletableCreate.java:77)
	at com.google.devtools.build.lib.remote.util.RxFutures$OnceCompletableOnSubscribe$1.onFailure(RxFutures.java:102)
	at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1074)
	at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
	at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
	at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
	at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
	at com.google.devtools.build.lib.remote.util.RxFutures$CompletableFuture.setException(RxFutures.java:276)
	at com.google.devtools.build.lib.remote.util.RxFutures$1.onError(RxFutures.java:210)
	at io.reactivex.rxjava3.internal.operators.completable.CompletableFromSingle$CompletableFromSingleObserver.onError(CompletableFromSingle.java:41)
	at io.reactivex.rxjava3.internal.operators.single.SingleCreate$Emitter.tryOnError(SingleCreate.java:95)
	at io.reactivex.rxjava3.internal.operators.single.SingleCreate$Emitter.onError(SingleCreate.java:81)
	at com.google.devtools.build.lib.remote.util.AsyncTaskCache$1.onError(AsyncTaskCache.java:306)
	at com.google.devtools.build.lib.remote.util.AsyncTaskCache$Execution.onError(AsyncTaskCache.java:197)
	at io.reactivex.rxjava3.internal.operators.completable.CompletableToSingle$ToSingle.onError(CompletableToSingle.java:73)
	at io.reactivex.rxjava3.internal.operators.completable.CompletableCreate$Emitter.tryOnError(CompletableCreate.java:91)
	at io.reactivex.rxjava3.internal.operators.completable.CompletableCreate$Emitter.onError(CompletableCreate.java:77)
	at com.google.devtools.build.lib.remote.util.RxFutures$OnceCompletableOnSubscribe$1.onFailure(RxFutures.java:102)
	at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1066)
	at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
	at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
	at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
	at com.google.common.util.concurrent.AbstractFuture.setFuture(AbstractFuture.java:814)
	at com.google.common.util.concurrent.AbstractCatchingFuture.run(AbstractCatchingFuture.java:115)
	at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
	at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
	at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
	at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
	at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:100)
	at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
	at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
	at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
	at com.google.common.util.concurrent.AbstractFuture.setFuture(AbstractFuture.java:814)
	at com.google.common.util.concurrent.AbstractTransformFuture$AsyncTransformFuture.setResult(AbstractTransformFuture.java:224)
	at com.google.common.util.concurrent.AbstractTransformFuture$AsyncTransformFuture.setResult(AbstractTransformFuture.java:202)
	at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:163)
	at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
	at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
	at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
	at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:746)
	at com.google.common.util.concurrent.AbstractCatchingFuture.run(AbstractCatchingFuture.java:110)
	at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
	at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
	at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
	at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:746)
	at com.google.common.util.concurrent.AbstractCatchingFuture.run(AbstractCatchingFuture.java:110)
	at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
	at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
	at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
	at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:746)
	at com.google.common.util.concurrent.AbstractCatchingFuture.run(AbstractCatchingFuture.java:110)
	at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
	at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
	at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
	at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:746)
	at com.google.common.util.concurrent.AbstractTransformFuture$TransformFuture.setResult(AbstractTransformFuture.java:247)
	at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:163)
	at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
	at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
	at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
	at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:746)
	at com.google.devtools.build.lib.remote.util.RxFutures$CompletableFuture.set(RxFutures.java:270)
	at com.google.devtools.build.lib.remote.util.RxFutures$2.onSuccess(RxFutures.java:233)
	at io.reactivex.rxjava3.internal.operators.single.SingleFlatMap$SingleFlatMapCallback$FlatMapSingleObserver.onSuccess(SingleFlatMap.java:112)
	at io.reactivex.rxjava3.internal.operators.single.SingleUsing$UsingSingleObserver.onSuccess(SingleUsing.java:154)
	at io.reactivex.rxjava3.internal.operators.single.SingleCreate$Emitter.onSuccess(SingleCreate.java:68)
	at com.google.devtools.build.lib.remote.util.RxFutures$OnceSingleOnSubscribe$1.onSuccess(RxFutures.java:155)
	at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1080)
	at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
	at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
	at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
	at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:746)
	at com.google.common.util.concurrent.SettableFuture.set(SettableFuture.java:47)
	at com.google.devtools.build.lib.remote.ByteStreamUploader$AsyncUpload$1.onClose(ByteStreamUploader.java:579)
	at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
	at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
	at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
	at com.google.devtools.build.lib.remote.logging.LoggingInterceptor$LoggingForwardingCall$1.onClose(LoggingInterceptor.java:157)
	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:557)
	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:738)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:717)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
	Suppressed: java.io.IOException: write incomplete: committed_size 721 for 374 total
		at com.google.devtools.build.lib.remote.ByteStreamUploader$AsyncUpload.lambda$start$2(ByteStreamUploader.java:448)
		at com.google.common.util.concurrent.AbstractTransformFuture$AsyncTransformFuture.doTransform(AbstractTransformFuture.java:213)
		at com.google.common.util.concurrent.AbstractTransformFuture$AsyncTransformFuture.doTransform(AbstractTransformFuture.java:202)
		at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:118)
		... 51 more
	Suppressed: java.io.IOException: write incomplete: committed_size 144 for 125 total
		at com.google.devtools.build.lib.remote.ByteStreamUploader$AsyncUpload.lambda$start$2(ByteStreamUploader.java:448)
		at com.google.common.util.concurrent.AbstractTransformFuture$AsyncTransformFuture.doTransform(AbstractTransformFuture.java:213)
		at com.google.common.util.concurrent.AbstractTransformFuture$AsyncTransformFuture.doTransform(AbstractTransformFuture.java:202)
		at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:118)
		... 51 more
	Suppressed: java.io.IOException: write incomplete: committed_size 2 for 14 total
		at com.google.devtools.build.lib.remote.ByteStreamUploader$AsyncUpload.lambda$start$2(ByteStreamUploader.java:448)
		at com.google.common.util.concurrent.AbstractTransformFuture$AsyncTransformFuture.doTransform(AbstractTransformFuture.java:213)
		at com.google.common.util.concurrent.AbstractTransformFuture$AsyncTransformFuture.doTransform(AbstractTransformFuture.java:202)
		at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:118)
		... 51 more

Feature requests: what underlying problem are you trying to solve with this feature?

I'm testing the new --experimental_remote_cache_compression flag to speed up artifacts download and upload.

Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.

I couldn't reproduce it in isolation yet; this is a pretty big project with a lot of flags. Maybe this flag conflicts with one of these as well:

build --remote_timeout=600s
build --remote_upload_local_results
build --remote_download_toplevel

macOS 12.1

What's the output of bazel info release?

release 5.0.0

@brentleyjones (Contributor)

Is --experimental_remote_cache_compression being used as well?

@BalestraPatrick (Member, Author) commented Jan 27, 2022

Actually, the cause of this error seems to be --experimental_remote_cache_compression. In my early local tests that flag reduced build times by 20-25%, but I see a BulkTransferException in every single CI build. I've removed --experimental_remote_cache_async and enabled only --experimental_remote_cache_compression, and the problem persists.

@BalestraPatrick BalestraPatrick changed the title --experimental_remote_cache_async causes incomplete writes --experimental_remote_cache_compression causes incomplete writes Jan 27, 2022
@brentleyjones (Contributor)

This is where those errors are thrown:

// Only check for matching committed size if we have completed the upload.
// If another client did, they might have used a different compression
// level/algorithm, so we cannot know the expected committed offset
long committedSize = committedOffset.get();
long expected = chunker.getOffset();
if (!chunker.hasNext() && committedSize != expected) {
  String message =
      format(
          "write incomplete: committed_size %d for %d total",
          committedSize, expected);
  return Futures.immediateFailedFuture(new IOException(message));
}
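Plugging the first reported pair of numbers into that condition shows why it trips (a standalone sketch, not Bazel's actual code path; the values come straight from the suppressed exception above):

```java
public class CommittedSizeCheck {
    public static void main(String[] args) {
        long committedSize = 721;      // committed_size returned by the server
        long expected = 374;           // chunker.getOffset(): compressed bytes this client sent
        boolean uploadComplete = true; // i.e. !chunker.hasNext()
        if (uploadComplete && committedSize != expected) {
            // Reproduces the exact message seen in the logs.
            System.out.println(String.format(
                "write incomplete: committed_size %d for %d total", committedSize, expected));
        }
    }
}
```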

@bduffany (Contributor) commented Jan 27, 2022

Highlighting the relevant parts of the error:

	Suppressed: java.io.IOException: write incomplete: committed_size 721 for 374 total
	Suppressed: java.io.IOException: write incomplete: committed_size 144 for 125 total
	Suppressed: java.io.IOException: write incomplete: committed_size 2 for 14 total

I am not super familiar with the byte stream uploader (attempting to grok it now), but is it possible that the !chunker.hasNext() condition does not necessarily imply that this uploader was the one that completed the upload, as the comment states? (Could chunker.hasNext() return false in some cases when there was only one chunk to flush, and the server reported that the blob already exists, returning the blob's uncompressed size as the committed size?)

Asking because those error messages look as though the server is returning uncompressed blob sizes as the committed size (e.g. a 721-byte blob compressing to 374 bytes looks about right, and a 2-byte blob compressing to 14 bytes looks accurate as well, since the compression framing overhead requires more space than the bytes themselves).
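The overhead effect on tiny blobs is easy to demonstrate. A small standalone sketch, using java.util.zip.Deflater as a stand-in for the zstd compression REAPI actually uses (zstd is not in the JDK): a 2-byte payload grows when compressed, just like the "2 for 14 total" case.

```java
import java.util.zip.Deflater;

public class TinyBlobOverhead {
    public static void main(String[] args) {
        byte[] input = new byte[] {0x41, 0x42}; // a 2-byte blob, as in "committed_size 2 for 14 total"
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] out = new byte[64];
        int compressedLen = deflater.deflate(out);
        deflater.end();
        // Framing overhead (header + checksum) dominates for tiny payloads,
        // so the "compressed" stream is larger than the input.
        System.out.println("compressed larger than input: " + (compressedLen > input.length));
    }
}
```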

@bduffany (Contributor) commented Jan 27, 2022

CC @coeuvre , @benjaminp , @buchgr , @Wyverald

@brentleyjones (Contributor)

CC @bazelbuild/remote-execution

@bduffany (Contributor) commented Jan 27, 2022

I started a draft PR which has a failing test illustrating the issue, and I am fairly certain that the fake ByteStream impl in the test adheres to protocol, but please correct me if I'm wrong: #14655

I think there is an ambiguity with the current way that the ByteStream.Write protocol is specified. The exact spec is here: https://github.com/bazelbuild/remote-apis/blob/636121a32fa7b9114311374e4786597d8e7a69f3/build/bazel/remote/execution/v2/remote_execution.proto#L256-L264

// When attempting an upload, if another client has already completed the upload
// (which may occur in the middle of a single upload if another client uploads
// the same blob concurrently), the request will terminate immediately with
// a response whose `committed_size` is the full size of the uploaded file
// (regardless of how much data was transmitted by the client). If the client
// completes the upload but the
// [Digest][build.bazel.remote.execution.v2.Digest] does not match, an
// `INVALID_ARGUMENT` error will be returned. In either case, the client should
// not attempt to retry the upload.

In summary, one of two things can happen in response to clients sending their final chunk:

  1. The server can respond with committed_size = client_uploaded_size which means that the client successfully performed the upload, or
  2. the server can respond with committed_size = "full size of the uploaded file"

But, what does "full size of the uploaded file" mean?

  • If it means "compressed size of the file, as uploaded by the other client," then the "full size" could be pretty arbitrary since there is more than one valid way to compress a file, and it's useless to return this to the client.
  • But if it means "uncompressed size of the file" (i.e. Digest.size_bytes), then the client needs to check that, once they have received a WriteResponse and they have already uploaded all local chunks, then either the WriteResponse.committed_size equals Digest.size_bytes, or it equals the total number of bytes written by the local client (chunker.getOffset()) -- because the protocol gives us no way of knowing whether the successful upload of the last chunk was due to the local client's chunk being accepted, or if it was due to some other client completing their upload.

Given that the former interpretation would be useless to clients, I imagine the latter interpretation is what was originally intended. If so, I think it implies that the current implementation of ByteStreamUploader has a bug, because it does not check whether the committed_size could equal Digest.size_bytes. The current Bazel implementation seems to have used the first interpretation, based on this comment:

              // Only check for matching committed size if we have completed the upload.
              // If another client did, they might have used a different compression
              // level/algorithm, so we cannot know the expected committed offset
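Under the latter interpretation, the client-side fix could be sketched like this (a standalone illustration with made-up method names and the numbers from this issue, not the actual ByteStreamUploader patch): treat the write as complete if committed_size matches either the bytes this client sent or the uncompressed Digest.size_bytes.

```java
public class RelaxedCommitCheck {
    // Hypothetical helper: accept either of the two values a compliant
    // server might legitimately return for committed_size.
    static boolean writeLooksComplete(long committedSize, long bytesSent, long digestSizeBytes) {
        return committedSize == bytesSent || committedSize == digestSizeBytes;
    }

    public static void main(String[] args) {
        // Reported case: client sent 374 compressed bytes; another client
        // finished first, so the server returned the uncompressed size 721.
        System.out.println(writeLooksComplete(721, 374, 721)); // other client completed
        System.out.println(writeLooksComplete(374, 374, 721)); // this client completed
        System.out.println(writeLooksComplete(500, 374, 721)); // genuinely incomplete
    }
}
```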

CC @mostynb who may be able to help clarify the spec

@mostynb (Contributor) commented Jan 27, 2022

You're right that WriteResponse.committed_size is kind of useless for compressed blobs, but I don't think clients need to check that value: the server should return an error status code if and only if the write failed, and the client can check that instead of the committed_size field.

@bduffany (Contributor)

@mostynb Thanks for your input!

I would be happy to send a PR so that Bazel stops checking the committed_size in the WriteResponse, so long as the folks familiar with the ByteStreamUploader don't see any problems with removing the check. I am not 100% sure whether this would be a breaking change, but it seems like it wouldn't be. If anything, it might break some expectations on the remote cache side, so it would help to have input from some remote cache implementation maintainers as well.

Separately, I am trying to figure out whether a remote cache implementation can work around the committed_size check on the Bazel side in the meantime. Specifically, I'm thinking about the following edge case and am not sure how to handle it:

1. Bazel sends chunk #1, FinishWrite=false
2. Bazel sends chunk #2, FinishWrite=true
3. Server receives chunk #1, parses ResourceName, sees that the digest already exists in the cache
   Server immediately returns `WriteResponse{committed_size = digest_size}`
4. Bazel receives WriteResponse from the server. Since it has already sent all chunks, it thinks it has completed
   the upload, and then asserts `committed_size == chunker.offset`. This fails.

Firstly, is this scenario even possible? (I'm not sure whether client-streaming gRPC works the same way in Java as it does in Go, but IIUC the client does not need to wait for the server to receive and respond to each message before sending the next one.)

Secondly, is the only workaround on the cache side just to wait for Bazel to upload the entire stream, regardless of whether the digest already exists in cache or not, and then return the total length of the compressed stream so that it satisfies Bazel's committed_size expectation?

@bduffany (Contributor)

@mostynb FYI, I think this issue might affect bazel-remote as well:

https://github.com/buchgr/bazel-remote/blob/a3c6189b64cff12065750073b11a116c8c1ed00a/server/grpc_bytestream.go#L415-L421

It looks like on the first WriteRequest, if the digest already exists in cache, a WriteResponse with committed_size=uncompressed_size is effectively returned.

@mostynb (Contributor) commented Jan 27, 2022

Maybe the best we can do is update the REAPI spec to advise clients to ignore committed_size for compressed writes and to rely on the error status instead in that case?

Actually, I'm not sure how useful the early-exit mechanism is in practice. As you mentioned, the client calls Send until it thinks it has sent all the data, and only then calls CloseAndRecv to get the WriteResponse (at least in the Go bindings). At that point the client has sent all the data even if the server decided to return early. So instead of returning early, the server could discard all the received data, just count how much compressed data was sent, and return that number. So maybe we should instead update the REAPI spec to advise servers to do that for compressed-blobs writes instead of returning early?
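The "discard but count" behavior could be sketched like this (a standalone toy, with the chunks modeled as an in-memory list rather than a gRPC stream, and all names invented for illustration):

```java
import java.util.List;

public class DrainAndCount {
    // Server-side workaround: even when the blob already exists, drain every
    // chunk and report the total compressed bytes received, so the client's
    // committed_size == chunker.getOffset() check passes.
    static long handleCompressedWrite(List<byte[]> chunks, boolean blobAlreadyExists) {
        long committed = 0;
        for (byte[] chunk : chunks) {
            if (!blobAlreadyExists) {
                // a real server would persist the chunk here
            } // else: discard the chunk, but still count it below
            committed += chunk.length;
        }
        return committed; // becomes WriteResponse.committed_size
    }

    public static void main(String[] args) {
        // Two chunks totalling 374 compressed bytes, blob already cached.
        long committed = handleCompressedWrite(
            List.of(new byte[300], new byte[74]), /* blobAlreadyExists= */ true);
        System.out.println(committed);
    }
}
```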

@bduffany (Contributor)

instead of returning early the server could have discarded all the received data and just counted how much compressed data was sent and returned that number.

This is exactly the workaround that I'm currently implementing 👍

advise servers to do that for compressed-blobs writes instead of returning early

I think it's good advice for servers that want to be compatible with Bazel's current behavior, so it's probably worth mentioning this approach in the REAPI spec as a notable client quirk. I'm not sure whether it'd be the best advice though if/when Bazel removes the committed_size check, since I don't have a great intuition (nor any data) as to whether the early-exit mechanism provides significant savings in practice.

@mostynb (Contributor) commented Jan 27, 2022

Let's open a REAPI issue to get more input. Can you do that, or would you like me to?

@bduffany (Contributor)

I'll go ahead and open one.

@gregestren gregestren added team-Remote-Exec Issues and PRs for the Execution (Remote) team untriaged labels Jan 31, 2022
mostynb added a commit to mostynb/bazel-remote that referenced this issue Feb 19, 2022
This is an implementation of this REAPI spec update:
bazelbuild/remote-apis#213

Which is part of the solution to this issue:
bazelbuild/bazel#14654
mostynb added a commit to mostynb/bazel that referenced this issue Feb 19, 2022
This is an implementation of this REAPI spec update:
bazelbuild/remote-apis#213

Here's a bazel-remote build that can be used to test this change:
buchgr/bazel-remote#527

Fixes bazelbuild#14654
mostynb added a commit to mostynb/bazel that referenced this issue Feb 21, 2022
This is an implementation of this REAPI spec update:
bazelbuild/remote-apis#213

Here's a bazel-remote build that can be used to test this change:
buchgr/bazel-remote#527

Fixes bazelbuild#14654
@brentleyjones (Contributor)

@bazel-io fork 5.1

brentleyjones pushed a commit to brentleyjones/bazel that referenced this issue Feb 22, 2022
This is an implementation of this REAPI spec update:
bazelbuild/remote-apis#213

Here's a bazel-remote build that can be used to test this change:
buchgr/bazel-remote#527

Fixes bazelbuild#14654

Closes bazelbuild#14870.

PiperOrigin-RevId: 430167812
(cherry picked from commit d184e48)
Wyverald pushed a commit that referenced this issue Feb 22, 2022
This is an implementation of this REAPI spec update:
bazelbuild/remote-apis#213

Here's a bazel-remote build that can be used to test this change:
buchgr/bazel-remote#527

Fixes #14654

Closes #14870.

PiperOrigin-RevId: 430167812
(cherry picked from commit d184e48)

Co-authored-by: Mostyn Bramley-Moore <mostyn@antipode.se>
mostynb added a commit to buchgr/bazel-remote that referenced this issue Feb 23, 2022
This is an implementation of this REAPI spec update:
bazelbuild/remote-apis#213

Which is part of the solution to this issue:
bazelbuild/bazel#14654
@keith (Member) commented Jun 9, 2022

Still seeing this; removing --experimental_remote_cache_compression works around it.

@mostynb (Contributor) commented Jun 10, 2022

@keith: your remote cache might require an update. Is it something open source? If so, which one and which version?

@keith (Member) commented Jun 10, 2022

Using Google's 🙃

@coeuvre (Member) commented Jun 13, 2022

cc @bergsieker

@brentleyjones (Contributor)

This will be fixed in Bazel 5.3 😍.
