forked from rclone/rclone
Merge master #1
Merged
Conversation
Due to a bug/misfeature in the Go standard library, described in golang/go#48723, the standard library binds to both IPv4 and IPv6 when passed 0.0.0.0 or ::0. This patch detects the bind address and forces the correct IP protocol. Fixes #6124 Fixes #6244 See: https://forum.rclone.org/t/issues-with-bind-0-0-0-0-and-onedrive-getting-etag-mismatch-when-using-ipv6/41379/
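The commit message above describes forcing one IP family from the bind address. A minimal sketch of that idea (hypothetical helper name, not rclone's actual code): `net.Listen` binds to both families for `"tcp"` with a wildcard address, but `"tcp4"`/`"tcp6"` restrict it to one.

```go
package main

import (
	"fmt"
	"net"
)

// networkForAddress picks the network string ("tcp4", "tcp6" or "tcp") to
// pass to net.Listen so the listener binds to a single IP family rather
// than both. Hypothetical sketch of the approach, not rclone's code.
func networkForAddress(addr string) string {
	host, _, err := net.SplitHostPort(addr)
	if err != nil {
		host = addr // no port present
	}
	ip := net.ParseIP(host)
	switch {
	case ip == nil:
		return "tcp" // hostname: let the resolver decide the family
	case ip.To4() != nil:
		return "tcp4" // IPv4 literal, including 0.0.0.0
	default:
		return "tcp6" // IPv6 literal, including ::
	}
}

func main() {
	fmt.Println(networkForAddress("0.0.0.0:8080")) // tcp4
	fmt.Println(networkForAddress("[::]:8080"))    // tcp6
}
```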
Elaborate exactly how server_command should be used in the configuration file
Clarify how single character remote names are interpreted in Windows (as drive letters) See: https://forum.rclone.org/t/issue-with-single-character-configuration-on-windows-with-rclone/
If the server returns the MIME type as application/octet-stream we assume it doesn't really know what the MIME type is. In this case this patch tries matching the MIME type from the file extension instead. This enables the use of servers (like OneDrive for Business) which don't allow the setting of MIME types on upload and have a poor selection of MIME types. Fixes #7259
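The fallback described above can be sketched with the standard library's `mime.TypeByExtension` (the helper name here is hypothetical, not rclone's identifier):

```go
package main

import (
	"fmt"
	"mime"
	"path"
)

// mimeTypeFor illustrates the fallback: when the server reports the
// generic application/octet-stream, try to guess a more specific type
// from the file extension instead. Hypothetical helper, not rclone's code.
func mimeTypeFor(serverType, remote string) string {
	if serverType != "application/octet-stream" {
		return serverType // server gave us something specific: trust it
	}
	if byExt := mime.TypeByExtension(path.Ext(remote)); byExt != "" {
		return byExt // guessed from the extension
	}
	return serverType // no better guess available
}

func main() {
	fmt.Println(mimeTypeFor("application/octet-stream", "report.pdf")) // application/pdf
	fmt.Println(mimeTypeFor("text/plain", "notes.txt"))                // text/plain
}
```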
Before this change, b2 would return an error when opening a link generated by `rclone link`. The following error occurs when the object path contains an ampersand that is not percent encoded: { "code": "bad_request", "message": "Bad character in percent-encoded string: 38 (0x26)", "status": 400 }
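Note that Go's `url.PathEscape` leaves `&` unescaped, since it is a legal sub-delimiter in a path segment, so a stricter encoder is needed to satisfy b2. A sketch of that kind of encoding (hypothetical helper, not rclone's implementation), escaping everything outside the RFC 3986 unreserved set:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeSegment percent-encodes every byte outside the RFC 3986
// unreserved set, so characters like '&' (0x26) that upset the b2 API
// are always escaped. Hypothetical sketch, not rclone's actual encoder.
func escapeSegment(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); i++ {
		c := s[i]
		switch {
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
			c == '-', c == '_', c == '.', c == '~':
			b.WriteByte(c) // unreserved: pass through
		default:
			fmt.Fprintf(&b, "%%%02X", c) // everything else: percent-encode
		}
	}
	return b.String()
}

func main() {
	fmt.Println(escapeSegment("foo&bar.txt")) // foo%26bar.txt
}
```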
…cy override Before this change the concurrency used for an upload was rather inconsistent:
- if the size was below `--backend-upload-cutoff` (default 200M), do a single part upload
- if the size was below `--multi-thread-cutoff` (default 256M), or when using streaming uploads (e.g. `rclone rcat`), do a multipart upload using `--backend-upload-concurrency` to set the concurrency used by the uploader
- otherwise do a multipart upload using `--multi-thread-streams` to set the concurrency

This change makes `--backend-upload-concurrency` the default concurrency. If `--multi-thread-streams` is set and larger than `--backend-upload-concurrency` then it is used instead. This means that if the user sets `--backend-upload-concurrency` it will be obeyed for all multipart/multi-thread transfers, and the user can override them all with `--multi-thread-streams`. See: #7056
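The new selection rule is simple enough to sketch in a few lines (hypothetical helper name, not rclone's code):

```go
package main

import "fmt"

// chooseConcurrency sketches the rule: default to the backend's
// --backend-upload-concurrency, but let a larger --multi-thread-streams
// value override it. Hypothetical helper, not rclone's actual code.
func chooseConcurrency(uploadConcurrency, multiThreadStreams int) int {
	if multiThreadStreams > uploadConcurrency {
		return multiThreadStreams // explicit larger override wins
	}
	return uploadConcurrency // otherwise the backend default is obeyed
}

func main() {
	fmt.Println(chooseConcurrency(4, 16)) // 16: --multi-thread-streams wins
	fmt.Println(chooseConcurrency(4, 2))  // 4: backend concurrency is the floor
}
```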
This adds a private and public key to the SFTP SSH test so that it works when it doesn't have access to my ssh agent!
This also fixes the integration tests which is why we didn't notice this before!
This adds a :writeback tag to upstreams. If set on a single upstream then it writes back objects not found into that upstream. Fixes #6934
* cmd: refactor and use sysdnotify in more commands Fixes #5117
Added rc support for the flags recently introduced in #6971. createEmptySrcDirs ignoreListingChecksum resilient
Before this change, bisync ignored the dryRun parameter (only when specified via the rc.) This change fixes the issue, so that the dryRun rc parameter is equivalent to the --dry-run flag.
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4. - [Release notes](https://github.com/actions/checkout/releases) - [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md) - [Commits](actions/checkout@v3...v4) --- updated-dependencies: - dependency-name: actions/checkout dependency-type: direct:production update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] <support@github.com>
ChangeNotify has been broken on the compress backend for a long time! Before this change it was wrapping the file names received rather than unwrapping them to discover the original names. It is likely that ChangeNotify was still working adequately for users, though, as the VFS just uses the directories rather than the file names.
Sometimes opendrive reports "403 Folder is already deleted" on directories which should exist. This might be a bug in opendrive or in rclone, however we work around it here sufficiently to get the tests passing.
Storj requires a minimum duration of 1 minute for the link expiry so increase what we are asking for from 1 minute to 2 minutes.
Zoho are now responding to Range requests properly. The remnants of our old workaround were breaking the integration tests, so this removes them.
For uploads which are coming from disk, going to disk, or going to a backend which doesn't need to seek except for retries, this doesn't buffer the input. This dramatically reduces rclone's memory usage. Fixes #7350
The uptobox service hasn't been running since 20 September 2023. This removes it from the integration tests to save noise.
The free account has a very ungenerous limit of 1000 API calls per day, and the full integration test suite breaches that, so limit the integration tests to just the backend.
Commit 33376bf (dropbox: fix missing encoding for rclone purge) fixed the problem but made the integration tests fail. This fixes the problem properly by making sure we send the encoded or non-encoded root to the right places.
Apparently gcs doesn't return an S3 compatible result when using versions. In particular it doesn't return a NextKeyMarker - this means rclone loops and fetches the same page over and over again. This patch detects the problem and stops the infinite retries but it doesn't fix the underlying problem. See: https://forum.rclone.org/t/list-s3-versions-files-looping-bug/42974 See: https://issuetracker.google.com/u/0/issues/312292516
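A loop guard of the kind described above can be sketched like this (simplified stand-in types and hypothetical names, not rclone's code): stop paging when the server reports the listing as truncated but fails to advance the marker.

```go
package main

import "fmt"

// listPage is a simplified stand-in for one page of an S3 versions listing.
type listPage struct {
	NextKeyMarker string
	IsTruncated   bool
}

// countPages sketches the loop guard: stop when the server fails to
// advance the marker (as GCS's S3 shim does by omitting NextKeyMarker)
// instead of refetching the same page forever. Hypothetical names.
func countPages(fetch func(marker string) listPage) int {
	pages := 0
	marker := ""
	for {
		p := fetch(marker)
		pages++
		if !p.IsTruncated || p.NextKeyMarker == "" || p.NextKeyMarker == marker {
			return pages // done, or the marker is not advancing
		}
		marker = p.NextKeyMarker
	}
}

func main() {
	// A server that says "truncated" but never returns a marker would
	// previously cause an infinite loop; now we stop after one page.
	broken := func(marker string) listPage { return listPage{IsTruncated: true} }
	fmt.Println(countPages(broken)) // 1
}
```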
Without this, requests like PROPFIND, issued from a browser, fail.
…ails Before this change, if a multithread upload failed (let's say the source became unavailable) rclone would finalise the file first before aborting the transfer. This caused the partial file to be written which would overwrite any existing files. This was fixed by making sure we Abort the transfer before Close-ing it. This updates the docs to encourage calling of Abort before Close and updates writerAtChunkWriter to make sure that works properly. This also reworks the tests to detect this and to make sure we upload and download to each multi-thread capable backend (we were only downloading before which isn't a full test). Fixes #7071
…-cutoff Before this change the b2 servers would complain as this was only a single part transfer. This was noticed by the new integration tests for server side chunked copy.
Before this change, streaming files an exact multiple of the chunk size would cause rclone to attempt to stream a 0 sized chunk which was rejected by the b2 servers. This bug was noticed by the new integration tests for chunked streaming.
This puts in a workaround for the tests also
This is a workaround to make the new multipart upload integration tests pass.
The following command will block for 60s (the default) when the network is slow or unavailable:

```
rclone --contimeout 10s --low-level-retries 0 lsd dropbox:
```

This change makes it time out after the expected 10s. Signed-off-by: rkonfj <rkonfj@gmail.com>
…7455 Before this change serve s3 would return NoSuchKey errors when a non-existent prefix was listed. This change fixes it to return an empty list like AWS does. This was discovered by the full integration tests.
Before this change overwriting an existing file with a 0 length file didn't update the file size. This change corrects the issue and makes sure the file is truncated properly. This was discovered by the full integration tests.
…2/ fork Before this change smb drives sometimes showed a fraction of the correct size using `rclone about`. This fixes the problem by switching the upstream library from github.com/hirochachacha/go-smb2 to github.com/cloudsoda/go-smb2 which has a fix for the problem. The new library passes the integration tests. Fixes #6733
Before this change PartialUploads was not set. This is clearly wrong since incoming files are visible on the smb server. Setting PartialUploads fixes the multithread upload modtime problem as it uses the PartialUploads flag as an indication that it needs to set the modtime explicitly. This problem was detected by the new TestMultithreadCopy integration tests Fixes #7411
Before this change ListR was unconditionally enabled on onedrive. This caused performance problems for some uses, so now the --onedrive-delta flag has to be supplied. Fixes #7362