
Memory issue with v8.1.1 on macOS Monterey 12.1 on a 13" MacBook Pro M1 #12695

Closed

RobCruzo opened this issue Dec 21, 2021 · 8 comments · Fixed by #12934

Comments

@RobCruzo

[Screenshot]

(Sorry, it is in German, but I guess you can figure it out anyways.)

I connected via OpenStack Swift Keystone 3 and tried to download a 100GB file.

Reading through other issues about memory hogging, I tried the transfer with the segmented transfer option deactivated, which did not bring any improvement.

@skull-squadron

(Phff, German is so anglicized. /s)

I'm using x86 Monterey (16 GiB RAM) for a 94 GiB FTP job (~100k directories, ~1M files).

It's currently at 49.58 GiB virtual size, 1 GiB RSS, and 51 GiB of swap. There is minimal memory pressure from anything else.

The growth started after the initial directory scan and it keeps creeping up as the job progresses.

Basic CS math says that even at 2M files, with a whopping 256 bytes per file or directory structure on a 64-bit platform, it shouldn't use more than 0.5 GiB of RAM. Anything over 1 GiB for that is likely a bug. It looks like one or more memory leaks, though it's difficult to say what is and isn't the same bug. It probably needs profiling.
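
For reference, here is the back-of-envelope arithmetic as a quick sketch (the 256 bytes per entry is an assumed figure, not Cyberduck's actual per-item overhead):

```java
// Rough estimate of the memory needed to hold the transfer listing in RAM.
// Assumes ~256 bytes of bookkeeping per file or directory entry on a 64-bit JVM.
public class TransferMemoryEstimate {
    public static void main(String[] args) {
        long entries = 2_000_000L;      // files plus directories, rounded up
        long bytesPerEntry = 256L;      // assumed per-entry overhead
        double gib = entries * bytesPerEntry / (1024.0 * 1024.0 * 1024.0);
        System.out.printf("~%.2f GiB for the whole listing%n", gib); // prints ~0.48 GiB
    }
}
```

So tens of GiB of virtual size is far beyond what the bookkeeping for a listing of this scale should need.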

@dnisbetjones

I did a large transfer yesterday of an approx. 100 GB collection of files & folders and noticed this runaway memory usage as well. I had to stop the transfer every 10-15 GB transferred, let the swap usage clear up, and start again.

This is Monterey 12.1 on an M1 MacBook Air.

@colinblake

I am seeing the same on my M1 MacBook Pro running macOS 12.1 with Cyberduck 8.2.3. I was also seeing it with 8.2.2. It happens when downloading 50-100 GB from S3. For example, here is a current transfer:
[Screenshot of the in-progress transfer]

@dkocher dkocher added this to the 8.2.4 milestone Mar 3, 2022
dkocher added a commit that referenced this issue Mar 4, 2022
Lower memory pressure when submitting many tasks with limited queue s…
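
The commit title points at bounding the task submission queue. As a general illustration of that technique (a sketch of the idea, not the actual Cyberduck change), a ThreadPoolExecutor with a bounded work queue and a CallerRunsPolicy caps how many pending tasks, and how much heap they hold, can pile up at once:

```java
import java.util.concurrent.*;

// Sketch: submit many tasks without letting the pending queue grow without bound.
// With an unbounded queue, every queued task sits on the heap until a worker runs it;
// the bounded queue plus CallerRunsPolicy throttles the producer instead.
public class BoundedSubmitExample {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = new ThreadPoolExecutor(
                4, 4,                                        // fixed pool of 4 workers
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(100),      // at most 100 queued tasks
                new ThreadPoolExecutor.CallerRunsPolicy());  // when full, the caller runs the task itself
        for (int i = 0; i < 10_000; i++) {
            pool.execute(() -> {
                // placeholder for downloading one segment or file
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```
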
@dloeckx

dloeckx commented Apr 22, 2022

Unfortunately, I am using 8.3.2 and still encountering this issue. Monterey 12.3.1, MacBook Pro (M1 Pro), Cyberduck version 8.3.2 (37449). I am downloading tens of GB from S3, typically larger files (either 250 KB or 20 MB).

@claurier

I have a similar memory issue with version 8.4.2 on a MacBook Pro (16 inch 2019) with Catalina 10.15.7.

I need to download some backups and it just eats up all my memory until there is none left. I tried disabling Preferences > Transfers > General > Segmented downloads... with no luck.

[Screenshot 2022-08-11 at 23 27 55]

@ylangisc
Contributor


@claurier What's the protocol (also AWS S3?) in your case?

@claurier

claurier commented Aug 12, 2022

I am connecting to a cloud service called Hubic to download my data. It seems to use HTTPS through their API (similar to S3, I believe).

[Screenshot 2022-08-12 at 14 10 28]

As a quick workaround for now, please let me know if there is a version that I could test to check whether it works.

@dkocher
Contributor

dkocher commented Oct 30, 2024

Duplicate of #12695.
