
Misleading error message when incorrect CORS is set #217

Closed
truthsword opened this issue Dec 8, 2024 · 23 comments

@truthsword commented Dec 8, 2024

Not sure what I've missed. Using B2 e2e without self-signed SSL.

Error: Get "./downloadFile?id=RJiQok3CJhCMaeD": net/http:fetch() failed: TypeError:NetworkError when attempting to fetch resource.

Log:
Sun, 08 Dec 2024 20:14:46 UTC Download: Filename Encrypted File, ID RJiQok3CJhCMaeD, Useragent Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:133.0) Gecko/20100101 Firefox/133.0

Same error using Microsoft Edge to download.

Sun, 08 Dec 2024 20:24:42 UTC Download: Filename Encrypted File, ID RJiQok3CJhCMaeD, Useragent Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36 Edg/131.0.0.0

Upload seemed fine, and file seems to be at B2.

Also noticed that my download count incremented even though the download failed.

@truthsword (Author) commented Dec 10, 2024

Saw a Docker image update and ran it. It had me do a complete reinstallation. When I logged in, none of my previous uploads appeared. Oh, well. Uploaded a simple text file. Decided to wait overnight before I tested its download.

Signed in this morning and immediately was asked for an encryption key. Huh? Why? All was good when I left things last night. Pasted in the key, and a new warning appeared:

That was unhelpful. Dismissed the warning and copied the URL for the text file I added yesterday. Pasted it into the browser tab, and got the error in previous post.

FWIW... I'm also not seeing an active hotlink button (greyed out).

End-to-end encryption to B2. No self-signed SSL, as I get HTTP/HTTPS errors when the app initializes and tries to open the sign-in webpage.

My gokapi domain runs through nginx proxy manager, if that matters. For example:
https://go.domain.co >> http://192.168.1.42:53842

@Forceu (Owner) commented Dec 10, 2024

Thanks for the feedback! Are there any error messages in your browser's console?
Regarding the hotlinks being greyed out: when using e2e encryption, hotlinking is not possible, as the data needs to be decrypted in the browser first.

@truthsword (Author)

When I try to download

@Forceu (Owner) commented Dec 10, 2024

You need to update the CORS rules for your Backblaze bucket to allow downloads from your domain. There should be a notice when starting Gokapi that the CORS rules are not set up correctly. I should probably add error handling for that as well.
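For illustration, a minimal sketch of what such a B2 CORS rule could look like (the rule name and origin are placeholders based on the domain mentioned in this thread; the exact allowedOperations depend on whether the bucket is accessed via the B2 native API or the S3-compatible API, so treat this as an assumption, not Gokapi's documented setup):

```json
[
  {
    "corsRuleName": "allowGokapiDownloads",
    "allowedOrigins": ["https://go.domain.co"],
    "allowedOperations": ["b2_download_file_by_id", "b2_download_file_by_name"],
    "allowedHeaders": ["*"],
    "exposeHeaders": [],
    "maxAgeSeconds": 3600
  }
]
```

If the bucket is accessed through the S3-compatible API, the operations would instead be entries such as "s3_get" and "s3_head".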

@Forceu changed the title from "NetworkError when attempting to fetch resource." to "Misleading error message when incorrect CORS is set" on Dec 10, 2024
@truthsword (Author)

OK... I've now set CORS to allow all domains. But I'm still having an issue with a capped upload size: 100 MB is fine, but 200 MB/400 MB yields this:

My config.json includes this line:

"MaxFileSizeMB": 10240000,

Seems like that would cover 400MB?

I also tried the environment variable in my Docker compose.yaml file,

environment:
      - GOKAPI_MAX_FILESIZE=10240000

but it did not allow files > 100 MB.

I welcome any suggestions! Thanks!

BTW... If I log out and close the browser, when I next log in I'm always asked for the E2E code. Is this a cookie thing I keep dropping?

@Forceu (Owner) commented Dec 11, 2024

What is the output of the docker container? That might display an error message. And are there any error messages in the browser console?

The encryption key for e2e encryption is stored in a local storage object, similar to cookies. Have you set your browser to delete them after closing the browser?

@truthsword (Author)

This was a 200 MB file. A previous 100 MB landed without an issue. A subsequent 10 kB file uploaded/downloaded as expected.

> What is the output of the docker container? That might display an error message. And are there any error messages in the browser console?

docker compose logs seems fine:

The browser console output ends in a 504.

If I use the ID for the 200 MB file upload, its download button appears, yet when I click the download button, I get this:

When I look at the B2 bucket I see two 200 MB files within minutes of each other. Not sure why.

> The encryption key for e2e encryption is stored in a local storage object, similar to cookies. Have you set your browser to delete them after closing the browser?

I added a cookie exception that should resolve this.

Separate question... When the last allotted download happens, the file disappears from the Gokapi web interface, yet the file seems to remain at B2 (or maybe I've not been patient enough). Is this intended?

@Forceu (Owner) commented Dec 11, 2024

> The browser console output ends in a 504.

This means a gateway timeout. Have you set up your reverse proxy to allow a higher timeout? A timeout of 300 seconds is recommended, see the Nginx example. In your example the call took 90 seconds, which I assume is the cut-off. I will open a new ticket (#220) however, as it would be better if the call did not wait for the hashing and uploading.
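As a sketch, the relevant nginx directives could look like this (the upstream address is the one mentioned earlier in this thread, and the 300-second values follow the recommendation above; this is not the exact example from the Gokapi docs):

```nginx
# Proxy Gokapi with timeouts raised to the recommended 300 seconds,
# so long-running upload/hash calls are not cut off by the proxy.
location / {
    proxy_pass            http://192.168.1.42:53842;
    proxy_connect_timeout 300;
    proxy_send_timeout    300;
    proxy_read_timeout    300;
    send_timeout          300;
}
```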

> Separate question... When the last allotted download happens, the file disappears from the Gokapi web interface, yet the file seems to remain at B2 (or maybe I've not been patient enough). Is this intended?

Do you mean once it is deleted? You can configure your bucket through the Backblaze interface to not keep old revisions.

@truthsword (Author)

I edited my reverse proxy settings to this:

proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 30m;
send_timeout 300;

That brought no success, so I tried a different approach.

In all my failed uploads, my browser URL was:
https://gokapi.domain.com:53842/admin

Out of curiosity, I tried the local IP instead:
http://192.168.1.42:53842/admin

and discovered that my 200 MB and 1 GB files uploaded without error, and I was able to download them using the domain link.

This is beyond my understanding, but perhaps you understand why. It seems that the domain was limited to 100 MB, and the local docker IP followed the config.json limit.

At least I now have a method of sharing large files (>100 MB).

@Forceu (Owner) commented Dec 12, 2024

I assume this is still a problem with your reverse proxy. Try setting

        client_max_body_size 500M;
        client_body_buffer_size 128k;

as in the example above, and let me know if that makes a difference.

@truthsword (Author)

> I assume this is still a problem with your reverse proxy.

That may be, but the 100 MB file goes up without incident.

> Try setting
>
>         client_max_body_size 500M;
>         client_body_buffer_size 128k;
>
> as in the example above and let me know if that makes a difference

Still an issue with 200 MB. I changed browsers to Edge to see if that was a factor:

@Forceu (Owner) commented Dec 12, 2024

Error 524 is specific to Cloudflare. On the free plan it limits the timeout to 100 seconds, which is interesting, as I have never had any problems with Cloudflare. Once the change proposed in #220 is completed, it should solve the problem.

@truthsword (Author) commented Dec 12, 2024

OK, thanks. Still seems off that ≤ 100 MB works via Cloudflare.

I have heard that CF tunnels have a 100 MB issue, but I'm not using tunnels for this subdomain.

And I am using B2 for many large chunked backups without issue.

At least I can send large files locally.

@Forceu (Owner) commented Dec 12, 2024

The 100 MB limit should not be a factor, as the uploads are chunked in (by default) 45 MB chunks. Unless you increased the chunk size to more than 100 MB?
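A quick sketch of that arithmetic (the 45 MB default is taken from this comment; the function name is illustrative, not Gokapi's actual code):

```python
# With 45 MB chunks, every individual request body stays well under
# Cloudflare's 100 MB per-request cap, regardless of total file size.
CHUNK_SIZE = 45 * 1024 * 1024  # default chunk size in bytes

def chunk_sizes(total_bytes: int, chunk_size: int = CHUNK_SIZE) -> list[int]:
    """Return the request-body size of each chunk for a file of total_bytes."""
    sizes = []
    remaining = total_bytes
    while remaining > 0:
        size = min(chunk_size, remaining)
        sizes.append(size)
        remaining -= size
    return sizes

two_hundred_mb = 200 * 1024 * 1024
print(len(chunk_sizes(two_hundred_mb)))                      # number of requests
print(max(chunk_sizes(two_hundred_mb)) < 100 * 1024 * 1024)  # all under the cap?
```

So a 200 MB upload is five requests of at most 45 MB each, which is why the per-request size cap alone should not explain the failures.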

Also, I could reproduce the timeouts you are getting with Cloudflare; I will push a fix for it soon.

@truthsword (Author)

Thanks!

@Forceu (Owner) commented Dec 14, 2024

I pushed a fix; please check whether the Docker tag gokapi:latest-dev works for you.

@truthsword (Author) commented Dec 14, 2024

Got this on two browsers:


When I manually enter my e2e key, the Save button is unresponsive (both browsers).

EDIT1: Starting over. Cleared out my persistent volumes. Will run setup again.

@Forceu (Owner) commented Dec 14, 2024

> When I manually enter my e2e key, the Save button is unresponsive (both browsers).

Thanks, that was a bug introduced in a previous commit. Fixed in 103fc49; the new Docker image should be up in about 30 minutes.

@truthsword (Author)

I'm still using the former image. I deleted everything except config.json and started with setup. The good news is that the 200 MB file arrived in the B2 bucket. I'm going for 1 GB next, and the downloads from B2 will be tested to ensure the hashes match up.
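A minimal sketch of that hash check (the choice of SHA-256 and the file names are illustrative; nothing here is mandated by Gokapi):

```python
import hashlib

def file_digest(path: str, bufsize: int = 1024 * 1024) -> str:
    """Stream a file through SHA-256 in 1 MB chunks and return the hex digest,
    so even multi-GB files are hashed without loading them into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

# The download is intact if the digests match:
# file_digest("original.bin") == file_digest("downloaded.bin")
```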

@truthsword (Author)

Good news, but new issues...

The good news is that with the latest dev image, I was able to upload a 9.6 GB file to B2. However, when I try to download it, the download starts but fails shortly after (2 tries: 320 MB, 270 MB). No errors in Docker or the logs. The browser log shows "completed".

After this happened, I took the container down (docker compose down) and then brought it back up... however, it timed out trying to connect to B2. Repeated this 2x without a B2 connection. Not sure if Gokapi will try later to establish the connection.

All for now. Hoping to reconnect to B2, but the download failures are puzzling.

@Forceu (Owner) commented Dec 16, 2024

Does this still happen? You could also try using the proxy option in the setup / cloud config to download the files through the server instead of client-side; maybe that prints out an error?

@truthsword (Author)

Regrets, but I'm going to bail out. I just restarted the container to retest the download; fortunately it connected to B2, but it asked me to re-enter the encryption key, which it now rejects. I'm going to have quite the B2 bill this month from all the uploads. I'm not sure why this has been so difficult, as my primary backup software regularly uploads/downloads 10 GB files to B2 (not S3, though). That said, I'm also not using my domain for that, so the Cloudflare effect is less certain.

I'll mention one other thing (I would have opened a new thread if this upload/download worked for me)… when I set a download limit to allow, say, 2 downloads, the file disappears from my Gokapi file list when the second download happens. This is undesirable, as

  1. the file still remains on B2 until it “expires” (presumably), so I am unable to confidently delete it from B2, as its filename is encrypted and not easily identified, and…
  2. I may want to extend its allowable download count from its present “zero” status. But as I cannot see the file entry in Gokapi, I'm unable to do that. Perhaps I missed something in the documentation that would allow me to manually delete these files, or edit their allowable downloads after the download limit is used up.

I so wanted this to work, to enable me to more easily “share” large encrypted files with a small group. Thanks for all your help. Best.

@Forceu (Owner) commented Dec 17, 2024

I can definitely understand the decision, but thank you for all the valuable feedback!

If the option to proxy the download is enabled, the file will be deleted within one hour after the last download is complete; otherwise 24 hours after the last download started. In theory it is possible to edit an expired file up to 1 hour after expiration with the API, but this is not documented and might change in the future. There are currently no plans to let expired files linger.
