Misleading error message when incorrect CORS is set #217
Saw a docker image update and ran it. It had me do a complete reinstallation. When I logged in, none of my previous uploads appeared. Oh, well. Uploaded a simple text file. Decided to wait overnight before I tested its download. Signed in this morning and was immediately asked for an encryption key. Huh? Why? All was good when I left things last night. Pasted in the key, and a new warning appeared: That was unhelpful. Dismissed the warning and copied the URL for the text file I added yesterday. Pasted it into the browser tab and got the error in the previous post. FWIW... I'm also not seeing an active hotlink button (greyed out). End-to-end encryption to B2. No self-signed SSL, as I get HTTP/HTTPS errors when the app initializes and tries to open the sign-in webpage. My Gokapi domain runs through nginx proxy manager, if that matters. For example:
Thanks for the feedback! Are there any error messages in your browser's console?
You need to update the CORS rules for your Backblaze bucket to allow downloads from your domain. There should be a notice when starting Gokapi that the CORS rules are not set up correctly. I should probably add error handling for that as well.
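For reference, Backblaze B2 CORS rules are a list of JSON objects that can be applied in the bucket settings (web UI or CLI). A minimal sketch that would allow downloads from a Gokapi domain; the domain is a placeholder, and which operations you need depends on whether the bucket is accessed through the S3-compatible or the native B2 API:

```json
[
  {
    "corsRuleName": "allowGokapiDownloads",
    "allowedOrigins": ["https://gokapi.example.com"],
    "allowedOperations": [
      "s3_get",
      "s3_head",
      "s3_put",
      "b2_download_file_by_id",
      "b2_download_file_by_name"
    ],
    "allowedHeaders": ["*"],
    "maxAgeSeconds": 3600
  }
]
```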
OK... I initially set CORS for all domains. But I'm still having an issue with the capped upload. 100MB is fine, but 200MB/400MB yields this: My config.json includes this line:
Seems like that would cover 400MB? I also used the environment entry in my docker-compose.yaml file,
but it did not allow files > 100 MB. I welcome any suggestions! Thanks! BTW... If I log out and close the browser, when I next log in I'm always asked for the E2E code. Is this a cookie thing I keep dropping?
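For context, the upload limit can also be passed to the container as an environment variable. A sketch of the relevant compose fragment, assuming the variable is GOKAPI_MAX_FILESIZE with the value in MB (verify the exact name against the Gokapi documentation for your version):

```yaml
services:
  gokapi:
    image: f0rc3/gokapi:latest
    environment:
      # Assumed variable name, value in MB - check the Gokapi docs for your version
      - GOKAPI_MAX_FILESIZE=1024   # allow uploads up to 1 GB
```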
What is the output of the docker container? That might display an error message. And are there any error messages in the browser console? The encryption key for e2e encryption is stored in a local storage object, similar to cookies. Have you set your browser to delete those when it closes?
This means gateway timeout. Have you set up your reverse proxy to allow a higher timeout? A timeout of 300 seconds is recommended; see the Nginx example below. In your example the call took 90 seconds, which I assume is the cut-off. I will open a new ticket (#220) however, as it would be better if the call did not wait for the hashing and uploading.
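A minimal sketch of such a reverse proxy block, assuming plain nginx (in nginx proxy manager this kind of snippet usually goes under the proxy host's advanced/custom configuration); the address and values are placeholders:

```nginx
location / {
    proxy_pass              http://127.0.0.1:53842;   # Gokapi container, placeholder address
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size    0;      # do not let nginx cap the upload size
    proxy_connect_timeout   300s;   # the recommended 300 second timeout
    proxy_send_timeout      300s;
    proxy_read_timeout      300s;
}
```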
Do you mean once it is deleted? You can configure your bucket through the Backblaze interface to not keep old revisions.
I edited my reverse proxy settings to this:
With no success. Then I tried a different approach. In all my failed uploads, my browser URL was: Out of curiosity I tried the local IP instead, and discovered that my 200 MB and 1 GB files uploaded without error, and I was able to download them using the domain link. This is beyond my understanding, but perhaps you understand why. It seems that the domain was limited to 100 MB, while the local docker IP followed the config.json limit. At least I now have a method of sharing large files (>100 MB).
I assume this is still a problem with your reverse proxy. Try setting
as in the example above and let me know if that makes a difference.
Still an issue with 200MB. I switched browsers to Edge to see if that was a factor:
Error 524 is specific to Cloudflare. On the free plan it limits the timeout to 100 seconds - which is interesting, as I have never had any problems with Cloudflare. Once the change proposed in #220 is completed, it should solve the problem.
OK, thanks. Still seems off that ≤ 100 MB works via Cloudflare. I have heard that CF tunnels have a 100 MB issue, but I'm not using tunnels for this subdomain. And I am using B2 for many large chunked backups without issue. At least I can send large files locally.
The 100MB limit should not be a factor, as the uploads are chunked in (by default) 45MB chunks. Unless you increased the chunk size to more than 100MB? Also, I could reproduce the timeouts you are getting with Cloudflare; I will push a fix for it soon.
Thanks!
I pushed a fix; see if it works with the docker tag
Thanks, that was a bug introduced in a previous commit. Fixed in 103fc49; the new docker image should be up in about 30 minutes.
I'm still using the former image. I deleted everything except config.json and started with setup. The good news is that the 200 MB file arrived in the B2 bucket. I'm going for 1 GB next, and the downloads from B2 will be tested to ensure hashes match up.
Good news, but new issues... The good news is that with the latest dev image, I was able to upload a 9.6 GB file to B2. However, when I try to download it, it starts but fails shortly after (2 tries, 320 MB, 270 MB). No errors in docker or the logs. The browser log shows "completed". After this happened, I took the container down. All for now. Hoping to reconnect to B2, but the download failures are puzzling.
Does this still happen? You could also try to use the proxy option in the setup / cloud config to download the files through the server instead of client-side; maybe that prints out an error?
Regrets, but I'm going to bail out. I just restarted the container to retest the download, and fortunately it connected to B2, but it asked me to re-enter the encryption key, which it now rejects. I'm going to have quite the B2 bill this month from all the uploads. I'm not sure why this has been so difficult, as my primary backup software regularly uploads/downloads 10 GB files to B2 (not S3, though). That said, I'm also not using my domain for that, so the Cloudflare effect is less certain. I'll mention one other thing (I would have opened a new thread if this upload/download worked for me)… when I set a download limit to allow, say, 2 downloads, the file disappears from my Gokapi file list when the second download happens. This is undesirable, as
I so wanted this to work, to enable me to more easily “share” large encrypted files to a small group. Thanks for all your help. Best.
I can definitely understand the decision; thank you for all the valuable feedback, however! If the option to proxy the download is enabled, the file will be deleted within one hour after the last download is complete; otherwise, 24 hours after the last download started. In theory it is possible to edit an expired file up to 1 hour after expiration with the API, but this is not documented and might change in the future. There are currently no plans to let expired files linger.
Not sure what I've missed. Using B2 e2e without self-signed SSL.
Error: Get "./downloadFile?id=RJiQok3CJhCMaeD": net/http: fetch() failed: TypeError: NetworkError when attempting to fetch resource.
Log:
Sun, 08 Dec 2024 20:14:46 UTC Download: Filename Encrypted File, ID RJiQok3CJhCMaeD, Useragent Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:133.0) Gecko/20100101 Firefox/133.0
Same error using Microsoft Edge to download.
Sun, 08 Dec 2024 20:24:42 UTC Download: Filename Encrypted File, ID RJiQok3CJhCMaeD, Useragent Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36 Edg/131.0.0.0
Upload seemed fine, and file seems to be at B2.
Also noticed that my download count incremented even though the download failed.