Long Scan Times from Additional HTTP Requests #538
Comments
Are you downloading normal or 'hidden' blogs? What are your settings? Any other relevant information?
Well, the missing piece of information was that the already downloaded files had been downloaded for another blog, not for the one being scanned. We'll change it so that in this case not only the current blog but all other blogs are checked for duplicates.
- For embedded photos, it only checked the current index file for duplicates.
- Now all index files and archives are checked, if enabled.
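For illustration, a rough Python sketch of that cross-blog duplicate check, assuming each blog keeps a plain-text index file listing the IDs of files it has already downloaded (the directory layout, file extension, and function names here are hypothetical, not TumblThree's actual implementation):

```python
from pathlib import Path

def load_downloaded_ids(index_dir: Path) -> set[str]:
    """Union the downloaded-file IDs from every blog's index file,
    not just the current blog's, so cross-blog duplicates are caught."""
    ids: set[str] = set()
    for index_file in index_dir.glob("*.index"):  # hypothetical index file naming
        ids.update(line.strip() for line in index_file.read_text().splitlines() if line.strip())
    return ids

def is_duplicate(file_id: str, known_ids: set[str]) -> bool:
    # A file counts as a duplicate if ANY tracked blog already downloaded it.
    return file_id in known_ids
```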
The issue has been fixed and closed. You can still comment, and feel free to ask to have the issue reopened if needed.
Describe the bug
I noticed that TumblThree's scan times are much higher than expected for blogs containing duplicates, so I decided to look into this.
TumblThree seems to send an HTTP request to ".media.tumblr.com/" for every duplicate it finds, creating a large number of additional HTTP requests. The initial JSON response from "/api/read/json?debug=1&num=..." appears to contain a unique file reference ID that could be pulled from "regular-body" instead, which would greatly reduce the number of requests needed to complete a scan and lower the server load. You can replicate this by enabling "force rescan" and using any HTTP logger of your choice. This issue affects rescans, reblogs, duplicates, etc., and I think fixing it would be useful for a lot of users. Sadly, I don't have the coding background to fix this myself, which is why I am raising this issue.
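For illustration, here is a minimal Python sketch of the suggested approach: parsing the file reference IDs out of "regular-body" in the JSON response instead of issuing one request per media file. The blog name, the regular-expression pattern for media URLs, and the ID format are assumptions based on the description above:

```python
import json
import re
import urllib.request

BLOG = "exampleblog"  # hypothetical blog name
API_URL = f"https://{BLOG}.tumblr.com/api/read/json?debug=1&num=50"

# Assumed shape of embedded media URLs; the captured group is the
# unique file reference ID mentioned in the report.
MEDIA_ID_RE = re.compile(r"//\d+\.media\.tumblr\.com/(\w+)/")

def fetch_posts(url: str) -> list[dict]:
    """Fetch the post list; the v1 API wraps the JSON in 'var tumblr_api_read = ...;'."""
    raw = urllib.request.urlopen(url).read().decode("utf-8")
    payload = raw[raw.index("{") : raw.rindex("}") + 1]  # strip the JS wrapper
    return json.loads(payload).get("posts", [])

def media_ids(posts: list[dict]) -> set[str]:
    """Collect unique file IDs from each post's 'regular-body',
    without touching *.media.tumblr.com at all."""
    ids: set[str] = set()
    for post in posts:
        ids.update(MEDIA_ID_RE.findall(post.get("regular-body", "")))
    return ids

already_downloaded: set[str] = set()  # IDs recorded during earlier scans
new_ids = media_ids(fetch_posts(API_URL)) - already_downloaded
print(f"{len(new_ids)} new files to fetch; duplicates skipped with no extra requests")
```

With something like this, a rescan of a blog full of duplicates would cost only the paginated JSON requests, not one extra request per duplicate file.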
To Reproduce
Steps to reproduce the behavior:
1. Enable "force rescan" in the settings.
2. Start an HTTP logger of your choice.
3. Scan a blog that contains duplicates and observe the additional requests to ".media.tumblr.com/".
Expected behavior
Fast scan times, requesting only the JSON file when the content consists of duplicates.