Page rendering using s3 and cloudfront notably slow #2279
Does enabling caching on your s3 disk help?
Have it enabled:
Takes about 6-7 seconds on local to render homepage.
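For reference, a minimal sketch of what "caching on your s3 disk" usually means in a Laravel 7 app: the Flysystem cached adapter, which requires the league/flysystem-cached-adapter package. The store, expiry, and prefix values below are placeholders, not taken from the thread.

```php
// config/filesystems.php (sketch; requires league/flysystem-cached-adapter)
's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),

    // Flysystem metadata caching; store/expire/prefix are illustrative values.
    'cache' => [
        'store'  => 'redis',
        'expire' => 600,
        'prefix' => 's3-meta',
    ],
],
```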
I am also having a similar problem here but I'm using Digital Ocean Spaces via the same basic s3 setup as @sg-modlab.
Same problem here. Using an S3-compatible cloud storage (no CDN/CloudFront). Enabling the S3 cache (…)
I wonder if this issue could have the same underlying cause as #2145.
Merely checking the image field is enough to push rendering time to 750 ms. Without that check in my Antlers template, page rendering time is 120 ms. Only 6 files in the S3 folder, btw.
I noticed the .meta folder is created on the S3 storage. I guess reading those YAML files remotely from S3 is why page rendering becomes so much slower. Maybe (optionally) store the .meta directory locally for remote/S3 storage?
Jonatan, your test is great to see and makes a lot of sense given what I have been experiencing while exploring things. Individual pages with a minimal amount of S3 assets take a very long time to render.
This issue has not had recent activity and has been marked as stale — by me, a robot. Simply reply to keep it open and send me away. If you do nothing, I will close it in a week. I have no feelings, so whatever you do is fine by me.
This is still a real issue. We built our own wrapper for this that stores all this information (.meta files and whether a file exists) for every Glide crop that's made. This way we don't have to be making GET requests to S3. We clear all the Glide crop info from Redis with cache tags if an image is modified. Is working with S3 + CDN + Glide still something that is on the horizon first-party?
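The wrapper itself isn't included in the thread; a hypothetical sketch of the approach described, assuming a taggable cache store such as Redis, might look like the following. The function names and cache keys are illustrative, not Statamic APIs.

```php
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Storage;

// Remember whether a generated Glide crop already exists on S3 so page
// renders don't need to make a GET/HEAD request to S3 for every image.
function glide_crop_exists(string $path): bool
{
    return Cache::tags(['glide-crops'])->rememberForever(
        'glide-crop-exists:'.md5($path),
        fn () => Storage::disk('s3')->exists($path)
    );
}

// When an image is modified, forget everything remembered about its crops.
function forget_glide_crops(): void
{
    Cache::tags(['glide-crops'])->flush();
}
```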
Re-opening.
We have an open issue for this that's on our roadmap: #5143
This issue has not had recent activity and has been marked as stale — by me, a robot. Simply reply to keep it open and send me away. If you do nothing, I will close it in a week. I have no feelings, so whatever you do is fine by me.
Please keep open kind robot
This issue has not had recent activity and has been marked as stale — by me, a robot. Simply reply to keep it open and send me away. If you do nothing, I will close it in a week. I have no feelings, so whatever you do is fine by me.
Bad bot! I have about 5 images on a page, and it's now taking ~8 seconds to boot.
This may have been resolved by #5143. Try setting up an s3 disk: https://statamic.dev/image-manipulation#custom-disk-cdn. If it's still an issue, please provide a GitHub repo demonstrating the problem.
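A hedged sketch of that custom-disk/CDN setup, as I read the linked docs page: a dedicated disk for the Glide cache, with the image_manipulation cache option pointing at it. The disk name and CDN URL are placeholders; check the docs for the exact options supported by your Statamic version.

```php
// config/filesystems.php — a dedicated disk for the Glide cache (sketch).
'glide' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url'    => 'https://cdn.example.com', // placeholder CDN/CloudFront URL
],

// config/statamic/assets.php — point Glide's cache at that disk.
'image_manipulation' => [
    'cache' => 'glide',
],
```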
Hi @jasonvarga, I recently started a new project and decided to ditch my custom Glide implementation in favour of the work that was done in #5725. Now with the separate disk & cache store for Glide it's perfect! Sadly, I noticed that every extra image on the page adds quite some load. With 7 images rolling through glide:generate I'm now up to 800-900 ms (about 100 ms per image) for a server response. After some digging I ended up at Statamic\Imaging\Attribute::from. No matter if I have the entry cached, it will copy the image from the CDN to my local disk. Commenting out this line brings me back to a very speedy 50 ms server response. Am I missing something in my configuration, or should I go about things differently? I'd really like to use Statamic's Glide implementation.
Does it still try to copy the image on the second page load?
Yes, it does. After a glide:clear it takes longer. Thing is, this function hits S3. Sadly, this kind of defeats the purpose of this whole caching layer if we still hit S3 with a copy for every image on every request. But we do need a local image to read out the width/height. Maybe we can cache these attributes as well? After Glide has cropped it, I don't see it changing again.
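A rough sketch of the "cache these attributes as well" idea: measure the generated image once and remember its dimensions, keyed by disk and path. The helper name and cache key are assumptions, not existing Statamic code.

```php
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Storage;

// Remember width/height of a generated crop so later requests don't have to
// copy the file down from S3 just to measure it again.
function cached_image_size(string $disk, string $path): array
{
    return Cache::rememberForever("glide-size:{$disk}:".md5($path), function () use ($disk, $path) {
        [$width, $height] = getimagesizefromstring(Storage::disk($disk)->get($path));

        return ['width' => $width, 'height' => $height];
    });
}
```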
Yeah, that looks like an oversight. Seems like the image attributes are being retrieved every time.
Are you using the glide:generate tag pair?
I'm using Blade; I think that works the same as the tag pair. I was thinking about making a little wrapper function to do something like (…)
Ah I see, this is because of the tag pair. I just made a small helper around glide:index for use in Blade. No more copies from S3 now, I'm good. That said, the tag pair should probably also benefit from the caching layer; it's the intended method for using Glide with Blade.
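The helper itself wasn't preserved in the thread. A hypothetical sketch of that kind of wrapper, assuming Statamic's fluent tag helper (Statamic::tag()) is available: it caches the generated Glide URL so Blade views don't trigger a copy from S3 on every request. The function name and cache key are made up for illustration.

```php
use Illuminate\Support\Facades\Cache;
use Statamic\Statamic;

// Hypothetical Blade helper: resolve a Glide URL once and remember it,
// instead of re-resolving (and re-copying) the source asset on every request.
function glide_url(string $src, array $params = []): string
{
    $key = 'glide-url:'.md5($src.json_encode($params));

    return Cache::rememberForever($key, function () use ($src, $params) {
        $tag = Statamic::tag('glide')->src($src);

        foreach ($params as $param => $value) {
            $tag->{$param}($value); // e.g. ->width(800)->fit('crop')
        }

        return (string) $tag->fetch();
    });
}
```

Usage in a Blade view would then be something like `<img src="{{ glide_url($image, ['width' => 800]) }}">`.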
Yeah, it should work. I was just confirming a theory. It only happens when using the tag pair, because we inject attributes, which does the copying (but could be modified to avoid copying every time).
Bug Description
Using the flysystem s3 driver and setting url to the CloudFront domain that was set up is working. Images are loading, but notably slowly. Glide was being used, so I removed those tags from the home page in case they caused the slowness. Still notably slow. Cleared all caches and no difference. I noticed the other issue about the assets list in the CP being slow with s3; I'm seeing that as well. It is unusably slow. Similarly, front-end rendering using s3/CloudFront is unusably slow.
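For context, a hedged sketch of the disk configuration the report describes: a standard s3 Flysystem disk with its url pointed at the CloudFront distribution. All values are placeholders.

```php
// config/filesystems.php (sketch; bucket and domain are placeholders)
'assets' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    // Serve asset URLs through the CloudFront distribution.
    'url'    => 'https://dxxxxxxxxxxxxx.cloudfront.net',
],
```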
How to Reproduce
Wire up a site with s3 and cloudfront.
Extra Detail
Environment
Statamic 3.0.0 Pro
Laravel 7.25.0
PHP 7.4.9
statamic/seo-pro 2.0.7
Install method: statamic/statamic