
[feature] S3 browser caching #864

Closed
LittleFox94 opened this issue Sep 28, 2022 · 7 comments
Labels
enhancement New feature or request

Comments

@LittleFox94
Contributor

Current S3 handling prevents any caching of media attachments, because every request for a piece of media yields a new URL (it includes a signature for auth/authz). This is especially problematic with browsers.

There is at least one way to work around this problem: instead of generating a URL that is valid "for $x minutes starting now", we could generate one that is valid until a semi-fixed timestamp. That semi-fixed timestamp could be something like "next hour from now, minute 42 second 23", giving us a URL that is valid for up to one hour and does not change, regardless of when you request it. See here for more: https://stackoverflow.com/a/56543573/2537210

Proposal

When media is requested by a local, authenticated user, we hand out signed URLs with a semi-fixed expiry of "$tomorrow $fixed-time". This URL is stored in the database, and every local, authenticated user requesting that media gets redirected to it.

With this we would have an up-to-24h cache, which is much better than no cache at all.

@tsmethurst
Contributor

That seems sensible!

@NyaaaWhatsUpDoc
Member

I started brainstorming this and I don't think we even need it to be handled by the database. Just a TTL cache of presigned fetch URLs would do the trick.

@LittleFox94
Contributor Author

Where would that cache be stored?

@NyaaaWhatsUpDoc
Member

In memory. We don't support running as a cluster, so it's not an issue. Worst case, the instance gets restarted, we lose the cache, and we regenerate the URLs when necessary.

Or is there something I'm not seeing here?

@LittleFox94
Contributor Author

> We don't support running as a cluster so it's not an issue.

Oh, we don't? What's preventing that if I have the database and media storage external (postgresql and S3)?

@NyaaaWhatsUpDoc
Member

A few things: mainly caching, and the way the go-fed/activity library is designed and how we have implemented ourselves around it. We'd have to rearchitect a lot of things to support it.

@tsmethurst tsmethurst added the enhancement New feature or request label Oct 2, 2022
theSuess added a commit to theSuess/gotosocial that referenced this issue Dec 2, 2022
Implements superseriousbusiness#864 and should speed up s3 based installations by a lot.

With more static urls, we can then also implement superseriousbusiness#1026 for even
better performance when used in conjunction with CDNs
NyaaaWhatsUpDoc pushed a commit that referenced this issue Dec 2, 2022
Implements #864 and should speed up s3 based installations by a lot.

With more static urls, we can then also implement #1026 for even
better performance when used in conjunction with CDNs
@tsmethurst
Contributor

Closed by #1194.
