Allow the storage to work in a Docker environment #124
This is expected behaviour, since the `AWS_S3_PUBLIC_URL` setting is there to serve content through a CDN, and CDNs are pretty public, so that's what you want there. However, I can see your use case, which seems valid. Unfortunately, I can't change this behaviour without breaking all current library users: since `AWS_S3_BUCKET_AUTH` defaults to `True`, making `AWS_S3_PUBLIC_URL` respect `AWS_S3_BUCKET_AUTH` would switch all existing users over to bucket auth. This is further complicated by #114. I've just got back from holiday and have a million work emails, so I can't really devote any time right now to figuring out an optimal solution. Suggestions welcome, or I'll take a look myself in a few weeks. Meanwhile, your solution seems perfectly fine.
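To spell the current behaviour out as code, roughly (a sketch of the rules above, not the library's actual implementation):

```python
# Rough sketch of how URL generation currently behaves (illustrative only).
def file_url(name, public_url, bucket_auth, signed_url_for, plain_url_for):
    if public_url:
        # AWS_S3_PUBLIC_URL always wins and is never signed (the CDN use case).
        return public_url.rstrip("/") + "/" + name
    if bucket_auth:
        # AWS_S3_BUCKET_AUTH defaults to True, so most users get pre-signed URLs.
        return signed_url_for(name)
    return plain_url_for(name)
```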
We have the same problem over here; it would be really nice to have a fix for this.
You could either add a parameter for this, or turn `AWS_S3_BUCKET_AUTH` into an enum:

```python
from enum import Enum, auto


class BucketAuthType(Enum):
    NONE = 0
    PRIVATE_URLS = auto()
    PRIVATE_AND_PUBLIC_URLS = auto()
```

Then you can test the condition: if the setting is still a plain bool, map it onto the enum for backwards compatibility:

```python
if isinstance(AWS_S3_BUCKET_AUTH, bool) and AWS_S3_BUCKET_AUTH:
    AWS_S3_BUCKET_AUTH = BucketAuthType.PRIVATE_URLS
```
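The storage backend could then branch on the new value; hypothetically, something like:

```python
# Hypothetical helper inside the storage backend: keep signing URLs even when
# a public URL is configured, but only for the new enum value.
def should_sign_public_urls(bucket_auth):
    return bucket_auth is BucketAuthType.PRIVATE_AND_PUBLIC_URLS
```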
That's a good solution, but importing an enum into a Django settings file is a bit weird. Maybe make the type of … ?
The assumption that:

… is wrong for Docker, Amazon S3 Access Points, CNAMEs pointing straight at buckets while uploading via local endpoints, etc. Personally, an extra setting would work for me.
Hi @etianen,

A pleasure to talk to you. I used `django-reversion` in the past, and now I have found `django-s3-storage` very useful for my new project. I think you are a great developer, and I am very happy to submit an issue here.

I usually set up my projects for local development using `docker-compose`, making them as close to production as possible. I will be using MinIO (S3 compatible), so basically the (simplified) setup is a `django` service plus a `minio` service in the same `docker-compose` project.

Note that in a setup like this, the Docker services communicate through their own private network, with its own DNS resolution, which is not accessible from the host.
So `django` can access `minio` at `http://minio:9000`, but from the host it has to be accessed through `http://localhost:9000`.

I thought, well, this is fine: I can just set `AWS_S3_ENDPOINT_URL` to `http://minio:9000` and `AWS_S3_PUBLIC_URL` to `http://localhost:9000`.
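In Django settings that is something like:

```python
# settings.py for local development with docker-compose (MinIO credentials omitted)
AWS_S3_ENDPOINT_URL = "http://minio:9000"    # how the django container reaches MinIO
AWS_S3_PUBLIC_URL = "http://localhost:9000"  # how the browser on the host reaches MinIO
```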
The problem is that setting `AWS_S3_PUBLIC_URL` skips the generation of pre-signed URLs completely, so right now it is not possible to use private buckets in `docker-compose` setups.

I wonder whether this should be expected behavior. If not, I can submit a PR to fix it. Your call!
For now, I am working around this limitation by creating a custom storage myself, injecting this mixin into any of the storage backends provided by this project. It is ugly, but it does the job. Note that I read my own `AWS_S3_PUBLIC_BASE_URL` setting instead of `AWS_S3_PUBLIC_URL`:
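(A simplified sketch of the idea rather than the original mixin: it assumes `AWS_S3_PUBLIC_BASE_URL` is a plain custom Django setting, that the AWS credentials are available in Django settings, and it ignores key prefixes and expiry configuration. Class names are illustrative.)

```python
import boto3
from django.conf import settings
from django_s3_storage.storage import S3Storage


class PublicBaseUrlMixin:
    """Sketch of the workaround: pre-sign object URLs against a public base URL
    (reachable from the host/browser) instead of the internal AWS_S3_ENDPOINT_URL
    used for uploads inside the Docker network."""

    @property
    def public_s3(self):
        # A separate client pointed at the public endpoint, so the SigV4 signature
        # is computed for the host the browser will actually request.
        # Assumes credentials are configured via Django settings.
        return boto3.client(
            "s3",
            endpoint_url=settings.AWS_S3_PUBLIC_BASE_URL,
            aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
            aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
        )

    def url(self, name):
        # Simplification: the object key is taken to be the file name as-is
        # (no key prefix handling), and the expiry is hard-coded.
        return self.public_s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": settings.AWS_S3_BUCKET_NAME, "Key": name},
            ExpiresIn=3600,
        )


class DockerS3Storage(PublicBaseUrlMixin, S3Storage):
    pass
```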