
How to maintain contributed images #293

Open
repeatedly opened this issue Apr 4, 2019 · 13 comments
Labels
help wanted We need your help!

Comments

@repeatedly
Member

repeatedly commented Apr 4, 2019

We have contributed images and pending PRs for new destinations.
Currently, we have problems with their maintenance.

  1. We, the repository authors, don't have knowledge of some destinations, e.g. logzio, stackdriver, kinesis, so we can't answer some of the issues and questions. In addition, we can't judge whether patches for these destinations are correct or not.
  2. Merging patches and releasing new images for contributed images is delayed because we need to wait for patches from contributors. This is not good for users.
  3. Some contributed images are no longer maintained. There are several reasons: the original author doesn't respond to issues, the account has been changed or deleted, etc. Honestly, we want to purge such images from this repository...

We want to know how to avoid these problems. One idea is adding contributors to this repository, but that increases security concerns such as backdoored images. We would need to add only trusted users.

Any feedback is welcome.

@repeatedly repeatedly added the help wanted We need your help! label Jun 10, 2019
@perk

perk commented Sep 19, 2019

Hi @repeatedly, it seems like there are three categories of images:

  1. open source projects, like elasticsearch, kafka, or just fluentd (forwarder)
  2. destinations from big players and huge userbases (like stackdriver or cloudwatch)
  3. other destinations, like papertrail, logzio or (still waiting) logsense 😉

I'd keep the first category.

Dropping 2 and 3 will make things easier to maintain for sure but will also create a barrier for newcomers.

The 2nd category is tricky because people are interested in using them but unfortunately companies may not be interested in maintaining all the projects out there (this repo included).
I'd keep them for the sake of making things easier for new users.

The least tricky, in my opinion, is the 3rd category - maybe there should be a policy that other destinations need to have at least two maintainers?
If nobody is interested in resolving issues for 3rd-party proprietary destinations, then I think it is fair to just drop them.

I do understand why one might think about dropping 2 and 3 altogether. In that case, we (@logsenseapp) will just maintain our own forked repo.

@repeatedly
Member Author

The least tricky, in my opinion, is the 3rd category - maybe there should be a policy that other destinations need to have at least two maintainers?
If nobody is interested in resolving issues for 3rd-party proprietary destinations, then I think it is fair to just drop them.

Yeah, if active/trusted maintainers are available, we can support new images. Adding a new image is easy, but removing an image is hard. We want to avoid repeating the current problems with new images.

@elafontaine

elafontaine commented Dec 18, 2019

Actually, we started to use this repo for the 1st category.

Now, we would like to contribute an image that uses rdkafka instead of the ruby-kafka plugin, as the latter doesn't play well with Kerberos authentication to a Kafka cluster. The rdkafka image would need some extra packages for Kerberos and GSSAPI support. Would this still be a good candidate for the 1st category?

We would like to have this image maintained by the community so people can contribute bugfixes, but I wouldn't want to impose on the maintainer of this repo.
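
For concreteness, here is a rough, untested sketch of the kind of image we have in mind. The base tag, the Debian package names, and the gem list are assumptions on our side, not something this repository currently provides:

```dockerfile
# Untested sketch: extend the official fluentd image with rdkafka and the
# Kerberos/GSSAPI libraries that librdkafka needs for SASL/GSSAPI.
# The base tag and package names below are assumptions, not recommendations.
FROM fluent/fluentd:v1.16-debian-1

USER root

# build-essential and the -dev packages are only needed because the rdkafka
# gem compiles librdkafka from source; the SASL/krb5 packages are runtime deps.
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      build-essential libssl-dev libsasl2-dev \
      libsasl2-modules-gssapi-mit krb5-user \
 && gem install --no-document rdkafka fluent-plugin-kafka \
 && apt-get purge -y --auto-remove build-essential \
 && rm -rf /var/lib/apt/lists/*

# Drop back to the unprivileged user the official image runs as.
USER fluent
```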

@elsesiy
Contributor

elsesiy commented Mar 30, 2020

@repeatedly As someone who just submitted a new destination (category 2), I think having a single, well-maintained repo is worth a lot. If everyone creates their own fork just to add work in category 2 or 3, then you end up with a lot of friction around where images are hosted, where to file issues that pertain to the new destination as well as the upstream, etc.

In my case, I contributed #411 and will also maintain it, as it's something we're actively using and need to keep up-to-date. But we won't be able to fork this repo (due to security restrictions on the GH org) and don't have a publicly facing registry for other users to consume this contribution. Hence, I'm asking: what are the requirements for "new/trusted maintainers"?

@repeatedly
Member Author

what are the requirements for "new/trusted maintainers"?

Hmm... I think there are 2 approaches:

  • Add 2 or more people as maintainers
  • Add an organization-backed person as a maintainer

We want to avoid the no-maintainer case.

@elsesiy
Contributor

elsesiy commented Apr 1, 2020

@repeatedly Understood. In my case, it's a company-backed contribution. What do you suggest as a "proof" for the 2 approaches?

@spensireli

I agree with @elsesiy; currently AWS points to this project, as seen here and here. Dropping 2 & 3 seems quite disruptive.

I'd really like to make a plugin that supports CloudWatch & S3 in one deployment, which fluentd is fully capable of. This helps reduce cost: noisy debug logs that you only want in S3 can go there, while more pertinent logs go to CloudWatch.
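
As a rough, untested sketch, bundling both plugins into one image is only a few lines on top of the official image; the base tag below is just an example, and the gems are the publicly available fluent-plugin-cloudwatch-logs and fluent-plugin-s3. The actual split (debug logs to S3 only, everything else to CloudWatch) would then live in the fluentd match configuration rather than in the image:

```dockerfile
# Untested sketch: one image that can ship logs to both CloudWatch Logs and S3.
# The tag below is an example; pin whatever version you actually run.
FROM fluent/fluentd:v1.16-debian-1

USER root

# The aws-sdk based plugin gems are typically pure Ruby, so extra build
# packages are usually not needed here.
RUN gem install --no-document fluent-plugin-cloudwatch-logs fluent-plugin-s3

# Return to the unprivileged user used by the official image.
USER fluent
```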

@elsesiy
Contributor

elsesiy commented Aug 3, 2020

@repeatedly Can you please get back to me on what kind of proof is needed? You've seen me contributing to this repo frequently already, but I really need to finish up #411, since maintaining this on our end is causing unnecessary overhead. Thanks!

@andre2007

@repeatedly Same for me. I really miss the functionality from #411, and its absence causes a lot of overhead for the company I work for.

@repeatedly
Member Author

@elsesiy Yeah, you have made lots of contributions to this repository. Could you update the #411 patch to resolve the conflict?

@elsesiy
Contributor

elsesiy commented Aug 5, 2020

@repeatedly I've updated the PR but would still like to run a small test before we merge it. I'm running a modified version of the underlying plugin internally, so I just want to confirm again that it works the way it is now. Thanks!

@eigood

eigood commented Jun 18, 2022

Perhaps don't use in-process output plugins? Instead, each output plugin would be a separate process that runs as a sidecar. This could be patterned after the fluentd forward plugin.

@bwinter

bwinter commented Aug 19, 2022

I was thinking along similar lines to Adam; my initial thought when I found this repo was: can I add plugins to the basic fluentd Docker image?

I do like the community-maintained k8s configs, but I think it would be easier to maintain all of this if there were a simple-ish way to tell the vanilla fluentd image to "look in this folder" or "install this set" of plugins (or use a sidecar). That way the community would only have to maintain the DaemonSet (and provide some example configurations).

This might not work for some subset of users, but it would basically remove the need for such diverse maintenance.
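
To make that concrete, here is a rough, untested sketch of the "install this set of plugins" idea. FLUENTD_PLUGINS is a name made up for this example, not an option the official image actually provides:

```dockerfile
# Untested sketch: a thin wrapper over the vanilla fluentd image where the
# plugin list is just a build argument. FLUENTD_PLUGINS is a hypothetical
# name invented for this example.
FROM fluent/fluentd:v1.16-debian-1

USER root

# Space-separated list of plugin gems to bake into the image.
ARG FLUENTD_PLUGINS="fluent-plugin-elasticsearch fluent-plugin-s3"

# Some plugins need native build tools; drop build-essential if yours don't.
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential \
 && gem install --no-document ${FLUENTD_PLUGINS} \
 && apt-get purge -y --auto-remove build-essential \
 && rm -rf /var/lib/apt/lists/*

USER fluent
```

Each team could then build its own variant with docker build --build-arg FLUENTD_PLUGINS="..." . while the community only maintains the DaemonSet manifests and example configurations.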
