
Add Managed Jenkins Infrastructure for TVM RFC #49

Merged (6 commits) on Jan 20, 2022
New file: `rfcs/0049-managed-jenkins-infrastructure-for-tvm.md` (136 additions)
# Managed Jenkins Infrastructure for TVM

- Feature Name: `managed_jenkins_infra`
- Start Date: 2022-01-03
- RFC PR: [apache/tvm-rfcs#0049](https://github.com/apache/tvm-rfcs/pull/0049)
- GitHub Issue: [apache/tvm#0000](https://github.com/apache/tvm/issues/0000)
- Pre-RFC: https://discuss.tvm.apache.org/t/pre-rfc-managed-jenkins-infrastructure-for-tvm/11692

Authored-by: [Andrew Reusch](https://github.com/areusch) (@areusch)

Authored-by: [Noah Kontur](https://github.com/konturn) (@konturn)

See also: PoC of the Infrastructure-as-Code repos:
- Ansible and Jenkins config: https://github.com/octoml/tvm-ci
- Terraform: https://github.com/octoml/tvm-ci-terraform
- Packer: https://github.com/octoml/tvm-ci-packer

## Background and Motivations

The Apache TVM project relies on Jenkins for Continuous Integration services. At present, Jenkins is maintained by a small set of people, many of whom are core committers or serve on the PMC. As the project grows and the maintenance burden increases, adopting a more modern, Infrastructure-as-Code approach to maintaining the fleet of machines and the web services behind TVM CI would benefit both the project and the current Jenkins maintainers.

### Architectural Overview

![Architectural overview](./assets/0049/architectural-overview.png)

At a high level, the proposed architecture is similar to what currently exists for TVM CI; namely, a leader VM in AWS runs the Jenkins GUI and assigns pipeline jobs to agent VMs. As before, the Jenkins service on the leader VM runs via Docker, and the leader assigns jobs to the agents via SSH authentication. While there will certainly be some architectural differences between this setup and the old one (agents will likely be deployed in autoscaling groups, and they will likely share a build cache via EFS or S3), the primary differences involve how provisioning and configuration are done:

1. Packer will be used to provision baseline images for all the agent and head-node VMs. These images will be stored in the AWS AMI store and updated periodically as needed.
2. Terraform will be used to manage the infrastructural components of Jenkins CI, such as the head node, the agent autoscaling groups, and the load balancer handling SSL termination for the Jenkins leader VM. This way, infrastructural changes can be versioned and vetted in a publicly available repository.
3. Ansible will be used to configure the Jenkins head node, and will thus handle items like Jenkins job configuration (e.g. how often nightly builds run) and authentication methods. As with Terraform, the Ansible code will be made publicly available.

The Terraform and Ansible code will likely reside in different repositories, as they will likely use different deploy paradigms. The former will likely leverage [Atlantis pull request automation](https://www.runatlantis.io/), which essentially allows contributors to run and review Terraform plans by issuing comments on a PR. The Ansible playbooks used to configure Jenkins, on the other hand, will be run using GitHub Actions. If it is desirable to reduce complexity, we could use the same deploy tool for both.

### Theory of Operation

Under normal conditions, the system operates as follows:

1. The Jenkins master node is configured with a Pipeline Multibranch project. The project source tree is set to the official Apache TVM GitHub repository.
2. A GitHub [webhook](https://docs.github.com/en/developers/webhooks-and-events/webhooks/about-webhooks) notifies the Jenkins master when any branch or PR is updated in the Apache TVM repository (the payload signature check behind these webhooks is sketched just after this list).
3. The Jenkins master schedules a build for each notification it receives.
4. When it is time to start the build (the Jenkins [quiet period](https://www.jenkins.io/blog/2010/08/11/quiet-period-feature/) expires), Jenkins notifies GitHub and executes the `Jenkinsfile` to be used for the build.
   - NOTE: for PR builds, the `Jenkinsfile` used is always the one checked in to the target merge branch (i.e. `main` for all practical purposes here). This is a convention of the [Multibranch Pipeline plugin](https://github.com/jenkinsci/workflow-multibranch-plugin).
5. The TVM `Jenkinsfile` specifies a multi-stage build, each stage containing a set of parallel jobs which run on specific types of machines (machine types are identified from a `label` which is specified on [`node`](https://www.jenkins.io/doc/book/pipeline/syntax/#agent-parameters) lines in `Jenkinsfile`). These machine labels are also present in the TVM Jenkins master configuration. Currently, TVM CI supports these labels with these meanings:
- `CPU` - an x86_64 machine with no specific GPU requirement which can execute `ci-lint`, `ci-cpu`, `ci-wasm`, `ci-qemu`, and `ci-i386` containers
- `GPU` - an x86_64 machine with a specific GPU which can execute `ci-gpu` containers
- `GPUBUILD` - an x86_64 machine with CUDA and other GPU libraries present (such that `ci-gpu` can execute), but not necessarily with the GPU used in TVM CI unit tests. Used to build TVM and unit tests which can be run on `GPU` nodes.
- `ARM` - an AArch64 machine which can run `ci-arm` containers.
- `TensorCore` - an alias for `GPU` (historically this specified a machine with a more powerful GPU)
- `doc` - a machine which serves the last-built docs from `main`
6. Jenkins finds an **executor** machine for each job. Executors are machines in AWS or other public clouds (e.g. Azure, GCP) which run the Jenkins agent. Jenkins dispatches the job to the executor and awaits the results.
7. When a job in any stage fails, the build is aborted. Otherwise, the build proceeds through all stages.
8. When the build is completed, Jenkins notifies GitHub of the result, and the PR or `main` branch is updated.
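
Regarding step 2: GitHub signs each webhook delivery with a shared secret so the receiver can authenticate it before scheduling work. Jenkins's GitHub plugin performs this verification internally; the sketch below, with a hypothetical `WEBHOOK_SECRET`, only illustrates the check and is not part of the proposed configuration.

```python
import hashlib
import hmac

# Hypothetical shared secret configured on both the GitHub webhook and
# the receiver; not part of this RFC's actual configuration.
WEBHOOK_SECRET = b"replace-me"

def verify_signature(payload: bytes, signature_header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header against the payload.

    GitHub sends `sha256=<hex digest>`, where the digest is an
    HMAC-SHA256 of the raw request body keyed with the webhook secret.
    """
    expected = "sha256=" + hmac.new(
        WEBHOOK_SECRET, payload, hashlib.sha256
    ).hexdigest()
    # compare_digest avoids leaking timing information.
    return hmac.compare_digest(expected, signature_header)

# Example: a delivery whose signature was computed with the same secret.
body = b'{"action": "synchronize"}'
header = "sha256=" + hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
assert verify_signature(body, header)
```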

### Autoscaler

Jenkins executor nodes can be classified into two groups:

1. **Static nodes** are long-lived instances managed by Terraform. The Jenkins master is configured to connect to static nodes at startup and expects them to remain alive for the life of the Jenkins master process.
2. **Autoscaled nodes** are cloud instances that are created by the Jenkins master in response to PR workload. As the build queue grows longer, Jenkins can choose to create additional executors to alleviate developer wait time. Autoscaled nodes persist for an adjustable period of time after they become idle.

At launch time, we intend to use only static nodes. However, autoscaled nodes have been tested internally and we will begin to use those sometime in Q1 2022. Autoscaled nodes present a debugging challenge, as flaky tests or non-repeatable errors will need to be diagnosed before the autoscaled node is decommissioned automatically by the Jenkins master.
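
The grow/shrink behavior described above can be viewed as a simple reconciliation policy. The sketch below is illustrative only: Jenkins's cloud plugins implement provisioning internally, and the node naming, threshold, and cap shown here are invented for the example.

```python
import time
from dataclasses import dataclass
from typing import List, Optional

# Invented policy parameters for illustration; the real values are
# configured on the Jenkins master.
IDLE_TERMINATION_MINUTES = 30   # adjustable idle grace period
MAX_AUTOSCALED_NODES = 10       # cap on cloud spend

@dataclass
class Node:
    name: str
    idle_since: Optional[float] = None  # None while running a job

def reconcile(queue_length: int, nodes: List[Node]) -> None:
    """One pass of the grow/shrink policy described above."""
    # Grow: provision an executor per queued build, up to the cap.
    while queue_length > 0 and len(nodes) < MAX_AUTOSCALED_NODES:
        nodes.append(Node(name=f"autoscaled-{len(nodes)}"))
        queue_length -= 1
    # Shrink: decommission nodes idle longer than the grace period.
    cutoff = time.time() - IDLE_TERMINATION_MINUTES * 60
    nodes[:] = [n for n in nodes
                if n.idle_since is None or n.idle_since > cutoff]
```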

### Infrastructure-as-Code Repository

The production TVM CI instance will be managed using an open source Infrastructure-as-Code repository living in GitHub. All configuration except credentials will be stored in this repository. TVM Committers, plus additional delegates of those committers responsible for running the TVM Jenkins infrastructure, will be granted write access to this repository. Any changes to this repository will require review from those individuals with write access who are actively involved in the day-to-day operations of TVM CI.

## Maintenance Tasks

This section describes the various maintenance tasks that may need to occur with a Managed Jenkins fleet and roughly outlines the strategy and playbook for accomplishing them. The actual playbooks will be maintained and updated in the Infrastructure-as-Code repository which automates this system.

### Updating the Jenkins software

As mentioned in the Architectural Overview above, the Jenkins service on the head node runs via Docker, and the image is deployed via Ansible. Updating the Jenkins service is therefore as easy as updating the version tag on the Jenkins image and letting the Ansible pipeline deploy the new image onto the leader node. Since doing this involves restarting Jenkins, it causes running jobs to fail; to prevent disruption, worker nodes will be drained of jobs prior to deployment. This will all be done in a pre-defined maintenance window (e.g. Sunday night) so as to avoid long queue times during the draining process.
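
As a concrete illustration of the drain step, the sketch below marks agents offline through Jenkins's REST endpoints and waits for them to go idle before the deploy proceeds. It is a minimal sketch, assuming the standard `/computer/<name>/toggleOffline` and `/computer/<name>/api/json` endpoints; the host, node names, and polling interval are illustrative, and authentication plus the CSRF crumb Jenkins normally requires on POSTs are omitted for brevity.

```python
import json
import time
import urllib.parse
import urllib.request

# Illustrative values; the real host and node names would come from the
# IaC repository's inventory.
JENKINS = "https://ci.tlcpack.ai"
NODES = ["agent-cpu-0", "agent-gpu-0"]

def post(path, data):
    req = urllib.request.Request(
        JENKINS + path, data=urllib.parse.urlencode(data).encode())
    urllib.request.urlopen(req)

def get_json(path):
    with urllib.request.urlopen(JENKINS + path) as resp:
        return json.load(resp)

# 1. Stop each agent from accepting new jobs.
for node in NODES:
    post(f"/computer/{node}/toggleOffline",
         {"offlineMessage": "maintenance window"})

# 2. Wait for in-flight jobs to finish before the Ansible deploy runs.
while not all(get_json(f"/computer/{n}/api/json").get("idle") for n in NODES):
    time.sleep(60)
```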

### Changing the set of static nodes

As of now, technical limitations in the way the static nodes are deployed prevent configuration changes without recreating the nodes. Fortunately, these changes can be applied as rolling updates: the nodes can be drained and updated one at a time to avoid noticeable CI degradation. Concretely, the update process entails changing the set of static nodes in Terraform and then draining and applying the changes on each node, one by one.

### Making a configuration change to Jenkins

As with updating the Jenkins software, configuration changes can be made by deploying them through Ansible. As of now, most global configuration changes require a reboot of the Jenkins node, and so will likely be made during the same maintenance window mentioned above. The code will likely be retooled in the future so that these changes can be made without redeploying the Docker image.

### Adding a new job

Jenkins jobs are also managed through Ansible; updating job configuration or adding new jobs does not require a Jenkins restart.

## Launch Validation

### Validating the CI

This section describes how we have validated the new CI to ensure we aren't changing the test results by switching platforms. This validation process is vastly simplified by the fact that we have already been managing the executors using Terraform for 6 months. Here, validation means determining that the proposed Jenkins system produces test results similar enough to those of the system currently running in production.

There are many reasons why the two systems could differ:

1. Executor node misconfiguration
2. Jenkins master misconfiguration
3. Flaky TVM tests
4. Differences in the test environments (e.g. choosing a different target revision when merging a PR for test purposes)

We consider disagreements in test results caused by the first two reasons to be launch-blocking; the others do not block a launch of this system. TVM's CI testing is not always 100% reproducible due to test flakiness, and the benefit of launching this system outweighs the cost of achieving an exact match between a staging system and TVM's present production CI system.

We therefore adopt the following log-analysis strategy for validation (a minimal sketch of the pairing-and-diff step appears after this list):

1. A Python script scans the Jenkins workspaces of the production Jenkins instance and of a staging instance matching the configuration proposed here, and produces a list of build-number pairs, each pair associating two builds (one from production and one from staging) which operated on the same PR or TVM revision.
2. Each build pair is considered one-by-one. The Jenkins pipeline XML is examined to determine the build result and any failing stages in TVM CI. A report is produced detailing differences between the outcome of all `sh` statements in the `Jenkinsfile`.
3. The differing entries in the report are analyzed manually and categorized into one of the above categories. Entries which fall into a blocking category must be individually justified (e.g. transient config change, ongoing development of the staging instance) for the launch to proceed.
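
To make step 2 concrete, here is a minimal sketch of the pairing-and-diff logic in Python. The stage outcomes below are hand-written samples; in the real script they would be parsed from the pipeline XML in each instance's workspace, and the pairing key would be derived from the PR number or TVM revision.

```python
from typing import Dict, List

# Stage outcomes per build, keyed by PR or revision. In the real script
# these would be parsed out of the Jenkins pipeline XML; here they are
# hand-written samples.
StageResults = Dict[str, str]  # stage name -> "SUCCESS" / "FAILURE" / ...

production: Dict[str, StageResults] = {
    "PR-9977": {"Lint": "SUCCESS", "Build: GPU": "SUCCESS",
                "Test: GPU": "FAILURE"},
}
staging: Dict[str, StageResults] = {
    "PR-9977": {"Lint": "SUCCESS", "Build: GPU": "SUCCESS",
                "Test: GPU": "SUCCESS"},
}

def diff_builds(prod: StageResults, stage_env: StageResults) -> List[str]:
    """Report stages whose outcome differs between the two instances."""
    report = []
    for name in sorted(set(prod) | set(stage_env)):
        a = prod.get(name, "<missing>")
        b = stage_env.get(name, "<missing>")
        if a != b:
            report.append(f"{name}: production={a} staging={b}")
    return report

# Pair builds that ran against the same PR or TVM revision, then diff.
for key in sorted(set(production) & set(staging)):
    for line in diff_builds(production[key], staging[key]):
        # Each differing entry is then categorized manually (flaky test,
        # misconfiguration, environment difference, ...).
        print(f"[{key}] {line}")
```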

### Launch Process

TVM CI is less heavily used over weekends, so the launch process will take place on a weekend. When the launch commences, Jenkins will be configured to stop scanning PRs and we will wait for builds to complete. Once completed, the following steps will take place:

1. The production cluster will be created using the IaC pipeline
2. [`ci.tlcpack.ai`](http://ci.tlcpack.ai) will be updated to point to the new Jenkins master
3. We will smoke test several PRs to ensure the CI has basic functionality

We will not initially enable autoscaling. After a few weeks of successful operation, we will begin adding autoscaler nodes to the fleet.

## Ownership

We propose that the Infrastructure-as-Code repository for this system be open-sourced but that maintenance be delegated to a set of volunteers in the community. IaC operations will in practice be launched from GitHub Actions inside new repositories (e.g. `tlcpack/ci-*`). Cloud credentials will be provided to the IaC repository (stored privately, accessible to those community volunteers involved with CI operations) to enable maintenance access to the fleet of nodes.

**Review comment (Member):**

Part of the problem with current CI is that even as a TVM committer I can't make meaningful changes to the infrastructure. The infrastructure in itself is a part of the TVM project, I'd suggest we encourage people to contribute infrastructure-as-code similar to other contributions, by using the committer system, rather than an alternative one.

**Reply (Contributor Author):**

I agree with this. I think there are a few more obstacles before we can do this, and I'd like to solve them in parallel without blocking efforts to improve CI:

- there isn't a path defined right now for folks who contribute only to TVM CI infrastructure to become committers
- nothing is codified right now, so we can't use the traditional path
- there are folks who feel comfortable reviewing both Infra-as-Code and TVM, but my perception is that the number is small

What we're proposing is to handle this separately for now, while still granting TVM committers write access to the IaC repo (so the system is essentially still the committer system, just with extra folks who can write/deploy). This will also give us a good idea of the GitHub permissions needed for such a repo, so that we can then consider unifying the two systems with a proper proposal later on.

**Reply (Member):**

There's also some prior art of CI code living outside the main repo (see https://github.com/kubernetes/test-infra and https://github.com/pytorch/builder), AFAIK for similar reasons (easier to commit to and iterate on).

**Reply (Member):**

To clarify, I don't have any issue with the actual code being in a separate repository with different checks and such; that's totally normal in a lot of projects. The point of concern is taking the CI infrastructure, which every commit into TVM depends on, outside of the Apache TVM project. Taking your examples of kubernetes/test-infra and pytorch/builder, they both exist within the project itself, so the Kubernetes CI is under the kubernetes namespace and governed under those rules.

> - there isn't a path defined right now for folks who contribute only to TVM CI infrastructure to become committers
> - nothing is codified right now, so we can't use the traditional path
> - there are folks who feel comfortable reviewing both Infra-as-Code and TVM, but my perception is that the number is small

I'm not sure this is true; I believe that the TVM community has a reasonable number of active committers comfortable with reviewing both, and it's historically been difficult for them to contribute. Continuing to manage the infrastructure outside of the project seems to continue that practice. The path to becoming a committer does not seem to require comprehensive knowledge across TVM, as the code owners file demonstrates certain committers have a strong preference for a single area. I would support the PMC in guiding those who are interested in solely contributing to CI to becoming committers as much as those who would contribute to other areas, such as documentation.


## Alternatives

### GitHub Actions

We considered using GitHub Actions to drive TVM CI instead of Jenkins. While GitHub Actions has several attractive properties (among them, a modern configuration language and management of the "Jenkins master" equivalent), there are a couple of compelling reasons to build our own infrastructure, including the Jenkins master:

1. **Maintenance of dedicated executor fleet**. TVM's build is sensitive to the type of hardware used to execute the CI. Using GitHub Actions would relieve us only of the burden of running the Jenkins master; we would still need to run our own fleet of executors with the GitHub agent.
2. **Write access to CI configuration**. GitHub Actions is configured from within the `tvm` repository. While there are many benefits to this, operationally, write access to the `tvm` repository is granted through a slow process based on historical contribution to TVM. This process isn't particularly impedance-matched to the needs of a DevOps team, where access checks are routine but low-overhead and the group with write permissions should be controlled but easy to change. It's also likely that many of the maintenance tasks involved in running TVM executors require the involvement of people outside the current group of TVM Committers; indeed, no TVM committer is on the OctoML Infrastructure team today. This is not to say that these things could not be changed, but when this project was started, it was considered challenging to accommodate these requirements in the TVM committer system.

**Review comment (Member):**

This needs rephrasing with the context around choosing tlcpack and using TVM Committers?

**Reply (Contributor Author):**

I removed the last sentence, but I'd prefer to leave the rest in here, actually. I'm very much okay with proceeding using the existing committer promotion strategy; I don't believe we should make a process exception over a perceived fear that the process won't work. However, I think it is pretty plain that a multi-week PMC vote is an extremely heavyweight way to add folks to an oncall rotation. I don't consider this problem perfectly solved in the new system, so I don't think it's worth removing the critique that GitHub Actions locks us into that problem. I would like to revisit this problem in the context of real experience operating the IaC repo and motivate any process changes off of that rather than off of a gut feeling. This does mean we're proceeding with degraded oncall support, but I'm okay with that given it's a CI and the goal is to build a community-driven process.

3. **Private TVM CI instances**. While TVM CI will always remain open and public, there are multiple companies which both contribute to TVM and desire to run their own CI instance internally. Sticking to an open-source CI system avoids any vendor-specific pitfalls (e.g. anyone *could* run Jenkins internally and reference our configuration).

**Review comment (Member):**

Addressing the comment from @leandron: I'll also be doing work in the coming weeks to open-source the CI components of the tvm-ci, packer, and terraform repositories, at which point it should be fairly easy for others to make contributions to the CI and contribute machines.

In addition to these, it'd be nice if we had a single guide on how to deploy everything (starting from "I have a head node and some static machines with a freshly provisioned Ubuntu"), both for ourselves in the future and to enable this as more than just a possibility.

**Reply:**

That's a great idea; I'll definitely look into doing this in the coming weeks.

4. **Supporting non-cloud TVM Targets**. TVM CI does not currently test against targets unavailable in a public cloud. We have no plans to include such targets in any CI process which may contribute a binding vote on a PR; however, as TVM expands to target mobile and edge (e.g. iOS, Android, and microTVM-related targets), there are good reasons to consider allowing vendors to flag when a PR would break their specific build. Adding this functionality to GitHub Actions could further complicate the permissions issue contemplated above.

## Future Questions

1. With an open IaC repository, it should be possible to share sponsorship of the Jenkins executor nodes with others in the TVM community. The exact process for this, however, has yet to be defined.
2. How can we add support for testing hardware not available from cloud providers? What additional infrastructure might this require?