From 712f2161f3a695e9794515e18159ece637a6033c Mon Sep 17 00:00:00 2001 From: Tom Webber Date: Wed, 26 Jun 2024 14:24:02 +0100 Subject: [PATCH 1/9] Minor changes to concept docs - sentence flow, headings, capitalisation --- .../problem-and-solution.html.md.erb | 100 +++++++++--------- .../environments/security.html.md.erb | 2 +- .../environments/single-sign-on.html.md.erb | 9 +- 3 files changed, 57 insertions(+), 54 deletions(-) diff --git a/source/concepts/environments/problem-and-solution.html.md.erb b/source/concepts/environments/problem-and-solution.html.md.erb index 01da1b639..03ff36d64 100644 --- a/source/concepts/environments/problem-and-solution.html.md.erb +++ b/source/concepts/environments/problem-and-solution.html.md.erb @@ -30,43 +30,43 @@ For example, if we had an account for `fictional-business-unit-production` and s `example-b`, which is an application with a database that holds sensitive `dataset B` -### Issue 1: blast radius +### Issue 1: Blast radius -One of the risks of splitting AWS accounts at the granularity of business unit and their SDLC is the blast radius. By doing this, the blast radius has a much wider impact on the business and has a high probability of affecting other resources and applications that sit within an AWS account. +One of the risks of splitting AWS accounts at the granularity of business unit and their SDLC is the potential size of the blast radius - the radius in which damage could occur should something go wrong. At this level of granularity, the blast radius has a much wider impact on the business and has a high probability of affecting other resources and applications that sit within an AWS account. The blast radius can be affected by anything, such as security, or an Availability Zone going offline. -#### Security example +#### Example: Compromise of security -Access keys are leaked for an IAM user that is in `fictional-business-unit-production`. The IAM user has an `AdministratorAccess` policy attached. A malicious attacker with these keys now has the ability to access both sensitive datasets: `dataset A` and `dataset B`. +Access keys are leaked for an IAM user in `fictional-business-unit-production`. The IAM user has an `AdministratorAccess` policy attached. A malicious attacker with these keys now has the ability to access both sensitive datasets: `dataset A` and `dataset B`. -#### High-availability example +#### Example: Loss of availability `fictional-business-unit-production` had only ever configured their EC2 instances to run in one AZ. If this AZ goes down, all of their applications do too. -### Issue 2: team isolation +### Issue 2: Team isolation Similar to Issue 1, team isolation becomes extremely difficult to maintain if all applications in their SDLC stage are within one account. Imagine `team-a` works on `example-a` and `team-b` works on `example-b`, both of which fall under `fictional-business-unit`. -Everything `team-a` does can be seen by `team-b` even if you use a well-defined [attribute-based access control](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) (ABAC). Using ABAC requires resources to be tagged, and it has no effect on untagged resources. ABAC also doesn't work for all resources in AWS. +Everything `team-a` does can be seen by `team-b` even if you use a well-defined [attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html). 
Using ABAC requires resources to be tagged, and it has no effect on untagged resources. ABAC also doesn't work for all resources in AWS.

-If `team-a` forgets to tag something, or creates a resource that doesn't support tagging during creation, it's interactable by who has access to that AWS account.
+If `team-a` forgets to tag something, or creates a resource that doesn't support tagging during creation, anyone who has access to that AWS account will be able to interact with it.

-#### An extreme misclicking example
+#### Example: Human error (misclick)

-`team-a` notices an issue with one of their `production` EC2 instances. They try to terminate it, and notice that they've just terminated a `production` instance that belonged to `team-b`, which disrupts `team-b`'s workflow whilst they redeploy their application. It takes time and money to reprovision these resources.
+`team-a` notices an issue with one of their `production` EC2 instances. They try to terminate it, and notice that they've just terminated a `production` instance that belonged to `team-b`, which disrupts `team-b`'s workflow whilst they redeploy their application. It takes time and money to reprovision these resources, and may affect application users.

-### Issue 3: billing
+### Issue 3: Billing

-In a shared AWS account for a business unit, untaggable billable items such as Data Transfer become impossible to attribute to an application.
+In a shared AWS account for a business unit, untaggable billable items such as Data Transfer cannot easily be attributed to a particular application.

Billing granularity requires resource tagging, and it is useful to know how much an application costs to run to help prioritise the refactoring, replacement, or retirement of each application.

-Further to this, having more granular billing can expose expensive operations or misconfigured resources such as instances that require right-sizing.
+Further to this, having more granular billing can expose opportunities for optimisation: expensive operations or misconfigured resources such as instances that require right-sizing.

-#### Example
+#### Example: Unbalanced untaggable billing

`example-a` stores 100TB of data in an S3 bucket and replicates it to another region, costing $2,048.00.

@@ -74,43 +74,43 @@ Since Data Transfer isn't taggable, you can't easily trace it back to the originating S3 bucket.

-### Issue 4: cloud waste
+### Issue 4: Cloud waste

-When holding all applications in a SDLC account, it becomes difficult to retire application infrastructure, even if they're covered by a good tagging policy. With cloud infrastructure, everything should be considered ephemeral and everything should be rebuildable from code.
+When holding all applications in a SDLC account, it becomes difficult to retire application infrastructure, even if they're covered by a good tagging policy. When deploying to the cloud, all infrastructure should be considered ephemeral and everything should be rebuildable from code.

-If you destroy the wrong infrastructure when retiring an application, it becomes costly to recreate. If you miss the deprovisioning of some infrastructure, you create cloud waste which is also costly.
+If you destroy the wrong infrastructure when retiring an application, it becomes costly to recreate. If you accidentally miss some infrastructure during de-provisioning, you inadvertently create ongoing cloud waste which is also costly.

-Unless you've retired all applications within a shared AWS account, you can't close it, so there's always a chance there are some resources unaccounted for after an application has been refactored, replaced, or retired. +Until you've retired all applications within a shared AWS account, you can't close it, so there's always a chance there are some resources left unaccounted for after an application has been refactored, replaced, or retired. -#### Example of cloud waste +#### Example: Cloud waste from uncertainty `example-a` has been retired, and is no longer of use. There are untagged resources that are thought to be part of `example-a`, but `team-a` is unsure, so they just leave it. #### Example of destroying the wrong infrastructure -See [An extreme misclicking example](#an-extreme-misclicking-example). +See [Example: Human error (misclick)](#example-human-error-misclick). ## What we investigated -### Let teams decide +### Option A: Let teams decide -In the Modernisation Platform, we want to empower teams and give them the autonomy to do whatever they need to support their application and environments. One of the issues with delegating how their infrastructure is separated is that it: +In the Modernisation Platform, we want to empower teams and grant them the autonomy to do whatever they need to support their applications and environments. Some complexities that can arise from delegating how infrastructure is separated are: -- quickly becomes hard to track -- can become messy -- doesn't support the central alignment we're trying to achieve - people tend to go for the easiest option, which creates technical debt further down the line +- infrastructure quickly becomes hard to track +- it can become tangled and messy +- doesn't support the central alignment we're trying to achieve -Of that, there are some positives: +However, raw autonomy does give some positives: - teams truly have autonomy and empowerment to run their infrastructure in the way they believe is best - teams can understand their own configuration as they are the ones who built it -### One account, strong attribute-based access controls +### Option B: One account, strong attribute-based access controls Having one account is by far the easiest setup to have. You can use VPCs out of the box to enable cross-resource communication, you only have to set up IAM users once, and you can easily see all resources in one place. -Early on, we explored using strong [attribute-based access control](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) (ABAC) so teams can only view their own resources in an account, alongside strong IAM policies to stop cross-team resource interaction. Some of the issues with this is that: +Early on, we explored using strong [attribute-based access control](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) (ABAC) so teams can only view their own resources in an account, alongside strict IAM policies to stop cross-team resource interaction. 
Some of the issues with this are that:

- ABAC isn't supported on all AWS resources, and requires resources to be tagged (it doesn't work on untagged resources)
- finding the cost of an application becomes extremely hard to do, where billable items like Data Transfer aren't easily attributable directly to a source
- resources aren't truly isolated, as they're all in one account
- the blast radius is huge

-### Separate by application and SDLC
+### Option C: Accounts separated by business unit and SDLC stage

-Separating accounts by application is the more granular middle-ground of either having [one account](#one-account-strong-attribute-based-access-controls), or [separating everything](#separate-everything). It's very similar to [separating by business unit and SDLC](#separate-by-business-unit-and-sdlc) but goes a step further to provide granularity at an application level.
-
-It has some issues:
-
-- it's not simple to do
-- it's uncommon
-- you probably need an essence of centralisation, such as for networking
-- it has a higher risk of cloud waste, but is easier to track
-
-### Separate by business unit and SDLC
-
-This is how most multi-account architectures are set up and it is recommended by most cloud providers to go to _at least_ this level of granularity. Some issues with this are listed above in [What we're trying to fix](#what-we-39-re-trying-to-fix).
+This is how most multi-account architectures are set up and it is recommended by most cloud providers to go to _at least_ this level of granularity. Some issues with this are listed above in [What we're trying to fix](#what-were-trying-to-fix).

It's not incorrect or wrong to do things this way, but the Modernisation Platform would like to improve on the Ministry of Justice's current working practices. It does have its benefits:

-- there are a set number of SDLC stages a business unit will use
+- there are a set of SDLC stages a business unit will typically use
- it's easy to work with, since everything for each SDLC stage is in one place
- it logically separates business units from each other

-### Separate everything
+### Option D: Accounts separated by application and SDLC stage
+
+Separating accounts by application is a more granular middle-ground of either having [one account](#option-b-one-account-strong-attribute-based-access-controls), or [separating everything](#option-e-separate-everything). It's very similar to [separating by business unit and SDLC stage](#option-c-accounts-separated-by-business-unit-and-sdlc-stage) but goes a step further to provide granularity at an application level.
+
+It has some issues:
+
+- it's not simple to do
+- it's an uncommon pattern
+- users probably need some degree of centralisation, such as for networking
+- it has a higher risk of cloud waste, but is easier to track
+
+### Option E: Separate everything

-Theoretically and technically, you can separate _everything_ into different accounts. You could have something like this:
+Theoretically and technically, you can separate _everything_ into different accounts; i.e. application, application component/layer, and SDLC stage.
You could have something like this for the `example A` application: ``` example-a-api-production @@ -157,7 +157,7 @@ example-a-frontend-development example-a-landing-zone ``` -The issue with this is: +Some issues with this are: - it becomes incredibly complex to maintain - it becomes difficult to track @@ -170,13 +170,13 @@ The issue with this is: The benefits of this are: - it becomes easy to track costs per application and layer (frontend, backend) -- it has a tiny blast radius +- each account has a tiny blast radius ## What we decided ### Overview -We decided to use separate AWS accounts per application as a middle-ground between [separating everything](#separate-everything) and using [one account with strong attribute-based access controls](#one-account-strong-attribute-based-access-controls). +We decided to use [separate AWS accounts per application](#option-d-accounts-separated-by-application-and-sdlc-stage) as a middle-ground between [separating everything](#option-e-separate-everything) and using [one account with strong attribute-based access controls](#option-b-one-account-strong-attribute-based-access-controls). Whilst there is a trade-off to more complexity, we feel it's outweighed by the benefits of doing this. @@ -189,9 +189,11 @@ Some of the biggest benefits are: ### Logically separate applications -The Modernisation Platform is going to host a number of applications, which are built in different ways. One of our goals is to modernise these applications, and that can be anything from moving away from on-premise hosted databases into managed services such as AWS [Relational Database Service](https://aws.amazon.com/rds/); or moving away from bastion hosts to [agent-based instance management](https://aws.amazon.com/about-aws/whats-new/2018/09/introducing-aws-systems-manager-session-manager/). +The Modernisation Platform hosts a number of applications, each built in different ways. One of our goals is to facilitate modernisation of these applications through the onboarding process. That could mean: +- moving away from on-premise hosted databases into managed services such as AWS [Relational Database Service (RDS)](https://aws.amazon.com/rds/) +- moving away from bastion hosts to [agent-based instance management](https://aws.amazon.com/about-aws/whats-new/2018/09/introducing-aws-systems-manager-session-manager/) -Whilst that is our goal, we're also aware it won't happen straight away, or can't happen for some legacy applications. +Whilst app modernisation is our goal, we're also aware that it won't happen straight away, and may not be possible for some legacy applications. By separating applications out into their own AWS accounts we can: diff --git a/source/concepts/environments/security.html.md.erb b/source/concepts/environments/security.html.md.erb index d2388057f..7e47a2021 100644 --- a/source/concepts/environments/security.html.md.erb +++ b/source/concepts/environments/security.html.md.erb @@ -39,7 +39,7 @@ We can see an overview of compliance across the Modernisation Platform and we wi ## Regional restrictions -We restrict the regional usage of accounts that sit within the Modernisation Platform. We use a [Service Control Policy](https://github.com/ministryofjustice/aws-root-account/blob/main/terraform/organizations-service-control-policies.tf#L40) to do this. +We restrict the regional usage of accounts that sit within the Modernisation Platform. 
We use a [Service Control Policy (SCP)](https://github.com/ministryofjustice/aws-root-account/blob/main/terraform/organizations-service-control-policies.tf#L40) to do this.

In accordance with the [Security Guidance](https://ministryofjustice.github.io/security-guidance/baseline-aws-accounts/#regions), you should only use EU AWS regions.

diff --git a/source/concepts/environments/single-sign-on.html.md.erb b/source/concepts/environments/single-sign-on.html.md.erb
index 67f118b8e..e82235484 100644
--- a/source/concepts/environments/single-sign-on.html.md.erb
+++ b/source/concepts/environments/single-sign-on.html.md.erb
@@ -18,7 +18,7 @@ review_in: 6 months

## Introduction

-We don't want to have to do identity management (joiners, movers, leavers) in the Modernisation Platform. To avoid this we use AWS single sign on (SSO), with AuthO (authentication and authorization as a service) and our GitHub organisation teams to manage access to environments.
+We don't want to have to do identity management (joiners, movers, leavers) in the Modernisation Platform. To avoid this we use AWS single sign on (SSO), with Auth0 (authentication and authorization as a service) and our GitHub Organisation Teams to manage access to environments.

## Diagram

@@ -30,14 +30,15 @@ We don't want to have to do identity management (joiners, movers, leavers) in th

- Users access the SSO login portal via the link [https://moj.awsapps.com/start](https://moj.awsapps.com/start). This URL is hosted via the AWS SSO component.
- AWS SSO is configured to use Auth0 as an application and sets the associated Application ACS URL. Auth0 will be the primary authentication endpoint providing the SSO with GitHub via SAML 2.0.
- - AWS SSO redirects users to an Auth0 SSO URL login page. Auth0 is configured to used GitHub as its IDP (Identity Provider) and prompts users to authenticate using their GitHub credentials. If authentication is successful (or if the user is already authenticated on Auth0, this step will be skipped) Auth0 sends an encoded SAML response to the browser.
+ - The SAML Assertion Consumer Service (ACS) URL is [used to identify where the service provider accepts SAML assertions](https://mojoauth.com/glossary/saml-assertion-consumer-service/#:~:text=A%20SAML%20Assertion%20Consumer%20Service,the%20identity%20provider%20(IdP).).
+ - AWS SSO redirects users to an Auth0 SSO URL login page. Auth0 is configured to use GitHub as its IdP (Identity Provider) and prompts users to authenticate using their GitHub credentials. If authentication is successful (or if the user is already authenticated on Auth0, this step will be skipped) Auth0 sends an encoded SAML response to the browser.
- The browser sends the SAML response (SAML Assertion) to AWS SSO (service provider for verification). Once verified, the user is able to login to the AWS SSO portal.

### 2. System for Cross-domain Identity Management (SCIM) SSO

- AWS SSO provides support for SCIM v2.0 standard. SCIM keeps your AWS SSO identities in sync with identities from your IdP (GitHub).
- - A [scheduled Lambda job (index.js)](https://github.com/ministryofjustice/moj-terraform-scim-github) is used for SCIM provisioning from GitHub. A nodejs script uses the the GitHub API package Octokit to sync GitHub Groups and users to AWS SSO. It does this by calling the AWS SSO SCIM endpoint.
- - SCIM will populate AWS SSO Groups and users with the GitHub data.
+ - A [scheduled Lambda job (index.js)](https://github.com/ministryofjustice/moj-terraform-scim-github) is used for SCIM provisioning from GitHub. A Node.js script uses the GitHub API package Octokit to sync GitHub Groups and Users to AWS SSO. It does this by calling the AWS SSO SCIM endpoint.
+ - SCIM will populate AWS SSO Groups and Users with the GitHub data.

### 3. SSO Permission Sets

From 4d61e16d65d874a6b1a9a107a0a8c411d5ff1d99 Mon Sep 17 00:00:00 2001 From: Tom Webber Date: Wed, 26 Jun 2024 14:52:26 +0100 Subject: [PATCH 2/9] add user guide link checker

---
 .github/workflows/gh-pages-test-links.yml | 28 +++++++++++++++++++++++
 1 file changed, 28 insertions(+)
 create mode 100644 .github/workflows/gh-pages-test-links.yml

diff --git a/.github/workflows/gh-pages-test-links.yml b/.github/workflows/gh-pages-test-links.yml
new file mode 100644
index 000000000..c7419f764
--- /dev/null
+++ b/.github/workflows/gh-pages-test-links.yml
@@ -0,0 +1,28 @@
+---
+  name: check user guide links
+
+  on:
+    pull_request:
+      paths:
+        - "source/**"
+
+  permissions: {}
+  jobs:
+    check-links:
+      name: Test
+      runs-on: ubuntu-latest
+      permissions:
+        contents: read
+      steps:
+        - name: Checkout
+          id: checkout
+          uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
+
+        - name: Lychee
+          id: lychee
+          uses: lycheeverse/lychee-action@2b973e86fc7b1f6b36a93795fe2c9c6ae1118621 # v1.10.0
+          env:
+            GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          with:
+            args: --verbose --no-progress './**/*.md' './**/*.html' './**/*.erb' --exclude-loopback --accept 403,200,429
+            fail: true
\ No newline at end of file

From 8a31b22fc1f09f132ad2307c6903a1aab309a2a8 Mon Sep 17 00:00:00 2001 From: Tom Webber Date: Fri, 28 Jun 2024 10:53:50 +0100 Subject: [PATCH 3/9] minor tweaks to language and links for docs in the concepts section

---
 .../environments/auto-nuke.html.md.erb | 26 +++++++++---------
 .../instance-scheduling.html.md.erb | 10 +++----
 .../certificate-services.html.md.erb | 5 ++--
 source/concepts/networking/dns.html.md.erb | 4 +--
 .../instance-access-and-bastions.html.md.erb | 6 ++---
 .../networking-approach.html.md.erb | 26 +++++++++---------
 .../networking/subnet-allocation.html.md.erb | 5 ++--
 .../concepts/sdlc/core-workflow.html.md.erb | 10 +++----
 source/concepts/sdlc/repositories.html.md.erb | 6 ++---
 .../concepts/sdlc/user-workflow.html.md.erb | 27 ++++++++++---------
 source/index.html.md.erb | 4 +--
 11 files changed, 64 insertions(+), 65 deletions(-)

diff --git a/source/concepts/environments/auto-nuke.html.md.erb b/source/concepts/environments/auto-nuke.html.md.erb
index c008af2a1..b1ee86b9a 100644
--- a/source/concepts/environments/auto-nuke.html.md.erb
+++ b/source/concepts/environments/auto-nuke.html.md.erb
@@ -18,31 +18,31 @@ review_in: 6 months

## Feature description

-This feature automatically nukes and optionally recreates development environments on weekly basis. This is useful for environments with the sandbox permission, which allow users provisioning resources directly through the AWS web console as opposite to using terraform. In such cases, the auto-nuke will make sure the resources created manually will be cleared on weekly basis. If requested, the resources defined in terraform will then be recreated.
+This feature automatically destroys all resources in development environments on a weekly basis, and provides a utility to recreate resources in these environments.
This is useful for environments with the sandbox permission, which allow users to provision resources directly through the AWS web console alongside infrastructure as code (IaC). In such cases, the auto-nuke will ensure the manually created resources are regularly removed. If requested, resources defined in terraform can then be recreated.

Every Sunday:

-- At 10.00pm the awsnuke.yml workflow is triggered. This workflow nukes all the configured development environments using the AWS Nuke tool (https://github.com/rebuy-de/aws-nuke).
-- At 12.00 noon the nuke-redeploy.yml workflow is triggered. If requested, this workflow redeploys the nuked environment using terraform apply.
+- At 22:00 the [awsnuke.yml workflow](https://github.com/ministryofjustice/modernisation-platform-environments/blob/main/.github/workflows/awsnuke.yml) is triggered. This workflow nukes all the configured development environments using the [AWS Nuke tool](https://github.com/rebuy-de/aws-nuke).
+- At 12:00 the [nuke-redeploy.yml workflow](https://github.com/ministryofjustice/modernisation-platform-environments/blob/main/.github/workflows/nuke-redeploy.yml) is triggered. If requested, this workflow redeploys IaC into the nuked environment using `terraform apply`.

-A sketch of the algorithm is as follows:
+An outline of the 'nuke' algorithm is as follows:

-- For every account in a dynamically generated list of all sandbox accounts
-- Assume the role MemberInfrastructureAccess under the account ID
-- Nuke the resources under the account ID
-- (Optionally) Perform terraform apply in order to recreate all resources from terraform
+- For every account in a dynamically generated list of all sandbox accounts:
+  - Assume the [`MemberInfrastructureAccess` role](https://github.com/ministryofjustice/modernisation-platform/blob/ab3eb5a6a8e6253afc9db794362034ba4ae1cd94/terraform/environments/bootstrap/member-bootstrap/iam.tf#L266) under the account ID
+  - Nuke the resources under the account ID
+  - (Optionally) Perform terraform apply in order to recreate all resources from terraform

## Configuration

Auto-nuke consumes the following dynamically generated Github secrets stored in the Modernisation Platorm Environments repository:

-- `MODERNISATION_PLATFORM_AUTONUKE_BLOCKLIST`: Account aliases to always exclude from auto-nuke. This takes precedence over all other configuration options. Due to the destructive nature of the tool, AWS-Nuke (https://github.com/rebuy-de/aws-nuke) requires at least one Account ID in the configured blocklist. Our blocklist contains all production. preproduction and core accounts.
+- `MODERNISATION_PLATFORM_AUTONUKE_BLOCKLIST`: Account aliases to always exclude from auto-nuke. This takes precedence over all other configuration options. Due to the destructive nature of the tool, [AWS-Nuke](https://github.com/rebuy-de/aws-nuke) requires at least one account ID in the configured blocklist. Our blocklist contains all production, preproduction, and core accounts.
- `MODERNISATION_PLATFORM_AUTONUKE`: Account aliases of sandbox accounts to be auto-nuked on weekly basis.
- `MODERNISATION_PLATFORM_AUTONUKE_REBUILD`: Accounts to be rebuilt after auto-nuke runs. This secret is consumed by the `nuke-redeploy.yml` workflow.

-The `nuke-config-template.txt` is populated with account and blocklist information during the runtime of the `awsnuke.yml` workflow, to produce a valid aws-nuke configuration file.
+The [`nuke-config-template.txt`](https://github.com/ministryofjustice/modernisation-platform-environments/blob/main/scripts/nuke-config-template.txt) is populated with account and blocklist information during the runtime of the `awsnuke.yml` workflow, to produce a valid aws-nuke configuration file. ### When new sandbox development environment is onboarded @@ -67,8 +67,8 @@ Eg: Valid values are: -`include` = nukes but doesn’t rebuild (default option if nothing added) -`exclude` = doesn’t nuke or rebuild -`rebuild` = nukes and rebuilds +- `include` = nukes but doesn’t rebuild (default option if nothing added) +- `exclude` = doesn’t nuke or rebuild +- `rebuild` = nukes and rebuilds Please contact us in [#ask-modernisation-platform](https://mojdt.slack.com/archives/C01A7QK5VM1) channel for details. diff --git a/source/concepts/environments/instance-scheduling.html.md.erb b/source/concepts/environments/instance-scheduling.html.md.erb index 1bf23eccd..10d75c6aa 100644 --- a/source/concepts/environments/instance-scheduling.html.md.erb +++ b/source/concepts/environments/instance-scheduling.html.md.erb @@ -18,9 +18,9 @@ review_in: 6 months ## Feature description -This feature automatically stops non-production EC2 and RDS instances overnight, in order to save on AWS costs and reduce environmental impact. Stopped instances don't incur charges, but Elastic IP addresses or EBS volumes attached to those instances do. +This feature automatically stops non-production EC2 and RDS instances overnight and over each weekend, in order to save on AWS costs and reduce environmental impact. Stopped instances don't incur charges, but Elastic IP addresses or EBS volumes attached to those instances do. -The instances will be automatically stopped every weekday at 9pm night and started at 6am in the morning. By default, this includes every EC2 and RDS instance in every non-production environment (development, test, pre-production) without requiring any configuration from the end user. Users can customise the default behaviour by attaching the `instance-scheduling` tag to EC2 and RDS instances with one of the following values: +The instances will be automatically [stopped each weekday at 21:00](https://github.com/ministryofjustice/modernisation-platform/blob/19a7e48b366cfbb9d24c30f4620b12df886baa8e/terraform/environments/core-shared-services/instance-scheduler-lambda-function.tf#L35) and [started at 06:00 each weekday](https://github.com/ministryofjustice/modernisation-platform/blob/19a7e48b366cfbb9d24c30f4620b12df886baa8e/terraform/environments/core-shared-services/instance-scheduler-lambda-function.tf#L61) morning, which includes shut down on Friday night and startup on Monday morning. By default, this includes every EC2 and RDS instance in every non-production environment (development, test, preproduction) without requiring any configuration from the end user. Users can customise the default behaviour by attaching the `instance-scheduling` tag to EC2 and RDS instances with one of the following values: - `default` - Automatically stop the instance overnight and start it in the morning. Absence of the `instance-scheduling` tag will have the same effect. 
- `skip-scheduling` - Skip auto scheduling for the instance @@ -44,15 +44,15 @@ Ordering instances and automatically stopping them on public holidays is not sup For those teams that require the shutdown & startup of ec2 & rds resources in a specific order or at different times, the option exists to make use of github workflows & cron schedules to stop & start services. -- These workflows can be run from the application source github via the use of oidc for authenticaiton to the Modernisation Platform - see https://user-guide.modernisation-platform.service.justice.gov.uk/user-guide/deploying-your-application.html#deploying-your-application. It is recommended to hold the AWS account number for the member account as a github secret, especially if the repo is public. +- These workflows can be run from the application source github [via the use of oidc for authentication to the Modernisation Platform](https://user-guide.modernisation-platform.service.justice.gov.uk/user-guide/deploying-your-application.html#deploying-your-application). It is recommended to hold the AWS account number for the member account as a github secret, especially if the repo is public. -- An example of how to use a github workflow to meet this requirement can be found here - https://github.com/ministryofjustice/modernisation-platform-configuration-management/blob/main/.github/workflows/flexible-instance-stop-start.yml. Note that the workflow uses a separate script to run the AWS CLI commands for shutdown & startup. These can be easily reused & customised to meet specific needs. +- An example of how to use a github workflow to meet this requirement can be [found here](https://github.com/ministryofjustice/modernisation-platform-configuration-management/blob/main/.github/workflows/flexible-instance-stop-start.yml). Note that the workflow uses [a separate script](https://github.com/ministryofjustice/modernisation-platform-configuration-management/blob/main/scripts/flexistopstart.sh) to run the AWS CLI commands for shutdown & startup. These can be easily reused & customised to meet specific needs. - EC2 or RDS resources that are stopped or started in this manner must have the `skip-scheduling` tag added as described above. - Note that there are some restrictions that come with using github schedules - most importantly that github themselves do not guarantee execution of the action at the specified time. Actions can be delayed at busy times or even dropped entirely so it is recommended to avoid schedules running on-the-hour or half-hour. -Further information regarding github schedule events can be found here - https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#schedule +Further information regarding github schedule events can be [found here](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#schedule). ## References diff --git a/source/concepts/networking/certificate-services.html.md.erb b/source/concepts/networking/certificate-services.html.md.erb index 5cf83e2da..d45e19778 100644 --- a/source/concepts/networking/certificate-services.html.md.erb +++ b/source/concepts/networking/certificate-services.html.md.erb @@ -18,14 +18,13 @@ review_in: 6 months ## Public Certificates -There are two main ways to use public certificates for DNS on the Modernisation Platform; ACM (Amazon Certificate Manager) public certificates, and Gandi.net certificates imported into ACM. 
-Please see [How to configure DNS for public services](../../user-guide/how-to-configure-dns.html) for more information. +There are two main ways to use public certificates for DNS on the Modernisation Platform; [ACM (Amazon Certificate Manager)](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) public certificates, and Gandi.net certificates imported into ACM. Please see [How to configure DNS for public services](../../user-guide/how-to-configure-dns.html) for more information. ## Private Certificates We provide a [Private root Certificate Authority (CA)](https://docs.aws.amazon.com/acm-pca/latest/userguide/PcaWelcome.html) in the [network services account and VPC](networking-approach.html#other-vpcs), along with subordinate production and non production CAs. -The subordinate CA's are then shared to the application environments via a [RAM](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) share (either production or non production depending on the environment). +The subordinate CA's are then shared to the application environments via a [Resource Access Manager (RAM)](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) share (either production or non-production depending on the environment). Certificates can then be created using the Private subordinate CA, the certificates remain local to the application environment. diff --git a/source/concepts/networking/dns.html.md.erb b/source/concepts/networking/dns.html.md.erb index 36ef2d4f0..514dacafb 100644 --- a/source/concepts/networking/dns.html.md.erb +++ b/source/concepts/networking/dns.html.md.erb @@ -16,9 +16,9 @@ review_in: 6 months # <%= current_page.data.title %> -DNS is centralised in the networking services account. +DNS is centralised in the core networking services account. -We use AWS Route53 to provide and manage DNS records. +We use [AWS Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) to provide and manage DNS records. There are public and private [hosted zones](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-working-with.html) for the Modernisation Platform. diff --git a/source/concepts/networking/instance-access-and-bastions.html.md.erb b/source/concepts/networking/instance-access-and-bastions.html.md.erb index e68eb8355..6d2335a2e 100644 --- a/source/concepts/networking/instance-access-and-bastions.html.md.erb +++ b/source/concepts/networking/instance-access-and-bastions.html.md.erb @@ -20,7 +20,7 @@ review_in: 6 months For most EC2 running modern Linux operating systems, [SSH](https://en.wikipedia.org/wiki/Secure_Shell_Protocol) access will be via [AWS Systems Manager Session Manager (SSM)](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html). -This provides secure and auditable access to EC2s without the need to expose ports or use a bastion. This can also be used for port forwarding to access private web consoles, [RDS databases](https://aws.amazon.com/rds/) or Windows [RDP](https://en.wikipedia.org/wiki/Remote_Desktop_Protocol). +This provides secure and auditable access to EC2s without the need to expose ports or use a bastion. This can also be used for port forwarding to access private web consoles, [RDS databases](https://aws.amazon.com/rds/) or [Windows Remote Desktop (RDP)](https://en.wikipedia.org/wiki/Remote_Desktop_Protocol). 
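+
+As a purely illustrative sketch of the port forwarding mentioned above (the instance ID, private IP address and port numbers are placeholders, and the [Session Manager plugin](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html) must be installed locally), a forwarding session might look like:
+
+```bash
+# Forward local port 53389 to RDP (3389) on a private host, tunnelled through an SSM-managed instance.
+aws ssm start-session \
+  --target i-0123456789abcdef0 \
+  --document-name AWS-StartPortForwardingSessionToRemoteHost \
+  --parameters '{"host":["10.0.1.25"],"portNumber":["3389"],"localPortNumber":["53389"]}'
+```
+
+The same pattern can be used to reach an RDS endpoint or a private web console by changing the host and port values.
+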
## Bastions @@ -29,9 +29,9 @@ For instances running older versions of Linux where the [SSM Agent](https://docs The bastion will be preconfigured with the relevant security and network connectivity required. You can then securely connect to this bastion host via Systems Manager, and then on to your instance. -If you find the bastion is down (between 20:00 and 05:00) then you may need to restart it. The best way to do this is to amend the Auto Scaling Group called bastion_linux_daily to set the values to 1 where they are 0. This will build a bastion EC2 server. +If you find the bastion is down (between 20:00 and 05:00) then you may need to restart it. The best way to do this is to amend the Auto Scaling Group called `bastion_linux_daily` to set the values to `1` where they are `0`. This will build a bastion EC2 server. -There will only be 1 listed in most cases (bastion_linux_daily) so select that, click on edit in the top box and set all 3 values (desired capacity, minimum capacity and maximum capacity) to 1 and select Update. This will cause AWS to build a new instance and one will be available in around 5 minutes. +There will only be 1 listed in most cases (`bastion_linux_daily`) so select that, click on edit in the top box and set all 3 values (desired capacity, minimum capacity and maximum capacity) to `1` and select Update. This will cause AWS to build a new instance and one will be available in around 5 minutes. ## How to connect For information on how to connect to instances or Bastions see [Accessing EC2s](../../user-guide/accessing-ec2s.html). diff --git a/source/concepts/networking/networking-approach.html.md.erb b/source/concepts/networking/networking-approach.html.md.erb index 4ebf221af..f8428e895 100644 --- a/source/concepts/networking/networking-approach.html.md.erb +++ b/source/concepts/networking/networking-approach.html.md.erb @@ -18,18 +18,18 @@ review_in: 6 months ### What we're trying to fix -Networking is hard, setting up a landing zone, a VPC, subnets, endpoints, peering, NACLS, gateways, etc, is both complex and time consuming. +Networking is hard. Setting up a landing zone, VPCs, subnets, endpoints, peering, NACLS, gateways, etc, is both complex and time consuming. We want to take care of this so that users can focus on what is important to them - their application. ### What we investigated -We looked into having a [VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) per account. Although this provides good network isolation, because we have one account per application environment, it would mean a lot of VPC peering would be required to connect one application to another if needed. +We looked into having a [VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) per account. While this provides good network isolation, because we have one account per application environment, it would mean a lot of VPC peering would be required to connect applications together if needed. ### What we decided #### VPCs -Instead of creating a VPC in each account, we have created separate environment accounts, and have one VPC per business unit, per environment. +Instead of creating a VPC in each account, we have organised the VPCs into central 'environment' accounts (one account per [SDLC stage](https://en.wikipedia.org/wiki/Systems_development_life_cycle#Environments)). Within the environment accounts there is one VPC per business unit, per environment. 
So we have the following environment accounts: @@ -38,19 +38,19 @@ So we have the following environment accounts: * test * development -Within these environment accounts there is a VPC per business unit. For example one VPC for the LAA, and one for HMPPS. +Within these environment accounts there is one VPC per business unit. For example within the production account there is one VPC for LAA, and one for HMPPS. -These VPCs are then shared via [RAM](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) to the application accounts. +These VPCs are then shared via [Resource Access Manager (RAM)](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) to the application accounts. -For example the production LAA VPC may be shared to multiple LAA application accounts, this enables LAA applications to communicate with each other without the need for [VPC peering](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-peering.html) as they are using the same VPC. +For example the production LAA VPC may be shared to multiple LAA application accounts. This enables LAA applications to communicate with each other without the need for [VPC peering](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-peering.html) as they are using the same VPC. -Connection to other VPCs, for example if an LAA application needs to communicate with a HMPPS application, is done through the [Transit Gateway](#what-we-decided-transit-gateway), [NACLs](#nacls) can be opened to allow access, and [security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) from other accounts can be referenced. +Connection to other VPCs - i.e. if an LAA application needs to communicate with a HMPPS application - is done through the [Transit Gateway](#what-we-decided-transit-gateway). [NACLs](#nacls) can be opened to allow access, and [security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) from other accounts can be referenced. #### Subnets ##### General -Each VPC will have a general subnet set - a set of subnets that can be used for most of the application accounts. These are shared to the application accounts using RAM. +Each VPC will have a general subnet set - subnets that can be used for most of the application accounts. These are shared to the application accounts using RAM. For most business areas, the general subset set will be enough, but we can always create more subnet sets if needed. @@ -60,7 +60,7 @@ The subnet sets contain three types of subnet: * Private (for private resources such as application servers) * Data (for data resources such as databases) -Each of the different subnet types are spread across all three London region [availability zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-availability-zones) (eu-west-2a, eu-west-2b and eu-west2c), making a total of nine subnets. +Each of the different subnet types are spread across all three London region [availability zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-availability-zones) (`eu-west-2a`, `eu-west-2b`, and `eu-west-2c`), making a total of nine subnets. For more information on the subnet ranges, see the [subnet allocation](subnet-allocation.html) page. 
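+
+As an illustration only (the VPC ID below is a placeholder for the business unit VPC shared into your account), the shared subnets and their CIDR ranges can be listed from a member account with the AWS CLI:
+
+```bash
+# Show each shared subnet with its availability zone, CIDR block and Name tag.
+aws ec2 describe-subnets \
+  --filters "Name=vpc-id,Values=vpc-0123456789abcdef0" \
+  --query "Subnets[].{AZ:AvailabilityZone,CIDR:CidrBlock,Name:Tags[?Key=='Name']|[0].Value}" \
+  --output table
+```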

@@ -75,13 +75,13 @@ Protected subnets are created per account and used for [VPC endpoints](https://d

##### Transit Gateway

-The [transit gateway](#what-we-decided-transit-gateway) subnets are created per account to allow access to other accounts and services via the transit gateway.
+The [transit gateway](#what-we-decided-transit-gateway) subnets are created per-account to allow access to other accounts and services via the transit gateway.

#### NACLs

-Access to the subnet sets is controlled with [Network ACLs (NACLs)](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html).
+Access to the subnet sets is controlled with [Network Access Control Lists (NACLs)](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html).

-NACLs allow traffic in and out of the Modernisation Platform (North/South), but prevent traffic from traversion business unit VPCs (East/West).
+NACLs allow traffic in and out of the Modernisation Platform (North/South), but prevent traffic from traversing business unit VPCs (East/West).

Traffic within a VPC is not restricted.

@@ -93,7 +93,7 @@ AWS [Network Firewalls](network-firewall.html) provide additional controls to tr

There are other VPCs in the Modernisation Platform core infrastructure accounts, these are connected to the [transit gateway](#what-we-decided-transit-gateway) as well.

-Examples of some of these that you will connect to are:
+Some examples that you will connect to:

* network-services (where the transit gateway lives)
* shared-services

diff --git a/source/concepts/networking/subnet-allocation.html.md.erb b/source/concepts/networking/subnet-allocation.html.md.erb
index c4426710f..5a1b98741 100644
--- a/source/concepts/networking/subnet-allocation.html.md.erb
+++ b/source/concepts/networking/subnet-allocation.html.md.erb
@@ -19,7 +19,7 @@ review_in: 6 months

When we set up a VPC for an environment (AWS account) we provide subnets for transit gateway connection, protected reources such as VPC endpoints, along with a general subnet set. See the [subnets](./networking-approach.html#subnets) section for more information.

-Here we go into a bit more detail on how the [CIDR ranges](https://github.com/ministryofjustice/modernisation-platform/blob/main/cidr-allocation.md) have been created and how they are allocated.
+Here we go into a bit more detail on how the [CIDR ranges](https://aws.amazon.com/what-is/cidr/) have been created and [how they are allocated](https://github.com/ministryofjustice/modernisation-platform/blob/main/cidr-allocation.md).

## Transit Gateway and Protected subnets allocation

@@ -31,8 +31,7 @@ Here we go into a bit more detail on how the [CIDR ranges](https://github.com/mi

## How have we decided the ranges?

-Research was done on the existing MoJ network infrastructure to ensure that we didn't clash with any existing ranges.
-The modernisation platform [CIDR ranges](https://github.com/ministryofjustice/modernisation-platform/blob/main/cidr-allocation.md) are documented here. By predefining IP ranges it makes it easier for us to onboard new applications.
+We analysed the existing MoJ network infrastructure to ensure that we didn't clash with any existing ranges. The modernisation platform [CIDR ranges](https://github.com/ministryofjustice/modernisation-platform/blob/main/cidr-allocation.md) are documented here. Predefining these IP ranges makes it easier for us to onboard new applications.

## Example diff --git a/source/concepts/sdlc/core-workflow.html.md.erb b/source/concepts/sdlc/core-workflow.html.md.erb index 392d9a4b6..65b405e80 100644 --- a/source/concepts/sdlc/core-workflow.html.md.erb +++ b/source/concepts/sdlc/core-workflow.html.md.erb @@ -1,6 +1,6 @@ --- owner_slack: "#modernisation-platform" -title: Core Workflow (CI/CD) +title: Core Workflows (CI/CD) last_reviewed_on: 2024-06-06 review_in: 6 months --- @@ -22,9 +22,9 @@ We use [trunk base development](https://www.atlassian.com/continuous-delivery/co ## CI/CD -For our CI/CD pipelines we use [GitHub actions](https://docs.github.com/en/actions). +For our CI/CD pipelines we use [GitHub Actions](https://docs.github.com/en/actions). -Workflow files are found [here](https://github.com/ministryofjustice/modernisation-platform/tree/main/.github/workflows) +Workflow files are [found here](https://github.com/ministryofjustice/modernisation-platform/tree/main/.github/workflows) ### Terraform workflows @@ -56,12 +56,12 @@ These workflows create the new files needed for new member accounts. | Name | Description | Workflow file | | ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------- | -| Publish | Publishes pages in `source` to our GitHub pages user guidance | `publish-gh-pages.yml` | +| Publish | Publishes pages in the `source/` directory to our GitHub pages user guidance | `publish-gh-pages.yml` | | Format Code | Formats code once a week and raises a PR for review | `format-code.yml` | | Labeler | Adds labels to our pull requests depending on which folders are changed | `labeler.yml` | | OPA Policies | Runs Open Policy Agent validation tests against our json files | `opa-policies.yml` | | Scheduled Baseline | Runs the baseline code across all accounts ensuring security baselines are still in place | `scheduled-baseline.yml` | -| Terraform Static Code Analysis | Runs Trivy, Checkov and TFlint against all Terraform code | `terraform-static-analysis.yml` | +| Terraform Static Code Analysis | Runs [Trivy](https://github.com/aquasecurity/trivy), [Checkov](https://github.com/bridgecrewio/checkov?tab=readme-ov-file) and [TFlint](https://github.com/terraform-linters/tflint?tab=readme-ov-file#tflint) against all Terraform code | `terraform-static-analysis.yml` | | Generate Dependabot File | Generates a new dependabot file to add any newly added Terraform folders | `generate-dependabot-file.yml` | | Add issues to project | On new modernisation-platform repository issue creation adds the new issue to the Modernisation Platform project | `add-issues-to-project.yml` | | Terraform Documentation | Generates Terraform module documentation | `documentation.yml` | diff --git a/source/concepts/sdlc/repositories.html.md.erb b/source/concepts/sdlc/repositories.html.md.erb index 8937e63e3..cdfcf0e4a 100644 --- a/source/concepts/sdlc/repositories.html.md.erb +++ b/source/concepts/sdlc/repositories.html.md.erb @@ -24,13 +24,13 @@ There are two main repositories for the Modernisation Platform: [modernisation-platform](https://github.com/ministryofjustice/modernisation-platform) -This contains our core infrastructure, Architecture Decision Record (ADR), user guidance, user environment creation and networking definitions and core workflows. 
+This contains our core infrastructure, [Architecture Decision Records (ADRs)](https://github.com/ministryofjustice/modernisation-platform/tree/main/architecture-decision-record#modernisation-platform---architecture-decisions), user guidance, user environment creation and networking definitions and core workflows. ### modernisation-platform-environments [modernisation-platform-environments](https://github.com/ministryofjustice/modernisation-platform-environments) -This contains user environment resources and workflows. We have this repo so that users can easily find and amend there infrastructure from one place, and we can clearly separate user and core code and workflows. +This contains user environment resources and workflows. This repo exists so users can easily find and amend their infrastructure in one place, and the Modernisation Platform team can clearly separate user and core code and workflows. ### Other repositories @@ -47,7 +47,7 @@ Repository for creating pipelines to build AMIs for use on the platform. Repository for configuration management code used on the platform. We also have repositories for [Terraform modules](https://www.terraform.io/docs/language/modules/develop/index.html), these modules are an easy way to build up your infrastructure with sensible defaults and we would encourage you to use these where possible. -You can see a full list of these repositories on the main [modernisation-platform](https://github.com/ministryofjustice/modernisation-platform) readme. +You can see a full list of these repositories in the [modernisation-platform readme](https://github.com/ministryofjustice/modernisation-platform?tab=readme-ov-file#other-useful-repositories). ## Diagram diff --git a/source/concepts/sdlc/user-workflow.html.md.erb b/source/concepts/sdlc/user-workflow.html.md.erb index e405ec33d..ad28c2662 100644 --- a/source/concepts/sdlc/user-workflow.html.md.erb +++ b/source/concepts/sdlc/user-workflow.html.md.erb @@ -1,6 +1,6 @@ --- owner_slack: "#modernisation-platform" -title: User Workflow (CI/CD) +title: User Workflows (CI/CD) last_reviewed_on: 2024-03-19 review_in: 6 months --- @@ -50,7 +50,7 @@ Environment workflow: We use [GitHub Environments](https://docs.github.com/en/actions/reference/environments) to create a manual approval gate before each deployment. This gives you chance to review the Terraform plan before approving the deployment. The GitHub environments are automatically created by the [git-create-environments-script.sh](https://github.com/ministryofjustice/modernisation-platform/blob/main/scripts/git-create-environments.sh) using your GitHub team as the reviewer, as part of the initial account creation process. -After approving, the infrastructure is deployed with the [Terraform apply](https://www.terraform.io/docs/cli/commands/apply.html) command. +After approving, the infrastructure is deployed with the [`terraform apply`](https://www.terraform.io/docs/cli/commands/apply.html) command. ### Permissions @@ -59,8 +59,8 @@ The code and AWS account is protected in a few different ways which work togethe #### CODEOWNERS Your GitHub team will be assigned as a [codeowner](https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-repository-on-github/about-code-owners) for your application folder, so someone in your team or the Modernisation Platform team will be required to review any pull requests before they can be merged. -Your Github team will be able to approve a majority of pull requests. 
Approvals from the modernisation platform team members are only required in cases where a change might impact other customers or core platform components such as any files in the `.github` folder, as well as `providers.tf`, `backend.tf` and `networking.auto.tfvars.json` files in your application directory. For specific rules please see the [CODEOWNERS](https://github.com/ministryofjustice/modernisation-platform-environments/blob/main/.github/CODEOWNERS) file. -For modernisation-platform-ami-builds (https://github.com/ministryofjustice/modernisation-platform-ami-builds) you will need to add your team to CODEOWNERS when you add a team to the list. See the file for examples. +Members of your Github team will be able to approve a majority of pull requests. Approvals from the modernisation platform team members are only required in cases where a change might impact other customers or core platform components such as any files in the `.github/` directory, as well as `providers.tf`, `backend.tf` and `networking.auto.tfvars.json` files in your application directory. For specific rules please see the [`CODEOWNERS`](https://github.com/ministryofjustice/modernisation-platform-environments/blob/main/.github/CODEOWNERS) file. +For [modernisation-platform-ami-builds](https://github.com/ministryofjustice/modernisation-platform-ami-builds) you will need to add your team to `CODEOWNERS` when you add a team to the list. See the file for examples. #### GitHub Environments @@ -68,28 +68,29 @@ GitHub environments prevents any workflow being deployed to your environments un #### AWS IAM (Identity and Access Management) -Each account has an IAM role `MemberInfrastructureAccess` which allows the GitHub workflows to create resources in each AWS account. +Each account has a [`MemberInfrastructureAccess`](https://github.com/ministryofjustice/modernisation-platform/blob/ab3eb5a6a8e6253afc9db794362034ba4ae1cd94/terraform/environments/bootstrap/member-bootstrap/iam.tf#L266) IAM role that allows the GitHub workflows to create resources in each AWS account. Each application workflow will use the role for the relevant account, ensuring one account can't create resources in another. -For modifying DNS entries a role `dns--` is used, allowing only changes the DNS hosted zone for your business unit. +For modifying DNS entries, a `dns--` role is used. It is only capable of changes to the DNS hosted zone for your particular business unit. -For creating certificates, a role `modify-dns-records` in the core-network-services account is used to create DNS validation records. +For creating certificates, a `modify-dns-records` role in the core-network-services account is used to create DNS validation records. -#### AWS SCPs (Service Control Policies) +#### AWS SCPs -SCPs prevent certain actions from running which may have a detrimental effect on the platform. +[Service Control Policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) prevent certain actions from running which may have a detrimental effect on the platform. -These are applied at a higher OU (organisational unit) level and are inherited by the application OUs. +These are applied at a higher Organisational Unit (OU) level and are inherited by the application OUs. ## Deploying Applications -Legacy applications often use "Click Ops" to make application deployments. Whilst this is still possible on the Modernisation Platform, we encourage and can help people to build application deployment pipelines. 
+Legacy applications often use "Click Ops" to make application deployments (manual configuration through the AWS console). Whilst this is still possible on the Modernisation Platform, we encourage and can help people to build application deployment pipelines. -To allow automated access to your AWS account we provide a "OIDC CI/CD (Continuous integration / continuous delivery)" role - `modernisation-platform-oidc-cicd`. +To allow automated access to your AWS account we provide an "OIDC CI/CD (Continuous integration / continuous delivery)" role - `modernisation-platform-oidc-cicd`. -This user has restricted access to your AWS account, with the minimum permissions needed to do things like push a new image to an ECR repo. +This role has restricted access to your AWS account, with the minimum permissions needed to do things like push a new image to an ECR repo. The application pipeline is the responsibility of the application owner, details on using the role are detailed [here](../../user-guide/deploying-your-application.html) + ## Diagram ![Member CI/CD](../../images/member-ci-cd.png) diff --git a/source/index.html.md.erb b/source/index.html.md.erb index 7b647e90e..4cd5a69a2 100644 --- a/source/index.html.md.erb +++ b/source/index.html.md.erb @@ -83,8 +83,8 @@ This documentation is for anyone interested in the Modernisation Platform and it ### Software Development Lifecycle - [Repositories](concepts/sdlc/repositories.html) -- [Core Workflow (CI/CD)](concepts/sdlc/core-workflow.html) -- [User Workflow (CI/CD)](concepts/sdlc/user-workflow.html) +- [Core Workflows (CI/CD)](concepts/sdlc/core-workflow.html) +- [User Workflows (CI/CD)](concepts/sdlc/user-workflow.html) - [Testing Strategy](concepts/sdlc/testing-strategy.html) - [Sandbox and testing environments](concepts/sdlc/sandbox-testing-environments.html) - [Patching](concepts/sdlc/patching.html) From dc115988c0ee8aa3e04ae730872a3e215bf28b4b Mon Sep 17 00:00:00 2001 From: Tom Webber Date: Fri, 28 Jun 2024 13:28:34 +0100 Subject: [PATCH 4/9] update `last_reviewed_on` for pages --- source/concepts/environments/auto-nuke.html.md.erb | 2 +- source/concepts/environments/instance-scheduling.html.md.erb | 2 +- source/concepts/environments/problem-and-solution.html.md.erb | 2 +- source/concepts/environments/security.html.md.erb | 2 +- source/concepts/environments/single-sign-on.html.md.erb | 2 +- source/concepts/networking/certificate-services.html.md.erb | 2 +- source/concepts/networking/dns.html.md.erb | 2 +- .../networking/instance-access-and-bastions.html.md.erb | 2 +- source/concepts/networking/networking-approach.html.md.erb | 2 +- source/concepts/networking/subnet-allocation.html.md.erb | 2 +- source/concepts/sdlc/core-workflow.html.md.erb | 2 +- source/concepts/sdlc/repositories.html.md.erb | 2 +- source/concepts/sdlc/user-workflow.html.md.erb | 2 +- source/index.html.md.erb | 2 +- 14 files changed, 14 insertions(+), 14 deletions(-) diff --git a/source/concepts/environments/auto-nuke.html.md.erb b/source/concepts/environments/auto-nuke.html.md.erb index b1ee86b9a..b082e0f06 100644 --- a/source/concepts/environments/auto-nuke.html.md.erb +++ b/source/concepts/environments/auto-nuke.html.md.erb @@ -1,7 +1,7 @@ --- owner_slack: "#modernisation-platform" title: Auto-nuke and redeploy development environments on weekly basis -last_reviewed_on: 2024-06-20 +last_reviewed_on: 2024-06-28 review_in: 6 months --- diff --git a/source/concepts/environments/instance-scheduling.html.md.erb b/source/concepts/environments/instance-scheduling.html.md.erb index 
10d75c6aa..a7abe15ce 100644 --- a/source/concepts/environments/instance-scheduling.html.md.erb +++ b/source/concepts/environments/instance-scheduling.html.md.erb @@ -1,7 +1,7 @@ --- owner_slack: "#modernisation-platform" title: Instance Scheduling - automatically stop non-production instances overnight -last_reviewed_on: 2024-03-15 +last_reviewed_on: 2024-06-28 review_in: 6 months --- diff --git a/source/concepts/environments/problem-and-solution.html.md.erb b/source/concepts/environments/problem-and-solution.html.md.erb index 03ff36d64..c58fbd096 100644 --- a/source/concepts/environments/problem-and-solution.html.md.erb +++ b/source/concepts/environments/problem-and-solution.html.md.erb @@ -1,7 +1,7 @@ --- owner_slack: "#modernisation-platform" title: Environments (AWS accounts) -last_reviewed_on: 2024-02-21 +last_reviewed_on: 2024-06-28 review_in: 12 months --- diff --git a/source/concepts/environments/security.html.md.erb b/source/concepts/environments/security.html.md.erb index 7e47a2021..8b48eb7a4 100644 --- a/source/concepts/environments/security.html.md.erb +++ b/source/concepts/environments/security.html.md.erb @@ -1,7 +1,7 @@ --- owner_slack: "#modernisation-platform" title: Environments (AWS accounts) security -last_reviewed_on: 2024-06-14 +last_reviewed_on: 2024-06-28 review_in: 6 months --- diff --git a/source/concepts/environments/single-sign-on.html.md.erb b/source/concepts/environments/single-sign-on.html.md.erb index e82235484..bf575d04d 100644 --- a/source/concepts/environments/single-sign-on.html.md.erb +++ b/source/concepts/environments/single-sign-on.html.md.erb @@ -1,7 +1,7 @@ --- owner_slack: "#modernisation-platform" title: Single Sign On -last_reviewed_on: 2024-05-21 +last_reviewed_on: 2024-06-28 review_in: 6 months --- diff --git a/source/concepts/networking/certificate-services.html.md.erb b/source/concepts/networking/certificate-services.html.md.erb index d45e19778..210879fa5 100644 --- a/source/concepts/networking/certificate-services.html.md.erb +++ b/source/concepts/networking/certificate-services.html.md.erb @@ -1,7 +1,7 @@ --- owner_slack: "#modernisation-platform" title: Certificate Services -last_reviewed_on: 2024-05-31 +last_reviewed_on: 2024-06-28 review_in: 6 months --- diff --git a/source/concepts/networking/dns.html.md.erb b/source/concepts/networking/dns.html.md.erb index 514dacafb..6ebbade47 100644 --- a/source/concepts/networking/dns.html.md.erb +++ b/source/concepts/networking/dns.html.md.erb @@ -1,7 +1,7 @@ --- owner_slack: "#modernisation-platform" title: DNS -last_reviewed_on: 2024-01-31 +last_reviewed_on: 2024-06-28 review_in: 6 months --- diff --git a/source/concepts/networking/instance-access-and-bastions.html.md.erb b/source/concepts/networking/instance-access-and-bastions.html.md.erb index 6d2335a2e..3ec84f817 100644 --- a/source/concepts/networking/instance-access-and-bastions.html.md.erb +++ b/source/concepts/networking/instance-access-and-bastions.html.md.erb @@ -1,7 +1,7 @@ --- owner_slack: "#modernisation-platform" title: Instance Access and Bastions -last_reviewed_on: 2024-05-31 +last_reviewed_on: 2024-06-28 review_in: 6 months --- diff --git a/source/concepts/networking/networking-approach.html.md.erb b/source/concepts/networking/networking-approach.html.md.erb index f8428e895..6e79d0b37 100644 --- a/source/concepts/networking/networking-approach.html.md.erb +++ b/source/concepts/networking/networking-approach.html.md.erb @@ -1,7 +1,7 @@ --- owner_slack: "#modernisation-platform" title: Networking Approach -last_reviewed_on: 
2024-06-13 +last_reviewed_on: 2024-06-28 review_in: 6 months --- diff --git a/source/concepts/networking/subnet-allocation.html.md.erb b/source/concepts/networking/subnet-allocation.html.md.erb index 5a1b98741..ba759fb81 100644 --- a/source/concepts/networking/subnet-allocation.html.md.erb +++ b/source/concepts/networking/subnet-allocation.html.md.erb @@ -1,7 +1,7 @@ --- owner_slack: "#modernisation-platform" title: Subnet Allocation -last_reviewed_on: 2024-06-13 +last_reviewed_on: 2024-06-28 review_in: 6 months --- diff --git a/source/concepts/sdlc/core-workflow.html.md.erb b/source/concepts/sdlc/core-workflow.html.md.erb index 65b405e80..736074dae 100644 --- a/source/concepts/sdlc/core-workflow.html.md.erb +++ b/source/concepts/sdlc/core-workflow.html.md.erb @@ -1,7 +1,7 @@ --- owner_slack: "#modernisation-platform" title: Core Workflows (CI/CD) -last_reviewed_on: 2024-06-06 +last_reviewed_on: 2024-06-28 review_in: 6 months --- diff --git a/source/concepts/sdlc/repositories.html.md.erb b/source/concepts/sdlc/repositories.html.md.erb index cdfcf0e4a..c7860e9ea 100644 --- a/source/concepts/sdlc/repositories.html.md.erb +++ b/source/concepts/sdlc/repositories.html.md.erb @@ -1,7 +1,7 @@ --- owner_slack: "#modernisation-platform" title: Repositories -last_reviewed_on: 2024-06-13 +last_reviewed_on: 2024-06-28 review_in: 6 months --- diff --git a/source/concepts/sdlc/user-workflow.html.md.erb b/source/concepts/sdlc/user-workflow.html.md.erb index ad28c2662..54247185e 100644 --- a/source/concepts/sdlc/user-workflow.html.md.erb +++ b/source/concepts/sdlc/user-workflow.html.md.erb @@ -1,7 +1,7 @@ --- owner_slack: "#modernisation-platform" title: User Workflows (CI/CD) -last_reviewed_on: 2024-03-19 +last_reviewed_on: 2024-06-28 review_in: 6 months --- diff --git a/source/index.html.md.erb b/source/index.html.md.erb index 4cd5a69a2..504b481a5 100644 --- a/source/index.html.md.erb +++ b/source/index.html.md.erb @@ -1,7 +1,7 @@ --- owner_slack: "#modernisation-platform" title: Modernisation Platform -last_reviewed_on: 2024-04-22 +last_reviewed_on: 2024-06-28 review_in: 6 months weight: 0 --- From c8ed5cc7b01876b2d9b330681eb872103d044d00 Mon Sep 17 00:00:00 2001 From: Tom Webber Date: Fri, 28 Jun 2024 15:15:35 +0100 Subject: [PATCH 5/9] add lychee config to exclude internal repos update lychee command to use config toml --- .github/workflows/gh-pages-test-links.yml | 2 +- config/lychee.toml | 46 +++++++++++++++++++++++ 2 files changed, 47 insertions(+), 1 deletion(-) create mode 100644 config/lychee.toml diff --git a/.github/workflows/gh-pages-test-links.yml b/.github/workflows/gh-pages-test-links.yml index c7419f764..9eccf729f 100644 --- a/.github/workflows/gh-pages-test-links.yml +++ b/.github/workflows/gh-pages-test-links.yml @@ -24,5 +24,5 @@ env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} with: - args: --verbose --no-progress './**/*.md' './**/*.html' './**/*.erb' --exclude-loopback --accept 403,200,429 + args: --verbose --no-progress './**/*.md' './**/*.html' './**/*.erb' --config config/lychee.toml fail: true \ No newline at end of file diff --git a/config/lychee.toml b/config/lychee.toml new file mode 100644 index 000000000..c136c8ecc --- /dev/null +++ b/config/lychee.toml @@ -0,0 +1,46 @@ +############################# Display ############################# + +# Verbose program output +# Accepts log level: "error", "warn", "info", "debug", "trace" +verbose = "info" + +# Don't show interactive progress bar while checking links. 
+no_progress = true + +############################# Cache ############################### + +# Enable link caching. This can be helpful to avoid checking the same links on +# multiple runs. +cache = true + +# Discard all cached requests older than this duration. +max_cache_age = "2d" + +############################# Requests ############################ + +# User agent to send with each request. +user_agent = "curl/7.83.1" + +# Website timeout from connect to response finished. +timeout = 2 + +# Minimum wait time in seconds between retries of failed requests. +retry_wait_time = 2 + +# Comma-separated list of accepted status codes for valid links. +# Supported values are: +accept = ["200", "401", "403", "429"] +############################# Exclusions ########################## + +# Exclude URLs and mail addresses from checking (supports regex). +exclude = [ + '^https://github\.com/ministryofjustice/[\w-]+/settings/.*', + '^https://github\.com/ministryofjustice/modernisation-platform-security', + '^https://github\.com/ministryofjustice/deployment-tgw', + '^https://moj-digital-tools.pagerduty.com', +] + +# Exclude all private IPs from checking. +# Equivalent to setting `exclude_private`, `exclude_link_local`, and +# `exclude_loopback` to true. +exclude_all_private = true From 86cfdf75fac56a53eb887657a97eecce11ec474a Mon Sep 17 00:00:00 2001 From: Tom Webber Date: Fri, 28 Jun 2024 15:16:02 +0100 Subject: [PATCH 6/9] update broken links --- .../0006-use-a-multi-account-strategy-for-applications.md | 2 +- .../0009-use-secrets-manager-for-secrets.md | 2 +- source/concepts/environments/security.html.md.erb | 2 +- source/runbooks/accessing-aws-accounts.html.md.erb | 2 +- source/runbooks/deleting-an-environment.html.md.erb | 4 ++-- .../creating-automated-terraform-documentation.html.md.erb | 4 ++-- source/user-guide/creating-environments.html.md.erb | 2 +- source/user-guide/how-to-configure-dns.html.md.erb | 6 +++--- terraform/environments/README.md | 2 +- terraform/github/README.md | 2 +- terraform/modernisation-platform-account/README.md | 2 +- 11 files changed, 15 insertions(+), 15 deletions(-) diff --git a/architecture-decision-record/0006-use-a-multi-account-strategy-for-applications.md b/architecture-decision-record/0006-use-a-multi-account-strategy-for-applications.md index 5f7327114..09742f4b1 100644 --- a/architecture-decision-record/0006-use-a-multi-account-strategy-for-applications.md +++ b/architecture-decision-record/0006-use-a-multi-account-strategy-for-applications.md @@ -12,7 +12,7 @@ In the Modernisation Platform, we want to reduce the blast radius and increase t ## Decision -We've decided to use a multi-account strategy, split by application. We have a complete write-up as part of our [environments concept](https://user-guide.modernisation-platform.service.justice.gov.uk/concepts/environments/). +We've decided to use a multi-account strategy, split by application. We have a complete write-up as part of our [environments concept](https://user-guide.modernisation-platform.service.justice.gov.uk/#environments-aws-accounts).
## Consequences diff --git a/architecture-decision-record/0009-use-secrets-manager-for-secrets.md b/architecture-decision-record/0009-use-secrets-manager-for-secrets.md index 20ec5637c..8e5defe29 100644 --- a/architecture-decision-record/0009-use-secrets-manager-for-secrets.md +++ b/architecture-decision-record/0009-use-secrets-manager-for-secrets.md @@ -16,7 +16,7 @@ There are also other well known industry solutions such as [HashiCorp Vault](htt We've decided to use [Secrets Manager](https://aws.amazon.com/secrets-manager/) for our secrets storage. -Parameter store can be used to store non secret parameters if needed for environment specific configuration, but the first choice should be using an app_variables.json like [here](https://github.com/ministryofjustice/modernisation-platform-environments/blob/main/terraform/environments/sprinkler/app_variables.json) +Parameter Store can be used to store non-secret parameters if needed for environment-specific configuration, but the first choice should be to use an [`application_variables.json` file such as this](https://github.com/ministryofjustice/modernisation-platform-environments/blob/main/terraform/environments/sprinkler/application_variables.json). ## Consequences diff --git a/source/concepts/environments/security.html.md.erb b/source/concepts/environments/security.html.md.erb index 8b48eb7a4..115ad0035 100644 --- a/source/concepts/environments/security.html.md.erb +++ b/source/concepts/environments/security.html.md.erb @@ -39,7 +39,7 @@ We can see an overview of compliance across the Modernisation Platform and we wi ## Regional restrictions -We restrict the regional usage of accounts that sit within the Modernisation Platform. We use a [Service Control Policy (SCP)](https://github.com/ministryofjustice/aws-root-account/blob/main/terraform/organizations-service-control-policies.tf#L40) to do this. +We restrict the regional usage of accounts that sit within the Modernisation Platform. We use a [Service Control Policy (SCP)](https://github.com/ministryofjustice/aws-root-account/blob/1ec842ec7b1356898bbca9cdb55f7dc64a9b6643/management-account/terraform/organizations-policy-service-control.tf#L40) to do this. In accordance with the [Security Guidance](https://ministryofjustice.github.io/security-guidance/baseline-aws-accounts/#regions), you should only use EU AWS regions. diff --git a/source/runbooks/accessing-aws-accounts.html.md.erb b/source/runbooks/accessing-aws-accounts.html.md.erb index 424506f60..11b7f6967 100644 --- a/source/runbooks/accessing-aws-accounts.html.md.erb +++ b/source/runbooks/accessing-aws-accounts.html.md.erb @@ -33,7 +33,7 @@ access key and secret key for programmatic access. _NB. Superuser access is maintained for emergencies. In most use cases SSO access is preferred._ -Using a web browser, a user with a superuser account can navigate to the [AWS console](https://console.aws.amazon.con) and +Using a web browser, a user with a superuser account can navigate to the [AWS console](https://console.aws.amazon.com/) and log into the Modernisation Platform with their *firstname.lastname-superadmin* account. From here the user can assume an IAM role to escalate their privileges by clicking the *username @ account-id* dropdown and selecting *Switch Role*.
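As an illustration of the split that the ADR-0009 hunk above describes (Secrets Manager for secret values, an `application_variables.json` file for non-secret, environment-specific settings), here is a minimal Terraform sketch. The file name matches the linked example, but the keys and the secret name are assumptions rather than the platform's exact schema:

```terraform
# Non-secret, environment-specific settings read from a JSON file kept in version control.
# The keys shown here are illustrative only.
locals {
  application_data = jsondecode(file("${path.module}/application_variables.json"))
  instance_type    = local.application_data.accounts["development"].instance_type
}

# Secret values belong in AWS Secrets Manager, not in the JSON file or in Terraform variables.
resource "aws_secretsmanager_secret" "app_db_password" {
  name = "example-app/db-password" # hypothetical secret name
}
```

The secret value itself would be populated separately (for example with an `aws_secretsmanager_secret_version`, or by hand in the console), keeping it out of the committed JSON file and out of plain-text variables.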
diff --git a/source/runbooks/deleting-an-environment.html.md.erb b/source/runbooks/deleting-an-environment.html.md.erb index 775bd0f87..35956a9b5 100644 --- a/source/runbooks/deleting-an-environment.html.md.erb +++ b/source/runbooks/deleting-an-environment.html.md.erb @@ -43,8 +43,8 @@ They are all triggered in sequence by the `delete-accounts.sh` script. To use the scripts, follow these steps: -1. Navigate to the [account-deletion folder](https://github.com/ministryofjustice/modernisation-platform/scripts/account-deletion) and create a `config.txt` file within it -1. Open the [example-config.txt file](https://github.com/ministryofjustice/modernisation-platform/scripts/account-deletion/example-config.txt), copy its contents, and paste them into your newly created `config.txt` file. +1. Navigate to the [account-deletion folder](https://github.com/ministryofjustice/modernisation-platform/tree/main/scripts/account-deletion) and create a `config.txt` file within it +1. Open the [example-config.txt file](https://github.com/ministryofjustice/modernisation-platform/tree/main/scripts/account-deletion/example-config.txt), copy its contents, and paste them into your newly created `config.txt` file. 1. Modify your `config.txt` file to include variables specific to your AWS account deletion task. 1. Open your terminal and ensure your current working directory is `modernisation-platform/scripts/account-deletion`. 1. If required make the script executable by running the command: `chmod +x delete-accounts.sh` diff --git a/source/user-guide/creating-automated-terraform-documentation.html.md.erb b/source/user-guide/creating-automated-terraform-documentation.html.md.erb index 574c7bc6a..c3223198e 100644 --- a/source/user-guide/creating-automated-terraform-documentation.html.md.erb +++ b/source/user-guide/creating-automated-terraform-documentation.html.md.erb @@ -18,12 +18,12 @@ review_in: 6 months ## Overview -The Modernisation Platform use a [Documentation GitHub Workflow](https://github.com/ministryofjustice/modernisation-platform-terraform-loadbalancer/blob/initial-commit/.github/workflows/documentation.yml) to automate Terraform documentation. +The Modernisation Platform use a [Documentation GitHub Workflow](https://github.com/ministryofjustice/modernisation-platform/blob/main/.github/workflows/documentation.yml) to automate Terraform documentation. The workflow automatically searches then creates and populates tables using the variables, providers, modules, versions used in your Terraform code. ## Configuration to run documentation workflow -1) Ensure you have a copy of [documentation.yml](https://github.com/ministryofjustice/modernisation-platform-terraform-loadbalancer/blob/initial-commit/.github/workflows/documentation.yml) in your `.github/workflows/` directory. +1) Ensure you have a copy of [documentation.yml](https://github.com/ministryofjustice/modernisation-platform/blob/main/.github/workflows/documentation.yml) in your `.github/workflows/` directory. 2) Create a top-level file called README.md in your repository. diff --git a/source/user-guide/creating-environments.html.md.erb b/source/user-guide/creating-environments.html.md.erb index 0e6450644..0f33836fa 100644 --- a/source/user-guide/creating-environments.html.md.erb +++ b/source/user-guide/creating-environments.html.md.erb @@ -56,7 +56,7 @@ If required you can separate the permissions so that a different GitHub team is ### Access This is the level of access for the GitHub team to the Modernisation Platform. 
-A full list of permissions for the different access levels can be found [here](https://github.com/ministryofjustice/modernisation-platform/blob/main/terraform/environments/bootstrap/delegate-access/policies.tf) +A full list of permissions for the different access levels can be found [here](https://github.com/ministryofjustice/modernisation-platform/blob/main/terraform/environments/bootstrap/single-sign-on/policies.tf) ([previously within bootstrap/delegate-access](https://github.com/ministryofjustice/modernisation-platform/pull/6244)) The options are as follows: #### view-only diff --git a/source/user-guide/how-to-configure-dns.html.md.erb b/source/user-guide/how-to-configure-dns.html.md.erb index 28e16c140..f698f99c6 100644 --- a/source/user-guide/how-to-configure-dns.html.md.erb +++ b/source/user-guide/how-to-configure-dns.html.md.erb @@ -22,7 +22,7 @@ In order for users to access public facing services with a URL (Uniform Resource This will enable users to securely access services over HTTPS (Hypertext Transfer Protocol Secure). -There are two main ways to use certificates for DNS on the Modernisation Platform; [ACM](https://aws.amazon.com/certificate-manager/) (Amazon Certificate Manager) public certificates, and [Gandi.net](https://operations-engineering.service.justice.gov.uk/documentation/services/SSL-certificate-management.html) certificates imported into ACM. +There are two main ways to use certificates for DNS on the Modernisation Platform; [ACM](https://aws.amazon.com/certificate-manager/) (Amazon Certificate Manager) public certificates, and [Gandi.net](https://user-guide.operations-engineering.service.justice.gov.uk/documentation/services/sslcertmanage.html) certificates imported into ACM. Unless there is a good reason, ACM public certificates should be used as they are automatically managed and renewed. Gandi.net certificates cost more and require manual renewal. @@ -46,7 +46,7 @@ The following resources need to be created in different AWS accounts (see diagra Production environments should use a `service.justice.gov.uk` domain as per MoJ [naming domains](https://technical-guidance.service.justice.gov.uk/documentation/standards/naming-domains.html#naming-domains) guidance. -The Modernisation Platform will need to request the delegation of the application domain (eg `my-application.service.justice.gov.uk`) from the [Operations Engineering](https://operations-engineering.service.justice.gov.uk/documentation/services/domain-management.html#domain-management) team via an email to the [domains mailbox](mailto:domains@digital.justice.gov.uk) with the details of the records to be added to the `service.justice.gov.uk` domain and to discuss if the subdomain name meets the MoJ naming domains standard. Please contact the Modernisation Platform team in the [#ask-modernisation-platform](https://mojdt.slack.com/archives/C01A7QK5VM1) Slack channel to do this. +The Modernisation Platform will need to request the delegation of the application domain (eg `my-application.service.justice.gov.uk`) from the [Operations Engineering](https://user-guide.operations-engineering.service.justice.gov.uk/documentation/services/domainmgt.html) team via an email to the [domains mailbox](mailto:domains@digital.justice.gov.uk) with the details of the records to be added to the `service.justice.gov.uk` domain and to discuss if the subdomain name meets the MoJ naming domains standard. 
Please contact the Modernisation Platform team in the [#ask-modernisation-platform](https://mojdt.slack.com/archives/C01A7QK5VM1) Slack channel to do this. The Modernisation Platform team will then create a [hosted zone](https://github.com/ministryofjustice/modernisation-platform/blob/main/terraform/environments/core-network-services/route53.tf#L5) for your domain. Once this has been completed the following resources need to be created in different AWS accounts (see diagram above), the table details the resources and the AWS provider required for them. @@ -69,7 +69,7 @@ Non production environments should use ACM public certificate as detailed above ### Production environments -The Modernisation Platform will need to request the delegation of the application domain (eg `my-application.service.justice.gov.uk`) from the [Operations Engineering](https://operations-engineering.service.justice.gov.uk/documentation/services/domain-management.html#domain-management) team, along with a new Gandi.net certificate. Please contact the Modernisation Platform team in the [#ask-modernisation-platform](https://mojdt.slack.com/archives/C01A7QK5VM1) Slack channel to do this; to send an email to the [domains mailbox](mailto:domains@digital.justice.gov.uk) with the details of the records to be added to the `service.justice.gov.uk` domain and to discuss if the subdomain name meets the MoJ naming domains standard. +The Modernisation Platform will need to request the delegation of the application domain (eg `my-application.service.justice.gov.uk`) from the [Operations Engineering](https://user-guide.operations-engineering.service.justice.gov.uk/documentation/services/domainmgt.html) team, along with a new Gandi.net certificate. Please contact the Modernisation Platform team in the [#ask-modernisation-platform](https://mojdt.slack.com/archives/C01A7QK5VM1) Slack channel to do this; to send an email to the [domains mailbox](mailto:domains@digital.justice.gov.uk) with the details of the records to be added to the `service.justice.gov.uk` domain and to discuss if the subdomain name meets the MoJ naming domains standard. The Modernisation Platform team will then create a [hosted zone](https://github.com/ministryofjustice/modernisation-platform/blob/main/terraform/environments/core-network-services/route53.tf#L5) for your domain and a validation record for the Gandi.net certificate. Once this has been completed the following resources need to be created in different AWS accounts (see diagram above), the table details the resources and the AWS provider required for them. diff --git a/terraform/environments/README.md b/terraform/environments/README.md index 22e8007c3..62a379ff0 100644 --- a/terraform/environments/README.md +++ b/terraform/environments/README.md @@ -4,7 +4,7 @@ This directory creates and maintains organisational units, their accounts, and t ## Bootstrapping accounts -The subdirectory [bootstrap](bootstrap) enables bootstrapping resources in all accounts that are part of the Modernisation Platform, such as an IAM role for cross-account access and security implementations. It utilises terraform workspaces and has an [automated script](../../scripts/create-accounts.sh) to create accounts and bootstrap them as part of our CI/CD pipeline. +The subdirectory [bootstrap](bootstrap) enables bootstrapping resources in all accounts that are part of the Modernisation Platform, such as an IAM role for cross-account access and security implementations. 
It utilises terraform workspaces and has a [`new-environment.yml` workflow](https://github.com/ministryofjustice/modernisation-platform/blob/main/.github/workflows/new-environment.yml) to create accounts and bootstrap them as part of our CI/CD pipeline. ## State management diff --git a/terraform/github/README.md b/terraform/github/README.md index d5c9d0376..642926252 100644 --- a/terraform/github/README.md +++ b/terraform/github/README.md @@ -11,7 +11,7 @@ The state is stored in S3, as defined in [backend.tf](backend.tf). ## How to create a new repository for a terraform module -Say that we want to create a new repository for a terraform module named `bastion-linux`. We need to add the following section to [main.tf](main.tf) +Say that we want to create a new repository for a terraform module named `bastion-linux`. We need to add the following section to [repositories.tf](repositories.tf) ```terraform module "terraform-module-bastion-linux" { diff --git a/terraform/modernisation-platform-account/README.md b/terraform/modernisation-platform-account/README.md index 66bca4ebd..4097fa719 100644 --- a/terraform/modernisation-platform-account/README.md +++ b/terraform/modernisation-platform-account/README.md @@ -1,4 +1,4 @@ # Modernisation Platform - Modernisation Platform account These are resources that are implemented within the Modernisation Platform account. -For example, this includes an [backend definition](main.tf) that can be used for other Modernisation Platform-managed implementations. +For example, this includes a [backend definition](backend.tf) that can be used for other Modernisation Platform-managed implementations. From 40a10ccde66dba34f01e6f79bd73058a47e6ea3e Mon Sep 17 00:00:00 2001 From: Tom Webber Date: Fri, 28 Jun 2024 15:16:24 +0100 Subject: [PATCH 7/9] re-write environments bootstraping for secure baselines --- .../environments/bootstrap/secure-baselines/README.md | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/terraform/environments/bootstrap/secure-baselines/README.md b/terraform/environments/bootstrap/secure-baselines/README.md index cf30e31eb..e046887f5 100644 --- a/terraform/environments/bootstrap/secure-baselines/README.md +++ b/terraform/environments/bootstrap/secure-baselines/README.md @@ -1,6 +1,13 @@ # Modernisation Platform: environments bootstrapping -This directory creates and maintains common resources that should be available in every account. It uses `terraform workspace` and replaces the [previous process for bootstrapping accounts](https://github.com/ministryofjustice/modernisation-platform/tree/5a8fd5c6/terraform/environments). +This directory configures use of [the `secure-baselines` Terraform module](https://github.com/ministryofjustice/modernisation-platform-terraform-baselines?tab=readme-ov-file#modernisation-platform-terraform-baselines-module), which creates and maintains common resources that should be available in every account. 
+ +The `secure-baselines` module: +> _enables and configures the MoJ Security Guidance baseline for AWS accounts, alongside some extra reasonable security, identity and compliance services_ + +New environments can be created via the [new-environment.yml](https://github.com/ministryofjustice/modernisation-platform/blob/main/.github/workflows/new-environment.yml) workflow, which includes [a `secure-baselines` step](https://github.com/ministryofjustice/modernisation-platform/blob/main/.github/workflows/new-environment.yml#L258) that uses `terraform workspace` commands to call [a `setup-baseline.sh` bash script](https://github.com/ministryofjustice/modernisation-platform/blob/main/scripts/setup-baseline.sh). + +The process here replaces the [previous process for bootstrapping accounts](https://github.com/ministryofjustice/modernisation-platform/tree/5a8fd5c6/terraform/environments). You need to run Terraform commands in this directory using a Ministry of Justice AWS organisational root IAM user that has permissions to `sts:AssumeRole`. It utilises the `OrganizationAccountAccessRole` created by AWS Organizations to assume a role in an account and bootstrap it with the following: @@ -34,4 +41,4 @@ terraform apply ## Running this in CI/CD -This repository includes a [script to automate this](https://github.com/ministryofjustice/modernisation-platform/tree/main/scripts/create-accounts.sh) for all new workspaces. +New environments can be created via the [new-environment.yml](https://github.com/ministryofjustice/modernisation-platform/blob/main/.github/workflows/new-environment.yml) workflow. From 40a10ccde66dba34f01e6f79bd73058a47e6ea3e Mon Sep 17 00:00:00 2001 From: Tom Webber Date: Fri, 28 Jun 2024 15:24:33 +0100 Subject: [PATCH 8/9] add cron schedule for checking broken links across repo --- .../workflows/{gh-pages-test-links.yml => test-url-links.yml} | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) rename .github/workflows/{gh-pages-test-links.yml => test-url-links.yml} (90%) diff --git a/.github/workflows/gh-pages-test-links.yml b/.github/workflows/test-url-links.yml index 9eccf729f..3363e7d41 100644 --- a/.github/workflows/gh-pages-test-links.yml +++ b/.github/workflows/test-url-links.yml @@ -1,10 +1,12 @@ --- - name: check user guide links + name: check for broken links on: pull_request: paths: - "source/**" + schedule: + - cron: '3 7 * * TUE' permissions: {} jobs: From c1cae44cc12c991f37b56f3976e710746b135bfc Mon Sep 17 00:00:00 2001 From: Tom Webber Date: Mon, 1 Jul 2024 11:20:23 +0100 Subject: [PATCH 9/9] remove line-specific links to commit hashes --- source/concepts/environments/instance-scheduling.html.md.erb | 2 +- source/concepts/environments/security.html.md.erb | 2 +- source/concepts/sdlc/user-workflow.html.md.erb | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/source/concepts/environments/instance-scheduling.html.md.erb b/source/concepts/environments/instance-scheduling.html.md.erb index a7abe15ce..27b494d48 100644 --- a/source/concepts/environments/instance-scheduling.html.md.erb +++ b/source/concepts/environments/instance-scheduling.html.md.erb @@ -20,7 +20,7 @@ review_in: 6 months This feature automatically stops non-production EC2 and RDS instances overnight and over each weekend, in order to save on AWS costs and reduce environmental impact.
Stopped instances don't incur charges, but Elastic IP addresses or EBS volumes attached to those instances do. -The instances will be automatically [stopped each weekday at 21:00](https://github.com/ministryofjustice/modernisation-platform/blob/19a7e48b366cfbb9d24c30f4620b12df886baa8e/terraform/environments/core-shared-services/instance-scheduler-lambda-function.tf#L35) and [started at 06:00 each weekday](https://github.com/ministryofjustice/modernisation-platform/blob/19a7e48b366cfbb9d24c30f4620b12df886baa8e/terraform/environments/core-shared-services/instance-scheduler-lambda-function.tf#L61) morning, which includes shut down on Friday night and startup on Monday morning. By default, this includes every EC2 and RDS instance in every non-production environment (development, test, preproduction) without requiring any configuration from the end user. Users can customise the default behaviour by attaching the `instance-scheduling` tag to EC2 and RDS instances with one of the following values: +The instances will be [automatically stopped each weekday at 21:00 and started at 06:00 each weekday morning](https://github.com/ministryofjustice/modernisation-platform/blob/main/terraform/environments/core-shared-services/instance-scheduler-lambda-function.tf), which includes shut down on Friday night and startup on Monday morning. By default, this includes every EC2 and RDS instance in every non-production environment (development, test, preproduction) without requiring any configuration from the end user. Users can customise the default behaviour by attaching the `instance-scheduling` tag to EC2 and RDS instances with one of the following values: - `default` - Automatically stop the instance overnight and start it in the morning. Absence of the `instance-scheduling` tag will have the same effect. - `skip-scheduling` - Skip auto scheduling for the instance diff --git a/source/concepts/environments/security.html.md.erb b/source/concepts/environments/security.html.md.erb index 115ad0035..bd4ba668b 100644 --- a/source/concepts/environments/security.html.md.erb +++ b/source/concepts/environments/security.html.md.erb @@ -39,7 +39,7 @@ We can see an overview of compliance across the Modernisation Platform and we wi ## Regional restrictions -We restrict the regional usage of accounts that sit within the Modernisation Platform. We use a [Service Control Policy (SCP)](https://github.com/ministryofjustice/aws-root-account/blob/1ec842ec7b1356898bbca9cdb55f7dc64a9b6643/management-account/terraform/organizations-policy-service-control.tf#L40) to do this. +We restrict the regional usage of accounts that sit within the Modernisation Platform. We use a [Service Control Policy (SCP)](https://github.com/ministryofjustice/aws-root-account/blob/main/management-account/terraform/organizations-policy-service-control.tf) to do this. In accordance with the [Security Guidance](https://ministryofjustice.github.io/security-guidance/baseline-aws-accounts/#regions), you should only use EU AWS regions. 
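As an illustration of the regional restriction referred to in the hunk above, a region-limiting SCP is essentially a `Deny` statement conditioned on `aws:RequestedRegion`. The sketch below shows the general shape only; it is not the actual policy held in the aws-root-account repository, and the exempted global services and the region list are assumptions:

```terraform
# Illustrative sketch only - the real SCP is maintained in the aws-root-account repository.
resource "aws_organizations_policy" "restrict_to_eu_regions" {
  name = "restrict-to-eu-regions" # hypothetical policy name
  type = "SERVICE_CONTROL_POLICY"

  content = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid    = "DenyActionsOutsideEU"
      Effect = "Deny"
      # Global services are usually exempted because their API calls resolve to us-east-1.
      NotAction = ["iam:*", "organizations:*", "route53:*", "cloudfront:*", "support:*"]
      Resource  = "*"
      Condition = {
        StringNotEquals = {
          "aws:RequestedRegion" = ["eu-west-1", "eu-west-2", "eu-west-3", "eu-central-1"]
        }
      }
    }]
  })
}
```

Attaching a policy like this to a parent Organisational Unit (for example with `aws_organizations_policy_attachment`) is what gives the inheritance behaviour described in the user workflow page: member accounts pick the restriction up from the OU above them.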
diff --git a/source/concepts/sdlc/user-workflow.html.md.erb b/source/concepts/sdlc/user-workflow.html.md.erb index 54247185e..64746df63 100644 --- a/source/concepts/sdlc/user-workflow.html.md.erb +++ b/source/concepts/sdlc/user-workflow.html.md.erb @@ -68,7 +68,7 @@ GitHub environments prevents any workflow being deployed to your environments un #### AWS IAM (Identity and Access Management) -Each account has a [`MemberInfrastructureAccess`](https://github.com/ministryofjustice/modernisation-platform/blob/ab3eb5a6a8e6253afc9db794362034ba4ae1cd94/terraform/environments/bootstrap/member-bootstrap/iam.tf#L266) IAM role that allows the GitHub workflows to create resources in each AWS account. +Each account has a [`MemberInfrastructureAccess`](https://github.com/ministryofjustice/modernisation-platform/blob/main/terraform/environments/bootstrap/member-bootstrap/iam.tf) IAM role that allows the GitHub workflows to create resources in each AWS account. Each application workflow will use the role for the relevant account, ensuring one account can't create resources in another. For modifying DNS entries, a `dns--` role is used. It is only capable of changes to the DNS hosted zone for your particular business unit.
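As an illustration of how the `MemberInfrastructureAccess` role in the hunk above is consumed, an AWS provider assumes it with an `assume_role` block along the lines below. This is a sketch only: the account ID is a placeholder, and the `providers.tf` in each application directory may be wired differently:

```terraform
provider "aws" {
  region = "eu-west-2"

  assume_role {
    # Placeholder account ID. Each workflow targets the role in its own member
    # account, which is why one account cannot create resources in another.
    role_arn = "arn:aws:iam::111122223333:role/MemberInfrastructureAccess"
  }
}
```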