Nebuly Platform (AWS)

Terraform module for provisioning Nebuly Platform resources on AWS.

Available on Terraform Registry.

Quickstart

⚠️ Prerequisite: before using this Terraform module, ensure that you have your Nebuly credentials ready. These credentials are necessary to activate your installation and must be provided via the nebuly_credentials input variable.
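
For reference, nebuly_credentials is an object with a client_id and a client_secret. A hypothetical .tfvars snippet (placeholder values) looks like this:

nebuly_credentials = {
  client_id     = "<your-client-id>"
  client_secret = "<your-client-secret>"
}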

To get started with Nebuly installation on AWS, you can follow the steps below.

These instructions will guide you through the installation using Nebuly's default standard configuration with the Nebuly Helm Chart.

For specific configurations or assistance, reach out to the Nebuly Slack channel or email support@nebuly.ai.

1. Terraform setup

Import Nebuly into your Terraform root module, provide the necessary variables, and apply the changes.

For configuration examples, you can refer to the Examples.
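
Below is a minimal, hypothetical root-module sketch. Every value is a placeholder: replace the networking, domain, OpenAI, and credential settings with your own, pin the module version to a release published on the Terraform Registry, and double-check the source address on the registry page for this repository.

module "nebuly_platform" {
  source = "nebuly-ai/nebuly-platform/aws"
  # version = "x.y.z"  # pin to a release from the Terraform Registry

  region          = "us-east-1"
  resource_prefix = "nebuly"

  # Pre-existing networking resources (placeholders).
  vpc_id     = "vpc-0123456789abcdef0"
  subnet_ids = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]
  security_group = {
    id   = "sg-0123456789abcdef0"
    name = "my-security-group"
  }
  allowed_inbound_cidr_blocks = {
    all = "0.0.0.0/0"
  }

  # EKS settings (placeholders).
  eks_kubernetes_version             = "1.28"
  eks_cluster_endpoint_public_access = true

  # Platform settings (placeholders).
  platform_domain = "nebuly.example.com"
  nebuly_credentials = {
    client_id     = "<your-client-id>"
    client_secret = "<your-client-secret>"
  }

  # OpenAI settings (placeholders).
  openai_api_key              = "<your-openai-api-key>"
  openai_endpoint             = "<your-openai-endpoint>"
  openai_gpt4_deployment_name = "<your-gpt-4-deployment-name>"
}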

Once the Terraform changes are applied, proceed with the next steps to deploy Nebuly on the provisioned Elastic Kubernetes Service (EKS) cluster.

Required IAM Policies

The following are the IAM policies required by the IAM user (or role) used to run the Terraform scripts (a sketch for attaching them is shown after the custom policy below):

  • AmazonRDSFullAccess
  • AmazonS3FullAccess
  • AmazonEKSClusterPolicy
  • AmazonEKSServicePolicy
  • SecretsManagerReadWrite
  • CloudWatchFullAccess
  • AmazonVPCFullAccess
Required EKS Custom Policy

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateLaunchTemplate",
        "ec2:DeleteLaunchTemplate",
        "ec2:DescribeLaunchTemplates",
        "ec2:DescribeLaunchTemplateVersions",
        "ec2:RunInstances",
        "kms:TagResource",
        "eks:*",
        "kms:CreateKey",
        "kms:CreateAlias",
        "kms:DeleteAlias",
        "iam:CreateRole",
        "iam:DeleteRole",
        "iam:CreatePolicy",
        "iam:DeletePolicy",
        "iam:AttachRolePolicy",
        "iam:PutRolePolicy",
        "iam:GetRolePolicy",
        "iam:DetachRolePolicy",
        "iam:DeleteRolePolicy"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
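
If you manage the Terraform user's permissions with Terraform itself (from a separately privileged context), the policies above can be attached as in the hypothetical sketch below. The IAM user name, policy name, and policy file path are placeholders; you can equally attach the policies through the AWS console.

# Hypothetical sketch: attach the required managed policies and the custom
# EKS policy (saved as eks-custom-policy.json) to the IAM user running Terraform.
locals {
  terraform_user = "terraform" # placeholder IAM user name

  managed_policy_arns = [
    "arn:aws:iam::aws:policy/AmazonRDSFullAccess",
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy",
    "arn:aws:iam::aws:policy/AmazonEKSServicePolicy",
    "arn:aws:iam::aws:policy/SecretsManagerReadWrite",
    "arn:aws:iam::aws:policy/CloudWatchFullAccess",
    "arn:aws:iam::aws:policy/AmazonVPCFullAccess",
  ]
}

resource "aws_iam_policy" "eks_custom" {
  name   = "nebuly-eks-custom" # placeholder policy name
  policy = file("${path.module}/eks-custom-policy.json")
}

resource "aws_iam_user_policy_attachment" "managed" {
  for_each   = toset(local.managed_policy_arns)
  user       = local.terraform_user
  policy_arn = each.value
}

resource "aws_iam_user_policy_attachment" "eks_custom" {
  user       = local.terraform_user
  policy_arn = aws_iam_policy.eks_custom.arn
}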

2. Connect to the EKS cluster

Prerequisites: install the AWS CLI.

  • Fetch the command for retrieving the cluster credentials from the module outputs:

    terraform output eks_cluster_get_credentials

  • Run the command returned by the previous step.

3. Create image pull secret

Create a Kubernetes Image Pull Secret for authenticating with your Docker registry and pulling the Nebuly Docker images. The auto-generated Helm values use the name defined in the k8s_image_pull_secret_name input variable for the Image Pull Secret; if you prefer a custom name, update either the Terraform variable or your Helm values accordingly.
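
You can create the secret with kubectl (kubectl create secret docker-registry) or, if you prefer to keep everything in Terraform, with the Kubernetes provider as in the hypothetical sketch below. The registry host and credential variables are placeholders for the values supplied by Nebuly, and the target namespace must exist before the secret is created (the nebuly namespace is created in step 5).

# Hypothetical sketch: create the Image Pull Secret with the Terraform
# Kubernetes provider. Registry host, credentials, and namespace are placeholders.
variable "registry_username" {
  type = string
}

variable "registry_password" {
  type      = string
  sensitive = true
}

resource "kubernetes_secret_v1" "nebuly_docker_pull" {
  metadata {
    name      = "nebuly-docker-pull" # must match k8s_image_pull_secret_name
    namespace = "nebuly"
  }

  type = "kubernetes.io/dockerconfigjson"

  data = {
    ".dockerconfigjson" = jsonencode({
      auths = {
        "ghcr.io" = { # placeholder registry host
          username = var.registry_username
          password = var.registry_password
          auth     = base64encode("${var.registry_username}:${var.registry_password}")
        }
      }
    })
  }
}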

4. Bootstrap EKS cluster

Retrieve the auto-generated values from the Terraform outputs and save them to a file named values-bootstrap.yaml:

terraform output helm_values_bootstrap

Install the bootstrap Helm chart to set up all the dependencies required for installing the Nebuly Platform Helm chart on EKS.

Refer to the chart documentation for all the configuration details.

helm install oci://ghcr.io/nebuly-ai/helm-charts/bootstrap-aws \
  --namespace nebuly-bootstrap \
  --generate-name \
  --create-namespace \
  -f values-bootstrap.yaml

5. Create Secret Provider Class

Create a Secret Provider Class to allow EKS to fetch credentials from the secrets provisioned in AWS Secrets Manager.

  • Get the Secret Provider Class YAML definition from the Terraform module outputs:

    terraform output secret_provider_class
  • Copy the output of the command into a file named secret-provider-class.yaml.

  • Run the following commands to create the nebuly namespace and apply the Secret Provider Class definition:

    kubectl create ns nebuly
    kubectl apply --server-side -f secret-provider-class.yaml

6. Install nebuly-platform chart

Retrieve the auto-generated values from the Terraform outputs and save them to a file named values.yaml:

terraform output helm_values

Install the Nebuly Platform Helm chart. Refer to the chart documentation for detailed configuration options.

helm install <your-release-name> oci://ghcr.io/nebuly-ai/helm-charts/nebuly-platform \
  --namespace nebuly \
  -f values.yaml \
  --timeout 30m 

ℹ️ During the initial installation of the chart, all required Nebuly LLMs are uploaded to your model registry. This process can take approximately 5 minutes. If the helm install command appears to be stuck, don't worry: it's simply waiting for the upload to finish.

7. Access Nebuly

Retrieve the external Load Balancer DNS name to access the Nebuly Platform:

kubectl get svc -n nebuly-bootstrap -o jsonpath='{range .items[?(@.status.loadBalancer.ingress)]}{.status.loadBalancer.ingress[0].hostname}{"\n"}{end}'

You can then register a DNS CNAME record pointing to the Load Balancer DNS name to access Nebuly via the custom domain you provided in the input variable platform_domain.
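
If your DNS zone is hosted in Route 53 and managed with Terraform, the CNAME record can be created as in the hypothetical sketch below. The hosted zone ID and load balancer DNS name variables are placeholders; you can just as well create the record manually in your DNS provider.

# Hypothetical sketch: CNAME record pointing the platform domain at the
# load balancer DNS name retrieved with the kubectl command above.
resource "aws_route53_record" "nebuly_platform" {
  zone_id = var.hosted_zone_id             # placeholder: your Route 53 hosted zone
  name    = var.platform_domain            # the same domain passed to the module
  type    = "CNAME"
  ttl     = 300
  records = [var.load_balancer_dns_name]   # placeholder: DNS name of the load balancer
}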

Examples

You can find examples of code that uses this Terraform module in the examples directory.

Providers

Name Version
aws ~>5.45
random ~>3.6
tls ~>4.0
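
As a reference, the version constraints above correspond to a required_providers block like the following in your root module (a sketch; pin the exact versions that match your setup):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.45"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.6"
    }
    tls = {
      source  = "hashicorp/tls"
      version = "~> 4.0"
    }
  }
}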

Outputs

Name Description
admin_user_password The password of the initial admin user of the platform.
admin_user_password_secret_name The name of the secret containing the password of the initial admin user of the platform.
analytics_db Details of the analytics DB hosted on an RDS instance.
analytics_db_credentials Credentials for connecting with the analytics DB.
auth_db Details of the auth DB hosted on an RDS instance.
auth_db_credentials Credentials for connecting with the auth DB.
auth_jwt_key_secret_name The name of the secret containing the SSL Key used for generating JWTs.
eks_cluster_endpoint Endpoint for EKS control plane.
eks_cluster_get_credentials Command for getting the credentials for accessing the Kubernetes Cluster.
eks_cluster_name Kubernetes Cluster Name.
eks_cluster_security_group_id Security group ids attached to the cluster control plane.
eks_iam_role_arn The ARN of the EKS IAM role.
eks_load_balancer_security_group The security group linked with the EKS load balancer.
eks_service_accounts The service accounts that will be able to assume the EKS IAM Role.
helm_values The values.yaml file for installing Nebuly with Helm.

The default standard configuration is used, which uses Nginx as ingress controller and exposes the application to the Internet. This configuration can be customized according to specific needs.
helm_values_bootstrap The bootstrap values file (values-bootstrap.yaml) for installing the Nebuly AWS Bootstrap chart with Helm.
openai_api_key_secret_name The name of the secret storing the OpenAI API Key.
s3_bucket_ai_models The details of the bucket used as model registry for storing the AI models.
secret_provider_class The secret-provider-class.yaml file that lets Kubernetes reference the secrets stored in AWS Secrets Manager.

Inputs

Name Description Type Default Required
allowed_inbound_cidr_blocks The CIDR blocks from which inbound connections will be accepted. Use 0.0.0.0/0 to allow all inbound traffic. map(string) n/a yes
create_security_group_rules If True, add to the specified security group the rules required to allow connectivity between the provisioned services across all the specified subnets. bool false no
eks_cloudwatch_observability_enabled If true, install the CloudWatch Observability add-on. The add-on installs the CloudWatch agent to send infrastructure metrics from the cluster, installs Fluent Bit to send container logs, and enables CloudWatch Application Signals to send application performance telemetry. bool false no
eks_cluster_admin_arns List of ARNs that will be granted the role of Cluster Admin over EKS set(string) [] no
eks_cluster_endpoint_public_access Indicates whether or not the Amazon EKS public API server endpoint is enabled. bool n/a yes
eks_enable_cluster_creator_admin_permissions Indicates whether or not to add the cluster creator (the identity used by Terraform) as an administrator via access entry. bool true no
eks_kubernetes_version Specify which Kubernetes release to use. string n/a yes
eks_managed_node_group_defaults The default settings of the EKS managed node groups.
object({
  ami_type              = string
  block_device_mappings = map(any)
})
{
  "ami_type": "AL2_x86_64",
  "block_device_mappings": {
    "sdc": {
      "device_name": "/dev/xvda",
      "ebs": {
        "delete_on_termination": true,
        "encrypted": true,
        "volume_size": 128,
        "volume_type": "gp3"
      }
    }
  }
}
no
eks_managed_node_groups The managed node groups of the EKS cluster.
map(object({
  instance_types             = set(string)
  min_size                   = number
  max_size                   = number
  desired_size               = optional(number)
  subnet_ids                 = optional(list(string), null)
  ami_type                   = optional(string, "AL2_x86_64")
  disk_size_gb               = optional(number, 128)
  tags                       = optional(map(string), {})
  use_custom_launch_template = optional(bool, true)
  labels                     = optional(map(string), {})
  taints = optional(set(object({
    key : string
    value : string
    effect : string
  })), [])
}))
{
  "gpu-a10": {
    "ami_type": "AL2_x86_64_GPU",
    "desired_size": 0,
    "disk_size_gb": 128,
    "instance_types": [
      "g5.12xlarge"
    ],
    "labels": {
      "nebuly.com/accelerator": "nvidia-ampere-a10",
      "nvidia.com/gpu.present": "true"
    },
    "max_size": 1,
    "min_size": 0,
    "tags": {
      "k8s.io/cluster-autoscaler/enabled": "true"
    },
    "taints": [
      {
        "effect": "NO_SCHEDULE",
        "key": "nvidia.com/gpu",
        "value": ""
      }
    ]
  },
  "gpu-t4": {
    "ami_type": "AL2_x86_64_GPU",
    "desired_size": 1,
    "disk_size_gb": 128,
    "instance_types": [
      "g4dn.xlarge"
    ],
    "labels": {
      "nebuly.com/accelerator": "nvidia-tesla-t4",
      "nvidia.com/gpu.present": "true"
    },
    "max_size": 1,
    "min_size": 0,
    "taints": [
      {
        "effect": "NO_SCHEDULE",
        "key": "nvidia.com/gpu",
        "value": ""
      }
    ]
  },
  "workers": {
    "desired_size": 1,
    "instance_types": [
      "r5.xlarge"
    ],
    "max_size": 1,
    "min_size": 1
  }
}
no
eks_service_accounts The service accounts that will be able to assume the EKS IAM Role.
list(object({
  name : string
  namespace : string
}))
[
  {
    "name": "aws-load-balancer-controller",
    "namespace": "kube-system"
  },
  {
    "name": "cluster-autoscaler",
    "namespace": "kube-system"
  },
  {
    "name": "cluster-autoscaler",
    "namespace": "nebuly"
  },
  {
    "name": "cluster-autoscaler",
    "namespace": "nebuly-bootstrap"
  },
  {
    "name": "aws-load-balancer-controller",
    "namespace": "nebuly"
  },
  {
    "name": "nebuly",
    "namespace": "nebuly"
  },
  {
    "name": "nebuly",
    "namespace": "default"
  }
]
no
k8s_image_pull_secret_name The name of the Kubernetes Image Pull Secret to use.
This value will be used to auto-generate the values.yaml file for installing the Nebuly Platform Helm chart.
string "nebuly-docker-pull" no
nebuly_credentials The credentials provided by Nebuly are required for activating your platform installation.
If you haven't received your credentials or have lost them, please contact support@nebuly.ai.
object({
client_id : string
client_secret : string
})
n/a yes
okta_sso Settings for configuring the Okta SSO integration.
object({
issuer : string
client_id : string
client_secret : string
})
null no
openai_api_key The API Key used for authenticating with OpenAI. string n/a yes
openai_endpoint The endpoint of the OpenAI API. string n/a yes
openai_gpt4_deployment_name The name of the deployment to use for the GPT-4 model. string n/a yes
platform_domain The domain on which the deployed Nebuly platform is made accessible. string n/a yes
rds_analytics_instance_type The instance type of the RDS instance hosting the analytics DB. string "db.m7g.xlarge" no
rds_analytics_storage Storage settings of the analytics DB.
object({
  allocated_gb : number
  max_allocated_gb : number
  type : string
  iops : optional(number, null)
})
{
  "allocated_gb": 32,
  "max_allocated_gb": 128,
  "type": "gp3"
}
no
rds_auth_instance_type The instance type of the RDS instance hosting the auth DB. string "db.t4g.small" no
rds_auth_storage Storage settings of the auth DB.
object({
  allocated_gb : number
  max_allocated_gb : number
  type : string
  iops : optional(number, null)
})
{
  "allocated_gb": 20,
  "max_allocated_gb": 32,
  "type": "gp2"
}
no
rds_availability_zone The availability zone of the RDS instances. string null no
rds_backup_retention_period The retention period, in days, of the daily backups. number 14 no
rds_backup_window The daily time range (in UTC) during which automated backups are created, if they are enabled. Example: '09:46-10:16'. Must not overlap with maintenance_window. string "03:00-06:00" no
rds_create_db_subnet_group n/a bool true no
rds_db_username The username to connect with the Postgres RDS databases. string "nebulyadmin" no
rds_deletion_protection If True, enable the deletion protection on the RDS instances. bool true no
rds_maintenance_window The window to perform maintenance in. Syntax: 'ddd:hh24:mi-ddd:hh24:mi'. Eg: 'Mon:00:00-Mon:03:00'. string "Mon:00:00-Mon:03:00" no
rds_multi_availability_zone_enabled If True, provision the RDS instances on multiple availability zones. bool true no
rds_postgres_family The PostgreSQL family to use for the RDS instances. string "postgres16" no
rds_postgres_version The PostgreSQL version to use for the RDS instances. string "16" no
region The region where to provision the resources. string n/a yes
resource_prefix The prefix that will be used for generating resource names. string n/a yes
secrets_suffix The suffix that will be appended to the secrets created in AWS Secrets Manager. Useful to avoid name collisions.

If null, an auto-generated random suffix will be used.
If empty string, no suffix will be used.
string null no
security_group The security group to use.
object({
name = string
id = string
})
n/a yes
subnet_ids The IDs of the subnets to attach to the Platform resources. set(string) n/a yes
tags Common tags that will be applied to all resources. map(string) {} no
vpc_id The ID of the VPC to use. string n/a yes

Resources

  • resource.aws_iam_role_policy_attachment.ai_models__eks_reader (/terraform-docs/main.tf#517)
  • resource.aws_s3_bucket.ai_models (/terraform-docs/main.tf#513)
  • resource.aws_secretsmanager_secret.admin_user_password (/terraform-docs/main.tf#382)
  • resource.aws_secretsmanager_secret.auth_jwt_key (/terraform-docs/main.tf#365)
  • resource.aws_secretsmanager_secret.nebuly_credentials (/terraform-docs/main.tf#473)
  • resource.aws_secretsmanager_secret.okta_sso_credentials (/terraform-docs/main.tf#489)
  • resource.aws_secretsmanager_secret.openai_api_key (/terraform-docs/main.tf#462)
  • resource.aws_secretsmanager_secret.rds_analytics_credentials (/terraform-docs/main.tf#139)
  • resource.aws_secretsmanager_secret.rds_auth_credentials (/terraform-docs/main.tf#228)
  • resource.aws_secretsmanager_secret_version.admin_user_password (/terraform-docs/main.tf#390)
  • resource.aws_secretsmanager_secret_version.auth_jwt_key (/terraform-docs/main.tf#373)
  • resource.aws_secretsmanager_secret_version.nebuly_credentials (/terraform-docs/main.tf#480)
  • resource.aws_secretsmanager_secret_version.okta_sso_credentials (/terraform-docs/main.tf#498)
  • resource.aws_secretsmanager_secret_version.openai_api_key (/terraform-docs/main.tf#469)
  • resource.aws_secretsmanager_secret_version.rds_analytics_password (/terraform-docs/main.tf#146)
  • resource.aws_secretsmanager_secret_version.rds_auth_password (/terraform-docs/main.tf#235)
  • resource.aws_security_group.eks_load_balancer (/terraform-docs/main.tf#398)
  • resource.aws_security_group_rule.allow_all_inbound_within_vpc (/terraform-docs/main.tf#436)
  • resource.aws_security_group_rule.allow_all_outbound_within_vpc (/terraform-docs/main.tf#447)
  • resource.aws_vpc_security_group_ingress_rule.eks_load_balancer_allow_http (/terraform-docs/main.tf#425)
  • resource.aws_vpc_security_group_ingress_rule.eks_load_balancer_allow_https (/terraform-docs/main.tf#416)
  • resource.random_password.admin_user_password (/terraform-docs/main.tf#378)
  • resource.random_password.rds_analytics (/terraform-docs/main.tf#134)
  • resource.random_password.rds_auth (/terraform-docs/main.tf#223)
  • resource.random_string.secrets_suffix (/terraform-docs/main.tf#26)
  • resource.tls_private_key.auth_jwt (/terraform-docs/main.tf#361)
  • data source.aws_partition.current (/terraform-docs/main.tf#19)
  • data source.aws_subnet.subnets (/terraform-docs/main.tf#20)
