AWS Perspective is a solution that makes it easy for customers to create visualizations of their AWS Cloud workloads. Perspective maintains an inventory of AWS resources across accounts and Regions, derives relationships between them, and makes them available via its console. Customers can build detailed architecture diagrams of their workloads that they can customize and share, with the knowledge that the data is always up to date.
To find out more about AWS Perspective, visit our AWS Solutions page.
To see our roadmap and vote on the features you would like to see implemented, please go to our project board.
Region | Launch | Template Link |
---|---|---|
US East (N. Virginia) (us-east-1) | Launch | Link |
US East (Ohio) (us-east-2) | Launch | Link |
US West (Oregon) (us-west-2) | Launch | Link |
Asia Pacific (Mumbai) (ap-south-1) | Launch | Link |
Asia Pacific (Seoul) (ap-northeast-2) | Launch | Link |
Asia Pacific (Singapore) (ap-southeast-1) | Launch | Link |
Asia Pacific (Sydney) (ap-southeast-2) | Launch | Link |
Asia Pacific (Tokyo) (ap-northeast-1) | Launch | Link |
Canada (Central) (ca-central-1) | Launch | Link |
Europe (Ireland) (eu-west-1) | Launch | Link |
Europe (London) (eu-west-2) | Launch | Link |
Europe (Frankfurt) (eu-central-1) | Launch | Link |
If you have an idea for a feature you would like to see implemented, please create an issue here and use the 'enhancement' label. This will be available on the project board for others to vote on.
The solution is available as an AWS CloudFormation template and should take about 30 minutes to deploy. See the deployment guide on the AWS Solutions site for one-click deployment instructions, and the cost overview guide to learn about costs.
The solution provides a web user interface.
See the user guide on the AWS Solutions site to learn how to use the solution.
The AWS CloudFormation template deploys six components to maintain an inventory of AWS resources and display the relationships between them. Amazon CloudFront delivers content for the Web UI component. The UI is written in React and hosted in an Amazon Simple Storage Service (Amazon S3) bucket (WebUIBucket), and Lambda@Edge appends secure headers to each request. AWS Amplify aids the API integration and provides an abstraction layer for communicating with Amazon S3 (AmplifyStorageBucket) to manage storage actions. Amazon Cognito authenticates users, and the Amazon API Gateway Client API (PerspectiveWebRestAPI) provides access to relationship data. AWS AppSync provides the settings API used to manage configuration, such as the accounts and Regions made discoverable to the solution.
An Amazon Virtual Private Cloud (Amazon VPC) contains the data and discovery components. The Lambda function (GremlinFunction) processes requests from the API Gateway Client API (PerspectiveWebRestAPI) and queries Amazon Neptune and the cost component to gather the requested data. The API Gateway Server API (ServerGremlinAPI) receives requests from the AWS Fargate task in the discovery component. The Lambda function (ElasticsearchFunction) processes incoming requests and communicates with the Amazon Elasticsearch Service (ES) cluster. Amazon Elastic Container Service (Amazon ECS) runs an AWS Fargate task using a Docker container image downloaded from Amazon Elastic Container Registry (Amazon ECR). AWS Config gathers data about the resources running in each account and Region that is made discoverable to AWS Perspective, and AWS API calls gather data about resources that AWS Config does not currently support.
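As an aside, if you want to see the kind of resource data AWS Config records in an account and Region before making it discoverable, you can run an AWS Config advanced query from the CLI. This is illustrative only and not part of the solution; the resource type queried below is just an example.
# Illustrative check: list the EC2 instances AWS Config has recorded in this account/Region.
aws configservice select-resource-config \
  --expression "SELECT resourceId, resourceType, awsRegion WHERE resourceType = 'AWS::EC2::Instance'"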
An AWS Cost and Usage Report is published to the Amazon S3 bucket (PerspectiveCostBucket). When a new object is uploaded to the bucket, it triggers the Cost Parser Lambda function. An Amazon DynamoDB global table stores the data for the cost component. Lastly, AWS CodePipeline and AWS CodeBuild build the container image from the code hosted in the Amazon S3 bucket (DiscoveryBucket).
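If cost data does not show up in the UI, one simple check (again illustrative, not part of the solution) is to confirm that a Cost and Usage Report definition exists and delivers to the expected bucket. Note that the Cost and Usage Report API is only available in us-east-1.
# Lists Cost and Usage Report definitions, including the S3 bucket they deliver to.
aws cur describe-report-definitions --region us-east-1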
cd ./deployment
chmod +x ./run-unit-tests.sh
./run-unit-tests.sh
cd ./deployment
chmod +x ./build-s3-dist.sh
./build-s3-dist.sh solutions-bucket aws-perspective v1.0.0 image-tag
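The arguments to build-s3-dist.sh are, in order: the base name of the S3 bucket that will hold the distribution assets, the solution name, the version, and the tag to apply to the Docker image. They correspond to DIST_OUTPUT_BUCKET, SOLUTION_NAME, VERSION and IMAGE_TAG in the local deployment script below.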
When you have made changes to the code, you can build it locally and upload the deployment artefacts to Amazon S3 by running the following bash script.
- AWS CLI installed.
- The CLI configured with credentials or a profile that allows the following (a quick verification command is shown after this list):
- S3 Bucket creation
- S3 Object creation
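For example, you can confirm which account and identity the CLI will use before running the script:
# Prints the AWS account and ARN associated with your current credentials or profile.
aws sts get-caller-identity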
- Create a shell script in the root project directory.
touch local-deploy-script.sh
- Copy the contents below, paste them into local-deploy-script.sh, and save.
#!/usr/bin/env bash
set -e
# The Region you wish to deploy to.
AWS_REGION=<aws-region>
# The S3 Bucket name to be created to store your deployment artefacts
DIST_OUTPUT_BUCKET=<s3-bucket>
# A name for your test solution
SOLUTION_NAME=<solution-name>
# A version number for this test release e.g. vX.Y.Z
VERSION=<version>
# Tag that will be given to Docker image.
IMAGE_TAG=<tag>
if aws s3api head-bucket --bucket "${DIST_OUTPUT_BUCKET}" 2>/dev/null;
then
echo "${DIST_OUTPUT_BUCKET} bucket exists and you own it, so not creating it"
else
echo "creating bucket in region ${AWS_REGION} with name ${DIST_OUTPUT_BUCKET}"
aws s3 mb s3://${DIST_OUTPUT_BUCKET} --region ${AWS_REGION}
fi
if aws s3api head-bucket --bucket "${DIST_OUTPUT_BUCKET}-${AWS_REGION}" 2>/dev/null;
then
echo "${DIST_OUTPUT_BUCKET}-${AWS_REGION} bucket exists and you own it, so not creating it"
else
echo "creating bucket in region ${AWS_REGION} with name ${DIST_OUTPUT_BUCKET}-${AWS_REGION}"
aws s3 mb s3://${DIST_OUTPUT_BUCKET}-${AWS_REGION} --region ${AWS_REGION}
fi
cd deployment
./build-s3-dist.sh $DIST_OUTPUT_BUCKET $SOLUTION_NAME $VERSION $IMAGE_TAG
aws cloudformation package --template-file "global-s3-assets/perspective-setup.template" --s3-bucket "$DIST_OUTPUT_BUCKET" --s3-prefix "${SOLUTION_NAME}/${VERSION}" --output-template-file packaged.template
aws s3 cp packaged.template "s3://${DIST_OUTPUT_BUCKET}-${AWS_REGION}/${SOLUTION_NAME}/${VERSION}/aws-perspective.template"
aws s3 cp global-s3-assets s3://${DIST_OUTPUT_BUCKET}-${AWS_REGION}/${SOLUTION_NAME}/${VERSION}/ --recursive --acl bucket-owner-full-control
aws s3 cp regional-s3-assets s3://${DIST_OUTPUT_BUCKET}-${AWS_REGION}/${SOLUTION_NAME}/${VERSION}/ --recursive --acl bucket-owner-full-control
echo "You can now deploy using this template URL https://${DIST_OUTPUT_BUCKET}-${AWS_REGION}.s3.${AWS_REGION}.amazonaws.com/${SOLUTION_NAME}/${VERSION}/aws-perspective.template"
- Make the script executable
chmod +x ./local-deploy-script.sh
- Run the script
./local-deploy-script.sh
The script will:
- Create S3 buckets to store the deployment artefacts.
- Run the build.
- Deploy the artefacts to your chosen S3 bucket.
Once you have the deployment artefacts in S3, you can deploy aws-perspective.template from the CloudFormation console: provide the S3 URL of the template and CloudFormation will do the rest.
Parameters required by the template (an example CLI deployment follows this list):
- Stack Name - The name given to the deployment stack e.g. aws-perspective
- AdminUserEmailAddress - The email address to receive login credentials at.
- AlreadyHaveConfigSetup - Yes/No depending on whether AWS Config is already configured in the deployment Region.
- CreateElasticsearchServiceRole - Yes/No depending on whether you already have this service-role created. You can check in the IAM console to see if it is provisioned.
- OptOutOfSendingAnonymousUsageMetrics - Yes/No depending on whether you are happy to send anonymous usage metrics back to AWS.
- CreateNeptuneReplica - Yes/No depending on whether you want a read replica created for Amazon Neptune. Note that this will increase the cost of running the solution.
- NeptuneInstanceClass - Select from a range of instance types that will be provisioned for the Amazon Neptune database. Note that the instance type selected affects the cost of running the solution.
Note - You will need to deploy in the same account and Region as the S3 bucket that holds the deployment artefacts.
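If you prefer the CLI to the console, a deployment might look like the following sketch. The parameter keys match the list above, but the example values, the Neptune instance class shown, and the --capabilities flags are assumptions; check the template for the exact allowed values before running it.
aws cloudformation create-stack \
  --stack-name aws-perspective \
  --template-url "https://<s3-bucket>-<aws-region>.s3.<aws-region>.amazonaws.com/<solution-name>/<version>/aws-perspective.template" \
  --parameters \
    ParameterKey=AdminUserEmailAddress,ParameterValue=<your-email-address> \
    ParameterKey=AlreadyHaveConfigSetup,ParameterValue=No \
    ParameterKey=CreateElasticsearchServiceRole,ParameterValue=Yes \
    ParameterKey=OptOutOfSendingAnonymousUsageMetrics,ParameterValue=No \
    ParameterKey=CreateNeptuneReplica,ParameterValue=No \
    ParameterKey=NeptuneInstanceClass,ParameterValue=db.r5.large \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \
  --region <aws-region>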
|-deployment/
|-build-s3-dist.sh [ shell script for packaging distribution assets ]
|-run-unit-tests.sh [ shell script for executing unit tests ]
|-perspective-setup.yaml [ the main CloudFormation deployment template ]
|-source/
|-frontend/ [ the frontend ui code ]
|-backend/ [ the backend code ]
|-discovery/ [ the code for the discovery process ]
|-functions/ [ the code for the Lambda functions ]
|-cfn/ [ the CloudFormation templates that deploy aws-perspective ]
The Web API requires a JWT in the request's Authorization header. You can find your bearer token as follows (an alternative CLI approach is sketched after these steps):
- Log in to the AWS Perspective UI in Google Chrome.
- Right-click anywhere on the screen.
- Click Inspect.
- Click the Network tab.
- Find the resources request.
- Select it and go to Headers.
- Locate the Authorization header.
- Copy its contents.
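Alternatively, if the USER_PASSWORD_AUTH flow is enabled on the Amazon Cognito app client created by the solution (an assumption; the browser method above is the documented route), you can fetch a token from the CLI. Whether the API expects the ID token or the access token may vary, so treat this as a sketch.
# Hypothetical: obtain a JWT from Cognito. Find the app client ID in the Cognito console.
aws cognito-idp initiate-auth \
  --auth-flow USER_PASSWORD_AUTH \
  --client-id <cognito-app-client-id> \
  --auth-parameters USERNAME=<admin-email>,PASSWORD=<password> \
  --query 'AuthenticationResult.IdToken' \
  --output text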
curl --location --request POST 'https://<your-api-gateway-id>.execute-api.<deployment-region>.amazonaws.com/Prod/resources' \
--header 'Authorization: Bearer <your-token>' \
--header 'Content-Type: application/json' \
--data-raw '{"command":"getAllResources","data":{}}'
You will receive all the resources that have been discovered, with a subset of data about each one. You will also receive a metadata object that breaks down the resource types discovered and the resource counts for each, for every account and Region that is discoverable to AWS Perspective.
curl --location --request GET 'https://<your-api-gateway-id>.execute-api.<deployment-region>.amazonaws.com/Prod/resources?command=linkedNodesHierarchy&id=<node-id>' \
--header 'Authorization: Bearer <your-token>'
You will receive an array of nodes that have a relationship with the node id used in the request.
This takes a JSON representation of the architecture diagram, converts it to mxGraph format, and opens it in a DrawIO tab.
curl --location --request POST 'https://<your-api-gateway-id>.execute-api.<deployment-region>.amazonaws.com/Prod/resources' \
--header 'Authorization: Bearer <your-token>' \
--header 'Content-Type: text/plain' \
--data-raw '{"elements":{"nodes":[], "edges": []'}}
You will receive a URL that, when opened, launches DrawIO in the browser and displays your graph.
Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at
http://www.apache.org/licenses/
or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License.