TCP to BLOB is a Kubernetes service that provides a TCP endpoint for receiving Azure Orbital Ground Station (AOGS) satellite downlink data and persisting it in Azure BLOB Storage. This doc shows you how to deploy this architecture into Azure using the resources and code in this repository.
- Vnet with subnets including:
  - `pod-subnet`: Where AKS TCP to BLOB instances will listen for TCP connections.
  - `orbital-subnet`: Delegated to Azure Orbital, from which the service can send contact data to the TCP to BLOB endpoint.
- Azure Container Registry.
- AKS cluster.
- Storage account and container for storing raw Orbital contact data.
- TCP to BLOB AKS service that listens for Orbital contact TCP connections and persists the TCP data to Azure BLOB storage.
- Orbital Contact profile configured with the appropriate endpoint and subnet for TCP to BLOB service.
- Azure Portal dashboard providing a temporal view of TCP to BLOB activity and AKS cluster health.
- `server-init`: Server starting up.
- `socket-connect`: Client socket connected to server. (1 per socket)
- `cleanup`: Purge un-needed resources. (`<n>` per socket)
- `socket-data`: Process client data sent to socket. (`<n>` per socket)
- `socket-error`: Error prior to completion. Terminates socket and initiates `socket-close` event. (0 or 1 per socket)
- `socket-close`: All data has been received and written to file if applicable. Attempt uploading to BLOB. (1 per socket)
- `complete`: Final event providing success/failure summary. (1 per socket)
A NodeJS project is defined by `package.json`.
When you run `yarn <cmd>` (or alternatively `npm run <cmd>`), `yarn`/`npm` looks up `<cmd>` in the `scripts` section of `package.json` to determine what scripts are available.
To avoid requiring yarn to be installed globally, `yarn` is installed with `npm` at the root level (`azure-orbital-integration`).
When `npx yarn` is used throughout the examples, it uses the `yarn` installed in `azure-orbital-integration/node_modules`.
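If you are unsure which scripts are available, you can list them before running one (a quick sketch; exact output depends on your npm/yarn version):

```bash
# From the azure-orbital-integration root, list the scripts defined in package.json
npm run        # npm prints the available scripts without running anything
npx yarn run   # yarn classic lists project commands (and may prompt you to pick one)
```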
From the `azure-orbital-integration` project root directory, run:
./install-node-modules.sh && npx yarn build
cd tcp-to-blob && mkdir .env
cp ./deploy/env-template.sh .env/env-<name_prefix>.sh
- Edit your env file as needed. See the "Environment variables" section below.
Required:

- `NAME_PREFIX`: Used to make names for resources to create, e.g. AKS cluster, vnet, etc.

Optional:

- `AZ_LOCATION`: Location where resources will be deployed. Should match spacecraft location. default: "westus2"
- `AZ_RESOURCE_GROUP`: Resource group containing your AKS service.
- `ACR_NAME`: Name of your Azure Container Registry.
- `CONTACT_DATA_STORAGE_ACCT`: Name of storage account where BLOBs will be created (containing data sent to the TCP endpoint).
- `CONTACT_DATA_STORAGE_CONTAINER`: Name of storage container for saving BLOBs. default: "raw-contact-data"
- `RAW_DATA_FILE_PATH`: Path to local file containing sample raw data to be uploaded to a BLOB where it can be used by the raw data canary.
- `RAW_DATA_BLOB_NAME`: Name of BLOB within `$CONTACT_DATA_STORAGE_CONTAINER` which will be streamed by the raw data canary to the TCP to BLOB endpoint. You may either upload this reference data using `npx yarn upload-raw-reference-data`, upload it manually (consider using a `reference-data/` prefix), or schedule a contact and reference the BLOB associated with the results.
- `CONTACT_DATA_STORAGE_CONNECTION_STRING`: (Sensitive) Connection string for contact data storage. Grants `tcp-to-blob` the ability to create the storage container if needed and create/write to BLOBs. This is stored as an AKS secret which is exposed as an environment variable in the `tcp-to-blob` container. You may use either:
  - Storage BLOB connection string: (default) Long-lived credentials for accessing the storage container. This gets populated automatically if `CONTACT_DATA_STORAGE_CONNECTION_STRING` is not already set.
  - SAS connection string: Enables you, or the party to which you are delivering contact data, to specify duration and other fine-grained access characteristics. Consider using this if the data recipient (the team managing/owning the storage account and processing the data) is not the same team as the Orbital subscription owner. Things to consider for SAS:
    - It is the responsibility of the storage account owner to create the SAS since it's not auto-created during TCP to BLOB deployment.
    - The storage account owner must coordinate with the TCP to BLOB AKS cluster owner to rotate the `CONTACT_DATA_STORAGE_CONNECTION_STRING` AKS secret, otherwise TCP to BLOB will fail to write to BLOB storage upon SAS expiration.
- `AKS_NAME`: Name of AKS cluster.
- `AKS_VNET`: default: "${AKS_CLUSTER_NAME}-vnet"
- `AKS_NUM_REPLICAS`: default: 2
- `HOST`: default: "0.0.0.0"
- `PORT`: default: 50111
- `NUM_BLOCK_LOG_FREQUENCY`: default: 4000
- `SOCKET_TIMEOUT_SECONDS`: Seconds of socket inactivity until the socket is destroyed. default: 120
- `AKS_VNET_ADDR_PREFIX`: default: "10.0.0.0/8"
- `AKS_VNET_SUBNET_ADDR_PREFIX`: Subnet for AKS nodes. default: "10.240.0.0/16"
- `LB_IP`: IP address for the internal load balancer Orbital will hit. Should be in the vnet IP range. default: "10.240.11.11"
- `AKS_POD_SUBNET_ADDR_PREFIX`: Subnet for AKS pods. default: "10.241.0.0/16"
- `AKS_ORBITAL_SUBNET_ADDR_PREFIX`: Subnet delegated to Orbital. default: "10.244.0.0/16"
- `CANARY_NUM_LINES`: Number of lines of text the canary will send to TCP to BLOB. default: 65000
- `DEMODULATION_CONFIG`: Raw XML or named modem for the contact profile downlink demodulation configuration. default: "aqua_direct_broadcast"
Recommend creating a `tcp-to-blob/.env/env-${stage}.sh` to set these and re-load the env as needed without risking committing them to version control.
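For example, a minimal env file might look like the following (a sketch only; the names and values are hypothetical placeholders):

```bash
# .env/env-demo.sh (example values, replace with your own)
export NAME_PREFIX="demo"                    # required: prefix used for generated resource names
export AZ_LOCATION="westus2"                 # should match your spacecraft's location
export AZ_RESOURCE_GROUP="${NAME_PREFIX}-rg"
export CONTACT_DATA_STORAGE_CONTAINER="raw-contact-data"
```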
We have prepared a Dockerfile, `tcp-to-blob/deploy/Dockerfile_deployer`, with all prerequisites needed for deploying.
git clone https://github.com/Azure/azure-orbital-integration.git
cd azure-orbital-integration/tcp-to-blob
docker build . -f deploy/Dockerfile_deployer -t orbital-integration-deployer
- `NAME_PREFIX=<desired_name_prefix>`: Set the prefix for names of resources to be deployed.
- `docker run -it -e NAME_PREFIX="$NAME_PREFIX" orbital-integration-deployer:latest`
- The command above will bring you to a container shell. In container shell:
az login
az account set -s <your_subscription>
git pull
- (optional) Update and source .env/env-template.sh if desired.
- See Environment Variables section above.
- You can either set and pass env variables via `docker run -it -e`, or run docker and then create and source your env file in the running container (see the example below).
./tcp-to-blob/deploy/install-and-deploy.sh
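If you prefer passing env variables on the `docker run` command line rather than sourcing a file inside the container, it might look like this (a sketch; the values are placeholders):

```bash
# Hypothetical example of passing env variables to the deployer container
docker run -it \
  -e NAME_PREFIX="$NAME_PREFIX" \
  -e AZ_LOCATION="westus2" \
  -e AZ_RESOURCE_GROUP="${NAME_PREFIX}-rg" \
  orbital-integration-deployer:latest
```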
Consider using Bash in Azure Cloud Shell, which meets all prerequisites without the need to install anything on your computer.
- Mac or Unix-like environment with a Bash-compatible shell.
- NodeJS LTS (16 or later) - Type `node --version` to check your version.
- Azure subscription access.
- Azure CLI - Type `az` or `az -h` for Azure info.
- AKS CLI: `az aks install-cli`. The deployment scripts use kubectl (not the AKS CLI), but it's probably safest to use the `kubectl` that comes with the AKS CLI.
  - Type `kubectl` for information.
  - If a warning/error shows up indicating the PATH variable isn't set correctly, install kubectl.
- (optional) Docker - Type `docker` for information.
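To quickly confirm the prerequisites are in place, each of the following should print a version (standard flags for these CLIs):

```bash
node --version            # NodeJS LTS (16 or later)
az --version              # Azure CLI
kubectl version --client  # kubectl, installed via 'az aks install-cli'
docker --version          # Docker (optional)
```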
git clone https://github.com/Azure/azure-orbital-integration.git
- If using Azure Cloud Shell, you are already logged in. Otherwise, ensure you are logged in to the Azure CLI and the default subscription is set (see the quick check after this list).
- From the `azure-orbital-integration` directory: `./install-node-modules.sh && npx yarn build`
- `cd tcp-to-blob`
- Create the `.env/env-<name_prefix>.sh` environment file as described above.
- `source ./.env/env-<name_prefix>.sh`
  - It should look like nothing happened in the terminal; this is GOOD.
- Deploy (to the AZ CLI's current subscription): `./deploy/bicep/deploy.sh`
  - If you receive an 'Authorization failed' error, you may not have proper access to the subscription.
- Update the generated contact profile if desired. It defaults to Aqua with the "aqua_direct_broadcast" named demodulation configuration.
- Open Azure Portal and navigate to Orbital Service.
- Navigate to Contact Profiles (left-side panel).
- Select the generated contact profile (default name is `${NAME_PREFIX}-aks-cp`).
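Before deploying, a quick way to confirm the Azure CLI is pointed at the intended subscription (standard `az` usage, not a repo script):

```bash
# Show the subscription the deployment will target
az account show --query "{name:name, id:id}" --output table
```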
If you wish to utilize an existing ACR and Storage container:
- Update your `.env/env-<name_prefix>.sh` to include (see the sketch below this list):
  - ACR info: `ACR_NAME` and `ACR_RESOURCE_GROUP`
  - Storage account info: `CONTACT_DATA_STORAGE_ACCT`, `CONTACT_DATA_STORAGE_ACCT_RESOURCE_GROUP` and optionally `CONTACT_DATA_STORAGE_CONNECTION_STRING` (consider a SAS connection string).
  - Resource group for other generated resources: `AZ_RESOURCE_GROUP`
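A hypothetical addition to the env file when reusing existing resources might look like this (all names below are placeholders):

```bash
# Existing ACR
export ACR_NAME="myexistingacr"
export ACR_RESOURCE_GROUP="my-acr-rg"
# Existing storage account
export CONTACT_DATA_STORAGE_ACCT="myexistingstorage"
export CONTACT_DATA_STORAGE_ACCT_RESOURCE_GROUP="my-storage-rg"
# export CONTACT_DATA_STORAGE_CONNECTION_STRING="<SAS connection string>"   # optional
# Resource group for other generated resources
export AZ_RESOURCE_GROUP="my-tcp-to-blob-rg"
```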
./deploy/bicep/deploy-core.sh && ./deploy/az-cli/deploy-service-and-dashboards.sh
- Note: An Azure CLI `deploy.sh` script is available in `./deploy/az-cli` for reference. However, the `./deploy/bicep` scripts are the most up-to-date and complete deployment mechanism.
- Ensure you are logged in to the Azure CLI.
- Open a new bash-like terminal shell.
. .env/env-<name_prefix>.sh && npx yarn az-login && . ./deploy/env-defaults.sh
- Ensure docker is running.
- Login/switch environments (once every few hours or per env session).
npx yarn run-text-canary
- View AKS logs as described below.
- Verify that a BLOB matching `filename` was created in your storage container (see the example check below).
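One way to check for the new BLOB from the command line is with the Azure CLI (standard `az storage` usage; depending on your setup you may also need `--auth-mode login` or a connection string):

```bash
# List the five most recently modified BLOBs in the contact data container
az storage blob list \
  --account-name "$CONTACT_DATA_STORAGE_ACCT" \
  --container-name "$CONTACT_DATA_STORAGE_CONTAINER" \
  --query "sort_by([].{name:name, lastModified:properties.lastModified}, &lastModified)[-5:]" \
  --output table
```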
- Ensure docker is running.
- Login/switch environments (once every few hours or per env session).
npx yarn deploy-text-canary-cron
- View AKS logs as described below.
- See the `RAW_DATA_BLOB_NAME` env variable above to make raw data available for the canary to read and stream to TCP to BLOB.
- Ensure docker is running.
- Login/switch environments (once every few hours or per env session).
- Run `npx yarn docker-push-raw-canary` to push the image `tcp-to-blob-raw-canary` to ACR.
- `npx yarn run-raw-canary`
- View AKS logs as described below.
- Verify that a BLOB matching `filename` was created in your storage container.
- See the `RAW_DATA_BLOB_NAME` env variable above to make raw data available for the canary to read and stream to TCP to BLOB.
- Ensure docker is running.
- Login/switch environments (once every few hours or per env session).
npx yarn deploy-raw-canary-cron
- View AKS logs as described below.
- Login/switch environments (once every few hours or per env session).
cd tcp-to-blob
node ./dist/src/tcp-to-blob.js
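With the server running locally, you can exercise the endpoint by sending a few bytes over TCP, for example with netcat if it is available (assumes the default `HOST`/`PORT` of 0.0.0.0:50111):

```bash
# Send a few lines of text to the locally running tcp-to-blob listener
printf 'hello\nfrom\nnetcat\n' | nc -w 2 localhost 50111
```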
- Ensure docker is running.
- Login/switch environments (once every few hours or per env session).
- Build image: `npx yarn docker-build`
- Start container: `npx yarn docker-run`
1. `docker ps` - Note the Container ID.
2. `docker kill <Container ID>`

Or run `npx yarn docker-kill-all` (instead of 1 & 2).
- Login to Azure Portal.
- Select the tenant where TCP to BLOB is deployed.
- Either navigate to Shared Dashboards or to your resource group (`AZ_RESOURCE_GROUP`).
- Open the dashboard named `${NAME_PREFIX}-dashboard`.
- Click 'Go to dashboard'.
- Navigate to your service in AKS portal.
- Click on the "Logs" link on the left-hand menu.
- Click "Container Logs" and then "Run" or "Load to Editor" within the "Find a value in Container Logs Table" card.
Filter recent activity:
let FindString = "";
ContainerLog
| where LogEntry has FindString
// | where not(LogEntry has "No socket data.")
| extend _data = parse_json(LogEntry)
| where not(_data.event == "socket-data")
| sort by TimeGenerated desc
| extend senderIP = tostring(_data.remoteHost)
| extend sender=tostring(case(senderIP startswith "10.241", "canary", senderIP startswith "10.244", "orbital", "unknown"))
| project TimeGenerated, senderIP, sender, subsystem=tostring(_data.subsystem), event=tostring(_data.event), message=tostring(_data.message), filename=tostring(_data.filename), mb=todouble(_data.fileSizeInKB)/1000, seconds=todouble(_data.durationInSeconds), containerName=tostring(_data.containerName), error=tostring(_data.error)
// | where event == "complete"
// | where subsystem == "tcp-to-blob"
// | where subsystem == "tcp-to-blob-text-canary"
// | where TimeGenerated between((approxTime - timeOffset) .. (approxTime + timeOffset))
View recently completed BLOBs:
let FindString = "";
ContainerLog
| where LogEntry has FindString
| extend _data = parse_json(LogEntry)
| sort by TimeGenerated desc
| extend senderIP = tostring(_data.remoteHost)
| extend sender=tostring(case(senderIP startswith "10.241", "canary", senderIP startswith "10.244", "orbital", "unknown"))
| project TimeGenerated, senderIP, sender, subsystem=tostring(_data.subsystem), event=tostring(_data.event), message=tostring(_data.message), filename=tostring(_data.filename), mb=todouble(_data.fileSizeInKB)/1000, seconds=todouble(_data.durationInSeconds), containerName=tostring(_data.containerName), error=tostring(_data.error)
| where event == "complete"
| where subsystem == "tcp-to-blob"
View activity around the time of a contact:
let approxTime = todatetime('2022-07-15T02:34:11.969Z');
let timeOffset = 2m;
ContainerLog
| extend _data = parse_json(LogEntry)
| where not(_data.event == "socket-data")
| sort by TimeGenerated desc
| extend senderIP = tostring(_data.remoteHost)
| extend sender=tostring(case(senderIP startswith "10.241", "canary", senderIP startswith "10.244", "orbital", "unknown"))
| project TimeGenerated, senderIP, sender, subsystem=tostring(_data.subsystem), event=tostring(_data.event), message=tostring(_data.message), filename=tostring(_data.filename), mb=todouble(_data.fileSizeInKB)/1000, seconds=todouble(_data.durationInSeconds), containerName=tostring(_data.containerName), error=tostring(_data.error)
// | where event == "complete"
// | where subsystem == "tcp-to-blob"
// | where subsystem == "tcp-to-blob-text-canary"
// | where filename == FindString
| where TimeGenerated between((approxTime - timeOffset) .. (approxTime + timeOffset))
kubectl delete -f ./dist/env/${NAME_PREFIX}/tcp-to-blob.yaml
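To confirm the resources defined in that manifest are gone (or to see what remains), a generic listing can help (assumes the deployment/service names contain `tcp-to-blob`):

```bash
# List deployments, services and pods and filter for tcp-to-blob resources
kubectl get deployments,services,pods --all-namespaces | grep tcp-to-blob
```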
This deployment guide is a work in progress; it covers deploying everything necessary for tcp-to-blob to run properly in Azure from ADO pipelines.
There are two pipelines in this approach and currently one manual step. The manual step is creating a service connection (SC) for the Azure Container Registry (ACR). The ACR must be created before the SC is created. Another step that may need to be done manually is making sure the service connection principal has the Owner role assigned at the subscription level (details below); the build agent will conduct all operations under the context of this service principal.
After the prereq pipeline has successfully deployed and a SC has been created, the azure-pipeline can be run. This pipeline creates the Azure Kubernetes Service (AKS) cluster, installs dependencies, builds the tcp-to-blob project, builds the docker image and pushes the image to the ACR created in step 1.
- Create an Azure Resource Manager Service Connection and assign the Owner role at the subscription level. Check with your ADO administrator to see if this already exists, as it is not visible to all users. (An example role assignment command is shown after these steps.)
  - Please see the following documentation for creating the service connection.
  - Please see the following documentation regarding assigning a role to a service principal. This will be the service principal that the build agent uses.
- Create a variable group that your pipeline can reference. This variable group sets environment variables for different build steps. Values needed:
  - For ease, name the variable group `tcp-to-blob-vg`. This is the name the build yaml files are expecting.
  - `ACR_NAME: $(ORG_NAME)$(AZ_LOCATION)acr`
  - `CONTACT_DATA_STORAGE_ACCT: $(ORG_NAME)$(AZ_LOCATION)`
  - `CONTACT_DATA_STORAGE_CONNECTION_STRING_SECRET_KEY: $(ORG_NAME)-$(AZ_LOCATION)-storage`
  - `AZ_LOCATION: westus2`
  - `AZ_RESOURCE_GROUP: $(ORG_NAME)-test`
  - `AZ_SUBSCRIPTION: {the name of the Azure Resource Manager Service Connection created in step 1}`
  - `NAME_PREFIX: $(ORG_NAME)-$(AZ_LOCATION)`
  - `ORG_NAME: aoi`
- Create and/or run a pipeline pointing to `azure-resources/prereq-tcp-to-blob/prereq-azure-pipeline.yml`.
- Create a new Docker Registry Service Connection corresponding to the newly created ACR. Name it `aoi-acr-sc` so the pipeline scripts can reference it properly.
- Create and/or run a pipeline pointing to `./azure-pipelines.yml`.
  - On the first run of the pipeline, you will need to grant permission for your service connection.
  - If this pipeline has previously run and the resources exist, another run will fail because the role assignment already exists. If another run is needed, first delete the `aoi-westus2-aks-agentpool` acr pull role assignment.
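As referenced in step 1, assigning the Owner role to the service connection's principal at subscription scope can be done with the Azure CLI. This is a sketch with placeholder values, not a repo-provided script:

```bash
# Grant the service principal used by the build agent Owner at the subscription level
az role assignment create \
  --assignee "<service-principal-app-id>" \
  --role "Owner" \
  --scope "/subscriptions/<subscription-id>"
```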
Copyright © 2022 Microsoft. This Software is licensed under the MIT License. See LICENSE in the project root for more information.