Besides the control plane setup described in the general installation guide, each of our resources has a data plane component, which needs permissions to read and/or write to Pub/Sub. This page shows the steps needed to configure such a Pub/Sub enabled Service Account.
1. Create a Google Cloud project, install the `gcloud` CLI, and run `gcloud auth login`. This sample will use a mix of `gcloud` and `kubectl` commands. The rest of the sample assumes that you've set the `$PROJECT_ID` environment variable to your Google Cloud project ID, and also set your project ID as the default using `gcloud config set project $PROJECT_ID`.

2. Enable the Cloud Pub/Sub API on your project:

   ```shell
   gcloud services enable pubsub.googleapis.com
   ```
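For example, a minimal setup sketch; the project ID `my-sample-project` is a placeholder for illustration, not something defined by this guide:

```shell
# Authenticate and point gcloud (and the rest of this sample) at your project.
gcloud auth login
export PROJECT_ID=my-sample-project   # placeholder: use your own project ID
gcloud config set project $PROJECT_ID

# Enable the Cloud Pub/Sub API on that project.
gcloud services enable pubsub.googleapis.com
```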
**Create a Google Cloud Service Account to interact with Pub/Sub**

In general, we would just need permission to receive messages (`roles/pubsub.subscriber`). However, in the case of the `Channel`, we would also need the ability to publish messages (`roles/pubsub.publisher`).
1. Create a new Google Cloud Service Account (GSA) named `events-sources-gsa` with the following command:

   ```shell
   gcloud iam service-accounts create events-sources-gsa
   ```

   Depending on the use case, you may want a single GSA for all the sources, as in the example above, or several similar Service Accounts. If you are setting up multiple GSAs, you can follow the same set of steps, but take care to configure authentication only for the namespaces intended for each account.

   Additionally, while it is possible to use the same GSA for both the broker and the sources, we recommend creating a dedicated Google Service Account for the broker data plane (e.g. `events-broker-gsa`):

   ```shell
   gcloud iam service-accounts create events-broker-gsa
   ```
2. Give that Service Account the necessary permissions on your project.

   In this example, and for the sake of simplicity, we just grant `roles/pubsub.editor` to the Service Account, which encompasses both of the roles above plus some other permissions. If you prefer finer-grained privileges, you can grant just the roles mentioned above.

   ```shell
   gcloud projects add-iam-policy-binding $PROJECT_ID \
     --member=serviceAccount:events-sources-gsa@$PROJECT_ID.iam.gserviceaccount.com \
     --role roles/pubsub.editor
   ```

   **Note:** If you are going to use metrics and tracing to track your resources, you also need `roles/monitoring.metricWriter` for metrics functionality:

   ```shell
   gcloud projects add-iam-policy-binding $PROJECT_ID \
     --member=serviceAccount:events-sources-gsa@$PROJECT_ID.iam.gserviceaccount.com \
     --role roles/monitoring.metricWriter
   ```

   and `roles/cloudtrace.agent` for tracing functionality:

   ```shell
   gcloud projects add-iam-policy-binding $PROJECT_ID \
     --member=serviceAccount:events-sources-gsa@$PROJECT_ID.iam.gserviceaccount.com \
     --role roles/cloudtrace.agent
   ```

   The same set of permissions should also be granted to the broker data plane account `events-broker-gsa@$PROJECT_ID.iam.gserviceaccount.com`:

   ```shell
   gcloud projects add-iam-policy-binding $PROJECT_ID \
     --member=serviceAccount:events-broker-gsa@$PROJECT_ID.iam.gserviceaccount.com \
     --role roles/pubsub.editor

   gcloud projects add-iam-policy-binding $PROJECT_ID \
     --member=serviceAccount:events-broker-gsa@$PROJECT_ID.iam.gserviceaccount.com \
     --role roles/monitoring.metricWriter

   gcloud projects add-iam-policy-binding $PROJECT_ID \
     --member=serviceAccount:events-broker-gsa@$PROJECT_ID.iam.gserviceaccount.com \
     --role roles/cloudtrace.agent
   ```
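If you prefer the finer-grained roles mentioned above instead of `roles/pubsub.editor`, here is a minimal sketch for the sources account, with an optional check at the end that lists the roles actually bound to it:

```shell
# Grant only the narrower Pub/Sub roles instead of roles/pubsub.editor.
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member=serviceAccount:events-sources-gsa@$PROJECT_ID.iam.gserviceaccount.com \
  --role roles/pubsub.subscriber

# Only needed for the Channel, which also publishes messages.
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member=serviceAccount:events-sources-gsa@$PROJECT_ID.iam.gserviceaccount.com \
  --role roles/pubsub.publisher

# Optionally verify which roles the account now holds on the project.
gcloud projects get-iam-policy $PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:events-sources-gsa@$PROJECT_ID.iam.gserviceaccount.com" \
  --format="table(bindings.role)"
```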
For the broker data plane configuration using `events-broker-gsa`, follow the instructions in Authentication Setup for GCP Broker.
If you want to run the examples that create resources (such as `CloudPubSubSource`, `CloudSchedulerSource`, etc.) and have your resources' data plane work, you need to configure authentication in the namespace where your resources reside.

Currently, we support two methods: Workload Identity and Kubernetes Secret. The configuration steps have been automated by the scripts below. If you wish to configure authentication manually, refer to Manually Configure Authentication Mechanism for the Data Plane.
Before applying the initialization scripts, make sure that:

- Your default zone is set to the same zone as your current cluster. You can use `gcloud container clusters describe $CLUSTER_NAME` to get the zone and `gcloud config set compute/zone $ZONE` to set it.
- Your `gcloud` CLI is up to date. You can use `gcloud components update` to update it.
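For example, one way to perform those checks, assuming a zonal cluster named `my-cluster` (a placeholder):

```shell
CLUSTER_NAME=my-cluster   # placeholder: use your own cluster name

# Look up the cluster's location and make it the default compute zone.
ZONE=$(gcloud container clusters list \
  --filter="name=$CLUSTER_NAME" --format="value(location)")
gcloud config set compute/zone $ZONE

# Make sure the gcloud components are up to date.
gcloud components update
```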
**Option 1: Workload Identity**

Workload Identity is the recommended way to access Google Cloud services from within GKE due to its improved security properties and manageability. For more information about Workload Identity, see here.
Note:

- If you installed the Knative-GCP Constructs with the v0.14.0 or an older release, please use option 2 below.
- We assume that you have already enabled Workload Identity in your cluster.
- `spec.googleServiceAccount` in v0.14.0 is deprecated due to its security implications. It has not been promoted to v1beta1 and is expected to be removed from v1alpha1 in the v0.16.0 release. Instead, `spec.serviceAccountName` has been introduced for Workload Identity in v0.15.0; its value is a Kubernetes Service Account.
There are two scenarios to leverage Workload Identity for resources in the Data Plane:
- **Non-default scenario:**

  Apply `init_data_plane_gke.sh` with parameters:

  ```shell
  ./hack/init_data_plane_gke.sh [MODE] [NAMESPACE] [K8S_SERVICE_ACCOUNT] [PROJECT_ID]
  ```

  Parameters available:

  - `MODE`: an optional parameter to specify the mode to use; defaults to `default`.
  - `NAMESPACE`: an optional parameter to specify the namespace to use; defaults to `default`. If the namespace does not exist, the script will create it.
  - `K8S_SERVICE_ACCOUNT`: an optional parameter to specify the Kubernetes Service Account to use; defaults to `sources`. If the Kubernetes Service Account does not exist, the script will create it.
  - `PROJECT_ID`: an optional parameter to specify the project to use; defaults to `gcloud config get-value project`.
  Here is an example run of this script, configuring non-default Workload Identity in namespace `example` with Kubernetes Service Account `example-ksa`:

  ```shell
  ./hack/init_data_plane_gke.sh non-default example example-ksa
  ```
  After running the script, you will have a Kubernetes Service Account `example-ksa` in namespace `example`, bound to the Google Cloud Service Account `events-sources-gsa` (which you just created in the previous step). Remember to put this Kubernetes Service Account name in `spec.serviceAccountName` when you create resources in the example (see the sketch after this list).

- **Default scenario:**
  Instead of manually configuring Workload Identity namespace by namespace, you can authorize the Controller to configure Workload Identity for you.

  Apply `init_data_plane_gke.sh` without parameters:

  ```shell
  ./hack/init_data_plane_gke.sh
  ```

  After running this, every time you create resources, the Controller will create a Kubernetes Service Account `sources` in the namespace where your resources reside, and this Kubernetes Service Account is bound to the Google Cloud Service Account `events-sources-gsa` (which you just created in the previous step). Moreover, you don't need to put this Kubernetes Service Account name in `spec.serviceAccountName` when you create resources in the example; the Controller will add it for you. A `WorkloadIdentityConfigured` condition will show up under the resources' `Status`, indicating the Workload Identity configuration status.

  **Note:** The Controller currently doesn't perform any access control checks. As a result, the Controller will configure Workload Identity (using the Google Service Account `events-sources-gsa`'s credential) for any user who can create a resource.
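As an illustration of the non-default scenario above, here is a minimal sketch of a resource that references that Kubernetes Service Account through `spec.serviceAccountName`. The `apiVersion` may differ depending on your release, and the Pub/Sub topic `testing` and the `event-display` sink are assumptions for the example, not resources created by this guide:

```shell
kubectl apply -f - <<EOF
apiVersion: events.cloud.google.com/v1beta1   # adjust to the API version shipped with your release
kind: CloudPubSubSource
metadata:
  name: cloudpubsubsource-test
  namespace: example
spec:
  topic: testing                    # assumed pre-existing Pub/Sub topic
  serviceAccountName: example-ksa   # the Kubernetes Service Account bound to events-sources-gsa
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display           # assumed event consumer
EOF
```

In the default scenario you could omit `spec.serviceAccountName` entirely; the Controller adds it for you, as described above.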
**Option 2: Kubernetes Secret**

Apply `init_data_plane.sh` with parameters:

```shell
./hack/init_data_plane.sh [NAMESPACE] [SECRET] [PROJECT_ID]
```

Parameters available:

- `NAMESPACE`: an optional parameter to specify the namespace to use; defaults to `default`. If the namespace does not exist, the script will create it.
- `SECRET`: an optional parameter to specify the secret name; defaults to `google-cloud-key`. If the secret does not exist, the script will create it.
- `PROJECT_ID`: an optional parameter to specify the project to use; defaults to `gcloud config get-value project`.
Here is an example run of this script, configuring authentication in namespace `example` with the default secret name `google-cloud-key`:

```shell
./hack/init_data_plane.sh example
```
After running the script, you will have a Kubernetes Secret `google-cloud-key` in namespace `example`, which stores the key exported from the Google Cloud Service Account `events-sources-gsa` (which you just created in the previous step).
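To confirm the configuration, you can check that the secret exists in the target namespace (names taken from the example above):

```shell
# Confirm the secret exists and inspect which data keys it contains.
kubectl get secret google-cloud-key --namespace example
kubectl describe secret google-cloud-key --namespace example
```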