How to Build Anthos Clusters on GCP and AWS

Biswanath Giri
8 min read · Aug 2, 2024


Introduction

  • Overview of Anthos: Anthos is Google Cloud's hybrid and multi-cloud platform for building and managing Kubernetes-based applications consistently across GCP and other cloud providers such as AWS.
  • Importance of Multi-Cloud Environments: Organizations adopt multi-cloud setups to avoid vendor lock-in, meet regulatory or latency requirements, and use the best services from each provider. A unified management platform such as Anthos keeps operations, policy, and observability consistent across those environments.

Prerequisites

  • Basic Requirements: Before starting, make sure you have the following (a quick verification sketch follows this list):
  • Google Cloud Platform (GCP) account
  • Amazon Web Services (AWS) account
  • Basic knowledge of Kubernetes
  • Familiarity with Google Cloud Console and AWS Management Console
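
Before you begin, you can quickly confirm that the core tools are available and that you are authenticated; the AWS CLI itself is installed later in this guide. This is a minimal check, assuming you run it in Cloud Shell, where gcloud and kubectl are preinstalled:

# Versions of the CLIs used throughout this guide
gcloud --version
kubectl version --client
# Confirm which Google account is currently authenticated
gcloud auth list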

Setting Up Anthos on GCP

In this walkthrough, you will:

  • Authenticate your account and set the correct project.
  • Enable necessary APIs (e.g., Kubernetes Engine API, Anthos API).
  • Create and register an Anthos GKE cluster on Google Cloud.
  • Create and register an Anthos cluster on AWS.
  • Access the Anthos cluster on AWS via Connect Gateway.
  • Review cluster configurations for AWS and Google Cloud with the GKE interface on the Google Cloud Console.
  • Deploy workloads and services to Anthos clusters on Google Cloud and AWS.
  • Get information about your multi-cloud deployment using the Anthos dashboard on the Google Cloud Console.

Set up the following environment variables, which will be used in the commands that follow:

export PROJECT_ID=$(gcloud config get-value project)
export GCP_CLUSTER_NAME=gcp-cluster
export GCP_CLUSTER_ZONE=us-central1-b

In Cloud Shell, enable the APIs required for the tasks in this guide:

gcloud services enable \
gkemulticloud.googleapis.com \
connectgateway.googleapis.com \
cloudresourcemanager.googleapis.com \
container.googleapis.com \
gkeconnect.googleapis.com \
gkehub.googleapis.com \
serviceusage.googleapis.com \
anthos.googleapis.com \
logging.googleapis.com \
monitoring.googleapis.com \
stackdriver.googleapis.com \
storage-api.googleapis.com \
storage-component.googleapis.com \
securetoken.googleapis.com \
sts.googleapis.com
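
Enabling APIs can take a minute to propagate. As an optional sanity check (a small sketch, not part of the original lab steps), confirm that the multi-cloud APIs now appear in the enabled-services list:

# The output should include gkemulticloud, gkehub, and anthos among the services enabled above
gcloud services list --enabled | grep -E 'gkemulticloud|gkehub|anthos'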

Task 2. Start the setup of your GKE (on Google Cloud) cluster

To create your cluster, in Cloud Shell, run the following command

gcloud container clusters create $GCP_CLUSTER_NAME \
--zone $GCP_CLUSTER_ZONE \
--machine-type "n1-standard-2" \
--enable-ip-alias \
--num-nodes=2 \
--workload-pool=$PROJECT_ID.svc.id.goog \
--release-channel=regular \
--project=$PROJECT_ID
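
Cluster creation typically takes several minutes. Once the command returns, you can optionally confirm the cluster is healthy; this is a small verification sketch using the variables defined earlier:

# The status should be RUNNING before you continue
gcloud container clusters describe $GCP_CLUSTER_NAME \
--zone $GCP_CLUSTER_ZONE \
--format="value(status)"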

Task 3. Review the provisioned AWS resources and prepare the environment

Configure the AWS CLI

Open a new Cloud Shell tab.

Set the project ID as an environment variable:

export PROJECT_ID=$(gcloud config get-value project)
gcloud config set project $PROJECT_ID

Download and install the AWS CLI

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
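
To confirm the installation succeeded, you can print the CLI version; any recent v2 release should work for this guide:

# Should print something like aws-cli/2.x.x
aws --version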

Configure the AWS CLI

aws configure

At the prompts, enter the following values:

  • AWS Access Key ID: ____
  • AWS Secret Access Key: ____
  • Default region name: us-east-1
  • Default output format: json
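
If you want to verify the credentials you just entered before moving on, the following optional check (not part of the original lab steps) confirms that the CLI can authenticate against your AWS account:

# Shows the configured region, output format, and credential source
aws configure list
# Returns the account and IAM identity the credentials map to
aws sts get-caller-identity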
Initialize the environment variables that reference the AWS resources:

AWS_CLUSTER=aws-cluster
AWS_REGION=us-east-1
VPC=${AWS_CLUSTER}-anthos-vpc
PRIVATE_SUBNET_1=${VPC}-private-cp-us-east-1a
PRIVATE_SUBNET_2=${VPC}-private-cp-us-east-1b
PRIVATE_SUBNET_3=${VPC}-private-cp-us-east-1c
CONTROL_PLANE_PROFILE=${AWS_CLUSTER}-anthos-cp-instance-profile
NODE_POOL_IAM_INSTANCE_PROFILE=aws-cluster-anthos-np-instance-profile

Get the AWS VPC and the subnets, and store their IDs

VPC_ID=`aws ec2 describe-vpcs \
--filters Name=tag:Name,Values=$VPC \
--query Vpcs[].VpcId --output text`
PRIVATE_SUBNET_ID_1=`aws ec2 describe-subnets \
--filters Name=tag:Name,Values=$PRIVATE_SUBNET_1 \
--query "Subnets[].SubnetId" --output text`
PRIVATE_SUBNET_ID_2=`aws ec2 describe-subnets \
--filters Name=tag:Name,Values=$PRIVATE_SUBNET_2 \
--query "Subnets[].SubnetId" --output text`
PRIVATE_SUBNET_ID_3=`aws ec2 describe-subnets \
--filters Name=tag:Name,Values=$PRIVATE_SUBNET_3 \
--query "Subnets[].SubnetId" --output text`
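
These lookups rely on the Name tags matching exactly; if a tag differs, the corresponding variable will simply be empty. A quick echo check (just a convenience, not part of the lab) helps catch that before cluster creation:

# Every value below should be non-empty (vpc-xxxx and subnet-xxxx IDs)
echo "VPC: $VPC_ID"
echo "Subnets: $PRIVATE_SUBNET_ID_1 $PRIVATE_SUBNET_ID_2 $PRIVATE_SUBNET_ID_3"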

Get the ARN of the AWS IAM role that the Anthos Multi-Cloud API will use to create and manage your clusters and node pools

API_ROLE_ARN=$(aws iam list-roles \
--query 'Roles[?RoleName==`aws-cluster-anthos-api-role`].Arn' \
--output text)

Get the AWS Key Management Service (KMS) key that will be used for cluster encryption

ENCRYPTION_KEY=$(aws kms describe-key \
--key-id alias/aws-cluster-database-encryption-key \
--query 'KeyMetadata.Arn' --output text)
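
As with the VPC lookups, it is worth confirming that both values were found; the cluster creation command below fails with a less obvious error if either one is empty. A minimal check:

# Both values should be full AWS ARNs
echo "API role: $API_ROLE_ARN"
echo "KMS key: $ENCRYPTION_KEY"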

Task 4. Create the Anthos cluster on AWS

Create the Anthos cluster on AWS

gcloud container aws clusters create $AWS_CLUSTER \
--cluster-version 1.27.12-gke.800 \
--aws-region us-east-1 \
--location=us-east4 \
--fleet-project $PROJECT_ID \
--vpc-id $VPC_ID \
--subnet-ids $PRIVATE_SUBNET_ID_1,$PRIVATE_SUBNET_ID_2,$PRIVATE_SUBNET_ID_3 \
--pod-address-cidr-blocks 10.2.0.0/16 \
--service-address-cidr-blocks 10.1.0.0/16 \
--role-arn $API_ROLE_ARN \
--iam-instance-profile $CONTROL_PLANE_PROFILE \
--database-encryption-kms-key-arn $ENCRYPTION_KEY \
--config-encryption-kms-key-arn $ENCRYPTION_KEY \
--tags google:gkemulticloud:cluster=$AWS_CLUSTER
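
Creating the Anthos cluster on AWS can take ten minutes or more. If you want to check its progress from another tab, or confirm the final state after the command returns, one way (assuming the same environment variables are set in that shell) is:

# The state should eventually report RUNNING
gcloud container aws clusters describe $AWS_CLUSTER \
--location=us-east4 \
--format="value(state)"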

Switch back to the browser tab with Cloud Shell open. When the create command returns, generate an SSH key pair and import its public key into AWS:

ssh-keygen -t rsa -m PEM -b 4096 -C "$USER" \
-f SSH_PRIVATE_KEY -N "" 1>/dev/null

aws ec2 import-key-pair --key-name SSH_KEY_PAIR_NAME \
--public-key-material fileb://SSH_PRIVATE_KEY.pub
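
You can optionally confirm that the public key was imported under the expected name; this is just a sanity check using the standard EC2 API:

# Lists the key pair you just imported
aws ec2 describe-key-pairs --key-names SSH_KEY_PAIR_NAME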

Add a node pool to your cluster

gcloud container aws node-pools create pool-0 \
--cluster $AWS_CLUSTER \
--location=us-east4 \
--node-version 1.27.12-gke.800 \
--min-nodes 1 \
--max-nodes 5 \
--max-pods-per-node 110 \
--root-volume-size 50 \
--subnet-id $PRIVATE_SUBNET_ID_1 \
--iam-instance-profile $NODE_POOL_IAM_INSTANCE_PROFILE \
--config-encryption-kms-key-arn $ENCRYPTION_KEY \
--ssh-ec2-key-pair SSH_KEY_PAIR_NAME \
--tags google:gkemulticloud:cluster=$AWS_CLUSTER
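
Node pool creation also takes several minutes. To confirm that pool-0 was created and reaches a healthy state, you can list the node pools for the cluster (a small check, assuming the same shell variables):

# pool-0 should appear with state RUNNING once provisioning completes
gcloud container aws node-pools list \
--cluster $AWS_CLUSTER \
--location=us-east4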
Obtain the credentials for your cluster:

gcloud container aws clusters get-credentials $AWS_CLUSTER --location=us-east4
kubectx aws=.

Check the cluster information

kubectl cluster-info

Verify that you can see your worker nodes

kubectl get nodes

To authorize the Kubernetes workload identity gke-system/gke-telemetry-agent to write logs to Cloud Logging and write metrics to Cloud Monitoring, run this command

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--member="serviceAccount:${PROJECT_ID}.svc.id.goog[gke-system/gke-telemetry-agent]" \
--role=roles/gkemulticloud.telemetryWriter
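
To confirm the binding was applied, you can optionally inspect the project IAM policy; the filter below is one way to do it and is not part of the original lab:

# Should print the gke-system/gke-telemetry-agent workload identity member
gcloud projects get-iam-policy $PROJECT_ID \
--flatten="bindings[].members" \
--filter="bindings.role:roles/gkemulticloud.telemetryWriter" \
--format="value(bindings.members)"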

Task 5. Connect the Anthos cluster on AWS with the Google Cloud Console

  1. In your browser, switch to the tab with Google Cloud Console.
  2. On the Navigation menu, click Kubernetes Engine > Clusters.
  3. In the row for aws-cluster in the cluster list, click the 3-dots menu.
  4. Select Log in, select Use your Google Identity to log in, and then click Login.
  5. Click on the aws-cluster entry to display information about your Anthos cluster on AWS.
  6. On the Navigation menu, click Anthos > Overview. One cluster is displayed.
  7. Click View all clusters.
  8. Click on the aws-cluster entry.

Task 6. Register the GKE cluster on the Anthos Hub

To configure kubectx with a short name for the kubectl context that is used to manage the Google Cloud cluster, run the following command

gcloud container clusters get-credentials gcp-cluster \
--zone us-central1-b \
--project $PROJECT_ID
kubectx gcp=.

Register the GKE cluster on the Anthos Hub

gcloud beta container fleet memberships register $GCP_CLUSTER_NAME \
--gke-cluster=$GCP_CLUSTER_ZONE/$GCP_CLUSTER_NAME \
--enable-workload-identity
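
After registration completes, both clusters should appear as members of your fleet. A quick way to confirm from the command line:

# Expect entries for both gcp-cluster and aws-cluster
gcloud container fleet memberships list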
  1. In the Cloud Console, on the Navigation menu, click Anthos > Overview. Cluster Status should now list two available clusters.
  2. Click View all clusters. A list displays your GKE on Google Cloud cluster.
  3. Click on the gcp-cluster entry to see cluster details.

Task 7. Deploy applications to your clusters

Ensure that kubectl is configured in this terminal window to point to your Google Cloud cluster:

kubectx gcp

Create a manifest for a Kubernetes Deployment of a simple application

cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-50001
spec:
  selector:
    matchLabels:
      app: products
      department: sales
  replicas: 3
  template:
    metadata:
      labels:
        app: products
        department: sales
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50001"
EOF

Create the deployment on your Google Cloud cluster

kubectl apply -f deployment.yaml

In the Cloud Console, go to Kubernetes Engine > Workloads, and verify that the Deployment has been created and the pods are running. You can also check from the command line:

kubectl get pods --selector=app=products

Return to the Cloud Shell tab. Create a Kubernetes service manifest for your application

cat <<EOF > service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
spec:
  type: LoadBalancer
  selector:
    app: products
    department: sales
  ports:
  - protocol: TCP
    port: 80
    targetPort: 50001
EOF

Create the service

kubectl apply -f service.yaml

In the Cloud Console, go to Kubernetes Engine > Services & Ingress, and verify that the Service has been created and an external IP address has been assigned. It may take a minute or two for the service to become ready. You can also check from the command line:

kubectl get service my-lb-service
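
Once an external IP is assigned, you can call the application directly. This is a small convenience sketch; EXTERNAL_IP is a helper variable introduced here, not something the lab defines:

# On GKE, the load balancer publishes an IP address
EXTERNAL_IP=$(kubectl get service my-lb-service \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl http://$EXTERNAL_IP

The hello-app container responds with its version and the serving pod's hostname, which makes it easy to see the traffic being spread across the three replicas.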

Deploy the same application to your Anthos cluster on AWS

Now, you will do the same deployment to your AWS cluster.

  1. Switch to the browser tab with Cloud Shell.
  2. Switch to the second Cloud Shell tab where you created the AWS cluster.
  3. Ensure that kubectl is configured to manage your AWS cluster
kubectx aws

Apply the deployment and service manifests to your AWS cluster:

kubectl apply -f ~/deployment.yaml
kubectl apply -f ~/service.yaml

Confirm that the deployment was successfully created

kubectl get pods

Confirm that the service was successfully created

kubectl get services
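
You can call the application on the AWS cluster in the same way. Note that on AWS the load balancer is typically published as a DNS hostname rather than an IP, and the name can take a few minutes to resolve; EXTERNAL_HOST is a helper variable introduced here:

# On AWS, read the hostname field instead of the IP
EXTERNAL_HOST=$(kubectl get service my-lb-service \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl http://$EXTERNAL_HOST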
  1. Switch to the browser tab with the Google Cloud Console.
  2. On the Navigation menu, click Kubernetes Engine > Workloads.
  3. A second my-deployment-50001 entry is displayed, this time for aws-cluster.
  4. In the menu pane, click Services & Ingress.
  5. Two my-lb-service entries, one for each cluster, are displayed.
  6. Click the link in the Endpoints column for the aws-cluster my-lb-service.

Reference link to learn more:

https://partner.cloudskillsboost.google/focuses/18555?parent=catalog

Conclusion

Building Anthos clusters across GCP and AWS offers a powerful solution for managing multi-cloud environments with a unified approach. By leveraging Anthos, organizations can streamline their operations, ensure consistency across different cloud platforms, and enhance the scalability and resilience of their applications.

Through this guide, you’ve learned how to set up Anthos on both GCP and AWS, establish seamless connectivity between the two platforms, and deploy and manage applications effectively. This multi-cloud strategy not only provides flexibility but also mitigates the risk of vendor lock-in, giving you the freedom to leverage the best features and services from both Google Cloud and Amazon Web Services.

As you continue to implement and manage your Anthos clusters, keep in mind the importance of adhering to best practices for security, cost management, and performance optimization. Regularly monitor and adjust your configurations to align with your evolving business needs and technological advancements.

About Me

As businesses move towards cloud-based solutions, I provide my expertise to support them in their journey to the cloud. With over 15 years of experience in the industry, I am currently working as a Google Cloud Principal Architect. My specialization is in assisting customers to build highly scalable and efficient solutions on Google Cloud Platform. I am well-versed in infrastructure and zero-trust security, Google Cloud networking, and cloud infrastructure building using Terraform. I hold several certifications such as Google Cloud Certified, HashiCorp Certified, Microsoft Azure Certified, and Amazon AWS Certified.

Multi-Cloud Certified :

1. Google Cloud Certified — Cloud Digital Leader.
2. Google Cloud Certified — Associate Cloud Engineer.
3. Google Cloud Certified — Professional Cloud Architect.
4. Google Cloud Certified — Professional Data Engineer.
5. Google Cloud Certified — Professional Cloud Network Engineer.
6. Google Cloud Certified — Professional Cloud Developer Engineer.
7. Google Cloud Certified — Professional Cloud DevOps Engineer.
8. Google Cloud Certified — Professional Security Engineer.
9. Google Cloud Certified — Professional Database Engineer.
10. Google Cloud Certified — Professional Workspace Administrator.
11. Google Cloud Certified — Professional Machine Learning.
12. HashiCorp Certified — Terraform Associate
13. Microsoft Azure AZ-900 Certified
14. Amazon AWS-Practitioner Certified

I assist professionals and students in building their careers in the cloud. My responsibility is to provide easily understandable content related to Google Cloud, Google Workspace, AWS, and Azure. If you find the content helpful, please like, share, and subscribe for more updates. If you require any guidance or assistance, feel free to connect with me.

YouTube: https://www.youtube.com/@growwithgooglecloud

Topmate: https://topmate.io/gcloud_biswanath_giri

Medium: https://bgiri-gcloud.medium.com/

Telegram: https://t.me/growwithgcp

Twitter: https://twitter.com/bgiri_gcloud

Instagram: https://www.instagram.com/multi_cloud_boy/

LinkedIn: https://www.linkedin.com/in/biswanathgiri/

GitHub: https://github.com/bgirigcloud

Facebook: https://www.facebook.com/biswanath.giri

Linktree: https://linktr.ee/gcloud_biswanath_giri

and DM me :) I am happy to help!

