I have recently been doing some work on a new GCP project and came across a frustrating scenario, so I thought I would write a quick article to help anyone else who runs into a similar problem.
The issue
The crux of the problem is this:
- All Cloud Run services (not functions; we are talking about deployed container images here) share the same DNS hostname
- The load balancer uses URL Path Maps to route traffic to the correct backend/Cloud Run service
- Infrastructure (load balancer, DNS, certificate, etc.) is managed with Terraform
- You are using Cloud Deploy pipelines to deploy your Cloud Run services.
In this scenario (and someone please tell me if I am doing this wrong or there is a better way), the deployment for each service needs to update a single load balancer URL Path Map, and there is no automated GCP way to do this. You are left either defining your Path Maps in Terraform (in which case, any time you add or remove a service, you also need to update and re-run your Terraform) or updating the map manually, via the console or the CLI.
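For reference, the manual CLI route looks roughly like this: export the URL map, hand-edit it, and re-import it. (The map name here is illustrative.)

```shell
# Export the current URL map to a local file
gcloud compute url-maps export load-balancer-url-map \
  --global --destination=url-map.yaml

# ...edit url-map.yaml by hand to add the new path rule...

# Re-import the edited map
gcloud compute url-maps import load-balancer-url-map \
  --global --source=url-map.yaml
```

Doing that for every new service quickly gets tedious and error-prone.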
In an ideal world this would not be the case; instead, you would hope that deploying your service could also update the load balancer's URL Path Map to register the new service. Luckily, I found a way to do this using a custom post-deploy action in Cloud Deploy.
You can use a custom action to spin up a container and perform a custom job. So the solution is to write a script that can update the load balancer using the gcloud CLI. Here is the script I came up with.
#!/bin/bash

log() {
  echo "$(date +'%Y-%m-%d %H:%M:%S') - $1"
}

LOCK_BUCKET="url-map-lock-bucket"
LOCK_FILE="url-map.lock"

# Define variables
REGION="europe-west2"
PROJECT_ID="${PROJECT_ID}"
SERVICE_NAME="${CLOUD_RUN_SERVICE}"
NEG_NAME="${SERVICE_NAME}-neg"
BACKEND_SERVICE_NAME="${SERVICE_NAME}-backend"
URL_MAP_NAME="load-balancer-url-map"
PATH_MATCHER_NAME="shared-matcher"
PATH_RULE="/${SERVICE_NAME}"

# Install required packages
apt-get update && apt-get install -y jq && apt-get clean

# Wait until no other job holds the lock
while gsutil ls "gs://${LOCK_BUCKET}/${LOCK_FILE}" >/dev/null 2>&1; do
  log "Another job is modifying the URL map. Retrying in 10 seconds..."
  sleep 10
done

# Acquire the lock and make sure it is released when the script exits
log "Locking the URL map modification" | gsutil cp - "gs://${LOCK_BUCKET}/${LOCK_FILE}"
trap 'gsutil rm "gs://${LOCK_BUCKET}/${LOCK_FILE}"' EXIT

# Create the serverless NEG if it does not already exist
if ! gcloud compute network-endpoint-groups describe "$NEG_NAME" --region="$REGION" --project="$PROJECT_ID" >/dev/null 2>&1; then
  log "Creating network endpoint group: $NEG_NAME"
  gcloud compute network-endpoint-groups create "$NEG_NAME" \
    --region="$REGION" \
    --project="$PROJECT_ID" \
    --network-endpoint-type=SERVERLESS \
    --cloud-run-service="$SERVICE_NAME"
else
  log "Network endpoint group $NEG_NAME already exists"
fi

# Create a backend service for the NEG if it does not already exist
if ! gcloud compute backend-services describe "$BACKEND_SERVICE_NAME" --global --project="$PROJECT_ID" >/dev/null 2>&1; then
  log "Creating backend service: $BACKEND_SERVICE_NAME"
  gcloud compute backend-services create "$BACKEND_SERVICE_NAME" \
    --global \
    --project="$PROJECT_ID" \
    --protocol=HTTP \
    --timeout=30s \
    --load-balancing-scheme=EXTERNAL
else
  log "Backend service $BACKEND_SERVICE_NAME already exists"
fi

# Attach the NEG to the backend service if it is not already attached
if ! gcloud compute backend-services describe "$BACKEND_SERVICE_NAME" --global --project="$PROJECT_ID" | grep -q "$NEG_NAME"; then
  log "Attaching NEG $NEG_NAME to backend service $BACKEND_SERVICE_NAME"
  gcloud compute backend-services add-backend "$BACKEND_SERVICE_NAME" \
    --global \
    --project="$PROJECT_ID" \
    --network-endpoint-group="$NEG_NAME" \
    --network-endpoint-group-region="$REGION"
else
  log "NEG $NEG_NAME is already attached to backend service $BACKEND_SERVICE_NAME"
fi

log "Checking if path rule $PATH_RULE exists in URL Map $URL_MAP_NAME"

# Fetch existing path rules as a comma-separated list of "path=backend" pairs
EXISTING_PATH_RULES=$(gcloud compute url-maps describe "$URL_MAP_NAME" --global --project="$PROJECT_ID" --format=json \
  | jq -r '.pathMatchers[]? | select(.name=="'"$PATH_MATCHER_NAME"'") | .pathRules | to_entries | map("\(.value.paths[0])=\(.value.service | sub(".*/";""))") | join(",")')
log "Existing path rules found: $EXISTING_PATH_RULES"

# Check for an entry beginning "$PATH_RULE=" in the comma-separated list
if [[ ",$EXISTING_PATH_RULES," == *",$PATH_RULE="* ]]; then
  log "Path rule $PATH_RULE already exists in URL Map $URL_MAP_NAME"
else
  # Check if the path matcher itself exists
  PATH_MATCHER_EXISTS=$(gcloud compute url-maps describe "$URL_MAP_NAME" --global --project="$PROJECT_ID" --format="json" 2>/dev/null \
    | jq -r ".pathMatchers[]? | select(.name==\"$PATH_MATCHER_NAME\")")
  if [ -z "$PATH_MATCHER_EXISTS" ]; then
    log "Path matcher $PATH_MATCHER_NAME does not exist, adding it with path rule $PATH_RULE"
    gcloud compute url-maps add-path-matcher "$URL_MAP_NAME" \
      --path-matcher-name="$PATH_MATCHER_NAME" \
      --default-service=default-backend \
      --path-rules="$PATH_RULE=$BACKEND_SERVICE_NAME" \
      --global \
      --project="$PROJECT_ID"
  else
    log "Path matcher $PATH_MATCHER_NAME exists, adding path rule $PATH_RULE"
    # Append the new rule to the existing comma-separated list
    if [ -n "$EXISTING_PATH_RULES" ]; then
      UPDATED_PATH_RULES="$EXISTING_PATH_RULES,$PATH_RULE=$BACKEND_SERVICE_NAME"
    else
      UPDATED_PATH_RULES="$PATH_RULE=$BACKEND_SERVICE_NAME"
    fi
    log "Updated path rules: $UPDATED_PATH_RULES"
    log "Deleting existing path matcher $PATH_MATCHER_NAME from URL Map $URL_MAP_NAME"
    gcloud compute url-maps remove-path-matcher "$URL_MAP_NAME" --path-matcher-name="$PATH_MATCHER_NAME" --global --project="$PROJECT_ID"
    # Reapply the path matcher with the updated rules
    gcloud compute url-maps add-path-matcher "$URL_MAP_NAME" \
      --path-matcher-name="$PATH_MATCHER_NAME" \
      --default-service=default-backend \
      --path-rules="$UPDATED_PATH_RULES" \
      --global \
      --project="$PROJECT_ID"
  fi
fi
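A quick way to sanity-check the duplicate-rule logic is to exercise the same comma-delimited membership pattern in isolation, with made-up rule strings (the service and backend names below are illustrative):

```shell
#!/bin/bash
# Stand-alone demo of a comma-separated membership check for path rules.
# Entries take the form "path=backend", joined with commas.
EXISTING_PATH_RULES="/orders=orders-backend,/users=users-backend"

contains_rule() {
  # Succeeds if $1 already appears as a path in the rule list.
  # Wrapping both sides in commas avoids partial matches at the ends.
  [[ ",$EXISTING_PATH_RULES," == *",$1="* ]]
}

contains_rule "/orders" && echo "/orders already mapped"
contains_rule "/payments" || echo "/payments needs adding"
```

Anchoring on the surrounding commas and the trailing `=` means a path like `/order` will not falsely match the `/orders` entry.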
This script takes a couple of input variables (PROJECT_ID and SERVICE_NAME) from Cloud Build and updates the load balancer with the required path (the service name). You will notice that it also uses a bucket to store a lock file, preventing two build jobs from trying to update the load balancer at the same time. The script can then be packaged up as a container image and stored in Artifact Registry, where Cloud Deploy can pull and run it.
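The image itself can be minimal; something along these lines works, assuming the script is saved as update-loadbalancer.sh (the base image ships with gcloud and gsutil, and is Debian-based so the script's apt-get install of jq still works):

```dockerfile
# Illustrative Dockerfile for the post-deploy action image
FROM google/cloud-sdk:slim
COPY update-loadbalancer.sh /update-loadbalancer.sh
RUN chmod +x /update-loadbalancer.sh
ENTRYPOINT ["/update-loadbalancer.sh"]
```

Building and pushing it with something like `gcloud builds submit --tag europe-west2-docker.pkg.dev/myproject/my-scripts/update-loadbalancer:1.1.1` puts it in Artifact Registry ready for Cloud Deploy to use.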
Here is an example skaffold.yaml file that uses such an image:
apiVersion: skaffold/v4beta7
kind: Config
metadata:
  name: my-function
deploy:
  cloudrun: {}
profiles:
  - name: dev
    manifests:
      rawYaml:
        - service.yaml
    deploy:
      cloudrun:
        projectid: my-dev
        region: europe-west2
customActions:
  - name: update-loadbalancer
    containers:
      - name: update-loadbalancer
        image: europe-west2-docker.pkg.dev/myproject/my-scripts/update-loadbalancer:1.1.1
All you need then is to reference the custom action and pass the environment variables in your clouddeploy.yaml:
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: my-pipeline
description: Pipeline for deploying my service
serialPipeline:
  stages:
    - targetId: dev
      profiles: [dev]
      strategy:
        standard:
          postdeploy:
            actions: ["update-loadbalancer"]
      deployParameters:
        - values:
            api_host: "dev.mydomain.co.uk"
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: dev
  annotations:
    environment: dev
description: Dev environment for my service
deployParameters:
  PROJECT_ID: "myproject"
  SERVICE_NAME: "myservice"
  service_account: "myservice-sa@myproject.iam.gserviceaccount.com"
requireApproval: false
run:
  location: projects/myproject/locations/europe-west2
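With both files in place, creating a release runs the pipeline and triggers the post-deploy action after the Cloud Run deployment succeeds. Kicking one off looks something like this (the release name is illustrative):

```shell
# Create a release from the directory containing skaffold.yaml
gcloud deploy releases create rel-001 \
  --delivery-pipeline=my-pipeline \
  --region=europe-west2 \
  --project=myproject
```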
I hope someone else finds this useful.