A Complete Guide to Deploying a Containerized Application Using Managed Instance Groups (MIGs) in Google Cloud (GCP) with Continuous Integration/Continuous Delivery (CI/CD) — Part 1

Saturn V Rocket Engines by Good Free Photos

In the first part of this guide, we will walk through setting up a high-availability, low-cost solution capable of supporting multiple backends and frontends, deployed to Google Cloud (GCP) using managed instance groups (MIGs), Cloud SQL, Cloud Storage, and GCP’s load balancer.

Scenario

We want to deploy a web application consisting of one or more frontends, built with a client-side framework such as ReactJS, that connect to one or more backends that have been containerized with Docker. Our backend will connect to a Cloud SQL instance for storage, and all environment variables will be encrypted with GCP’s Key Management Service (KMS). Git commits to a specific branch on GitHub will trigger a build and deploy of the application. A load balancer will direct traffic and serve as a proxy for HTTPS using a managed certificate.

Pre-requisites

At a minimum, you will need a GCP project with billing enabled, the gcloud command-line tool installed and authenticated, and a domain whose DNS records you control.

Step 1: Create Networks and Reserve IP

First, we define a Virtual Private Cloud (VPC) that will scope all our resources. We then create a subnet within it with the range 10.1.10.0/24 (roughly 254 usable addresses). If you plan on running more than 254 machines (or close to it), change the range accordingly. Select the region closest to your users and use it consistently throughout. Here we have arbitrarily selected southamerica-east1.

Lastly, we reserve an IPv4 address (repeat this step with ip-version=IPV6 if you also need an IPv6 address). This address will be our load balancer’s public IP, to which we will point our DNS.

# Create a custom VPC network
gcloud compute networks create my-lb-network --subnet-mode=custom

# Create subnet
gcloud compute networks subnets create my-subnet \
  --network=my-lb-network \
  --range=10.1.10.0/24 \
  --region=southamerica-east1

# Reserve the IP address
gcloud compute addresses create my-lb-ipv4 \
  --ip-version=IPV4 \
  --global
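Once the address is reserved, you will eventually need its value to configure your DNS records. A quick way to retrieve it (using the my-lb-ipv4 name from above) is:

# Retrieve the reserved IP address
gcloud compute addresses describe my-lb-ipv4 \
  --global \
  --format="get(address)"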

Step 2: Create Firewall Rules

By default, the instances in our network will not accept any incoming traffic, so we need to create rules that allow the traffic we intend to receive. Below, we open ports 80, 443, and 3000 (adjust based on your needs) to the IP ranges reserved for GCP load balancers and health checks. We also allow port 22 (SSH) from any originating IP (consider restricting or eliminating this rule).

# Create firewall rule to allow health checks and incoming traffic
gcloud compute firewall-rules create my-fw-allow-health-and-proxy \
  --network=my-lb-network \
  --action=allow \
  --direction=ingress \
  --target-tags=allow-hc-and-proxy \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --rules=tcp:80,tcp:443,tcp:3000

# Create firewall rule to allow SSH
gcloud compute firewall-rules create my-fw-allow-ssh \
  --network=my-lb-network \
  --action=allow \
  --direction=ingress \
  --target-tags=allow-ssh \
  --rules=tcp:22
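To sanity-check the rules before moving on, you can list the firewall rules attached to our network; both rules created above should appear:

# List firewall rules on our network
gcloud compute firewall-rules list \
  --filter="network:my-lb-network"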

Step 3: Create First Template and Managed Instance Group

Our backend(s) revolve around two core Google Cloud concepts: instance templates and managed instance groups (MIGs).

The instance template defines the specifications of our instances, including CPU, RAM, and disk space. Two attributes are worth noting: tags associates instances created from the template with our previously created firewall rules, and container-image specifies a Docker image that the Compute Engine (GCP’s virtual machine service) instances will run when instantiated. Note: At this point, it does not matter which image is specified, since we will replace this template with one pointing at a different image when setting up the CI/CD pipeline.

The MIG defines a group of VMs that (as the name suggests) are managed by Google. In this example we will not implement autoscaling, but it can be enabled with a single additional command (see the sketch after the block below). In the final command, we create a named port called port3000 (the name is arbitrary) mapped to port 3000. Named ports are a GCP concept that tells a load balancer which port to use when connecting to the instances in a group.

# Create instance template
gcloud compute instance-templates create-with-container my-first-template \
  --custom-cpu=1 \
  --custom-memory=2GB \
  --boot-disk-size=20GB \
  --container-env-file=secrets.dec \
  --region=southamerica-east1 \
  --subnet=my-subnet \
  --tags=allow-hc-and-proxy,allow-ssh \
  --container-image gcr.io/my-project-id/my-image-name

# Create the managed instance group
gcloud compute instance-groups managed create my-mig \
  --base-instance-name my-instance \
  --size 3 \
  --template my-first-template \
  --region southamerica-east1

# Create the named port
gcloud compute instance-groups managed set-named-ports my-mig \
  --named-ports port3000:3000 \
  --region southamerica-east1
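As mentioned above, autoscaling is not part of this setup, but if you want the MIG to grow and shrink with load, a minimal sketch looks like the following (the replica counts and CPU threshold here are illustrative, not recommendations):

# Enable autoscaling on the managed instance group (optional)
gcloud compute instance-groups managed set-autoscaling my-mig \
  --region southamerica-east1 \
  --min-num-replicas 3 \
  --max-num-replicas 10 \
  --target-cpu-utilization 0.8 \
  --cool-down-period 300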

Step 4: Health Check

GCP’s health checks monitor managed instance groups to ensure that the instances are serving the desired application correctly. A health check is required to enable autohealing, whereby an instance is automatically recreated if it stops serving the application correctly.

In the example below, we create a health check that makes HTTP requests to the path /health_check on port 3000. By default, the health check polls every 5 seconds. We select a low healthy-threshold (the number of consecutive successes required to consider an instance healthy) and a high unhealthy-threshold (the number of consecutive failures required to consider an instance down) so that working instances are detected quickly and an instance is not flagged as unhealthy due to a transient issue (such as occasional network packet loss).

Finally, we assign the health check to our MIG and set an initial-delay (the time from boot until the health check state is considered) of 5 minutes (tune this based on how long your instance takes to boot).

# Create health check
gcloud compute health-checks create http my-http-check \
  --port 3000 \
  --request-path=/health_check \
  --healthy-threshold=1 \
  --unhealthy-threshold=10

# Assign health check to the managed instance group
gcloud compute instance-groups managed update my-mig \
  --health-check my-http-check \
  --initial-delay 300 \
  --region southamerica-east1
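To verify that autohealing has picked up the health check, you can list the instances in the group; once the initial delay has passed, each instance should report a RUNNING status (recent gcloud versions also surface a health state column):

# Inspect instance status and health
gcloud compute instance-groups managed list-instances my-mig \
  --region southamerica-east1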

Step 5: Create Backend Service

Backend services are a concept of GCP’s load balancers that describe how and where traffic should be directed. Take note of the enable-cdn flag: it enables automatic caching of GET requests that meet certain criteria, such as carrying a Cache-Control: public header.

Finally, we assign our MIG to our backend service and set the balancing mode to UTILIZATION. The appropriate mode depends on your project’s specific requirements; if in doubt, try the different options to determine which performs best for your use case.

# Create a backend service
gcloud compute backend-services create my-backend-service \
  --protocol HTTP \
  --health-checks my-http-check \
  --global \
  --port-name=port3000 \
  --enable-cdn

# Assign instance group to the backend service
gcloud compute backend-services add-backend my-backend-service \
  --balancing-mode=UTILIZATION \
  --max-utilization=0.8 \
  --capacity-scaler=1 \
  --instance-group=my-mig \
  --instance-group-region=southamerica-east1 \
  --global
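A useful way to confirm that the load balancer can actually reach the instances is to query the backend service’s health (this relies on the health check from Step 4 being in place):

# Check the health of the backends
gcloud compute backend-services get-health my-backend-service \
  --global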

Step 6: Create Bucket and Backend Bucket

For the client-side (frontend) part of our React application, we don’t need a VM to serve it (although we could use one if we wanted server-side rendering, for example). Instead, we opt for the cheaper and seamlessly scalable backend bucket backed by Cloud Storage.

Below, we create a bucket, grant it public read permission, and configure it to serve the index.html file (the common entry point for ReactJS apps) whenever a path does not match an existing file.

Finally, we create a backend bucket (similar in purpose to GCP’s backend service) using the newly created bucket.

NOTE: Up to this point there are no files in the bucket (unless you added them manually). We will add those later as part of our CI/CD pipeline.

# Create storage bucket
gsutil mb -c standard -l southamerica-east1 -b on gs://my-bucket.example.com.br

# Make the bucket public
gsutil iam ch allUsers:objectViewer gs://my-bucket.example.com.br

# Configure bucket for web serving
gsutil web set -m index.html -e index.html gs://my-bucket.example.com.br

# Create backend bucket using our newly created bucket
gcloud compute backend-buckets create my-backend-bucket \
  --gcs-bucket-name=my-bucket.example.com.br \
  --enable-cdn
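As noted, the bucket is empty until the CI/CD pipeline populates it. If you want to smoke-test the setup right away, you can upload a React production build by hand (this assumes you ran the build locally and it landed in the conventional build/ directory):

# Upload a local production build to the bucket (optional smoke test)
gsutil -m cp -r build/* gs://my-bucket.example.com.br/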

Step 7: Create the Load Balancer and Path Matchers

First, we create a URL map that will contain the rules defining the routing behavior of our load balancer. By default, it directs any request that does not match a rule to our bucket. Note: You could create a separate bucket for serving such requests with a custom error message.

The final two commands create host rules that direct requests for my-backend.example.com.br to our backend service and requests for my-bucket.example.com.br to our bucket.

# Create a URL map
gcloud compute url-maps create my-lb-map \
  --default-backend-bucket my-backend-bucket

# Add path matcher to the URL map (backend)
gcloud compute url-maps add-path-matcher my-lb-map \
  --default-service my-backend-service \
  --path-matcher-name my-pathmap-backend \
  --new-hosts=my-backend.example.com.br

# Add path matcher to the URL map (frontend)
gcloud compute url-maps add-path-matcher my-lb-map \
  --default-backend-bucket my-backend-bucket \
  --path-matcher-name my-pathmap-frontend \
  --new-hosts=my-bucket.example.com.br
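To double-check the routing configuration, describe the URL map; the output should show the default backend bucket plus the two host rules created above:

# Inspect the URL map
gcloud compute url-maps describe my-lb-map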

Step 8: Setting up HTTPS

First, we create an SSL certificate managed by Google (provisioned and renewed automatically) that covers all domain names belonging to our application (up to 100 per certificate; wildcard domains are not supported).

Next, we create a proxy that will receive external HTTPS traffic and present the appropriate certificate, so that our backend does not need to worry about it. Note: At the time of this writing, a proxy can hold up to 15 certificates.

Finally, we create a forwarding rule that directs traffic arriving at our external IP to the newly created proxy.

# Create a managed certificate
gcloud beta compute ssl-certificates create my-mcrt \
  --domains my-bucket.example.com.br,my-backend.example.com.br

# Create an HTTPS proxy
gcloud compute target-https-proxies create my-https-proxy \
  --url-map my-lb-map \
  --ssl-certificates my-mcrt

# Create forwarding rule to the proxy
gcloud compute forwarding-rules create my-forwarding-rule \
  --address=my-lb-ipv4 \
  --global \
  --target-https-proxy=my-https-proxy \
  --ports=443
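Managed certificates are only provisioned once the domains resolve to the load balancer’s IP, so make sure your DNS records point at the address reserved in Step 1. You can poll the provisioning state (it moves from PROVISIONING to ACTIVE) with:

# Check the certificate's provisioning status
gcloud beta compute ssl-certificates describe my-mcrt \
  --format="get(managed.status)"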

Step 9: Setting up MySQL Database (Optional)

If our application requires a database, it can be set up as shown below. Note: We use VPC peering to create a private connection (traffic does not traverse the public internet) between the network containing our backend VMs and the database. By default, a public IP will also be assigned to the database, but it will be firewalled against all incoming connections.

# Create address range for the private database network
gcloud compute addresses create my-sql-network-ranges \
  --global \
  --purpose=VPC_PEERING \
  --prefix-length=24 \
  --network=my-lb-network

# Create the VPC peering
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=my-sql-network-ranges \
  --network=my-lb-network

# Create the database instance
gcloud beta sql instances create my-database-2 \
  --network=my-lb-network \
  --tier=db-n1-standard-1 \
  --region=southamerica-east1
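Once the instance is up, you can fetch its IP addresses to configure your backend’s connection string; the private IP (reachable through the peering) is the one your VMs should use. Setting an initial root password is also a sensible first step:

# Fetch the instance's IP addresses (use the private one from the VMs)
gcloud beta sql instances describe my-database-2 \
  --format="get(ipAddresses)"

# Set the root password (replace the placeholder)
gcloud sql users set-password root \
  --host=% \
  --instance=my-database-2 \
  --password=[YOUR_PASSWORD]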

Conclusion

At this point we have all the infrastructure we need, including our load balancer, a backend service, a backend bucket, and a database. In the next part of this series, we will integrate a GitHub-based CI/CD pipeline (GitOps) using Cloud Build.

Part 2 can be found here.
