
Build & deploy with Google Cloud Build
- Written by John
- Jun 4th, 2020
So far, you’ve been building and deploying your application manually, and you’re getting tired of it. What can you do to automate the build and deployment process? You can use a cloud builder, otherwise known as an automated CI/CD pipeline.
There are many cloud builders out there, each with its own pros and cons. However, they all do the same fundamental job: they build your application and deploy it into your environment.
Starting with a cloud builder platform
If you’re not new to this, please feel free to skip this section. If you’ve never used a cloud builder before, let’s learn and use one together.
Before we start, you will need to understand Git: what it is, what it does and how to use it. Atlassian, the makers of Bitbucket, have a great set of tutorials to get you started.
What cloud builders are out there
The following is a list of cloud builders you can use, either for free or as part of a freemium service. Each one comes with Git hosting, allowing you to store your source code and use it in a pipeline easily.
- Bitbucket by Atlassian
- GitLab by GitLab Inc.
- AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy and AWS CodePipeline by Amazon Web Services
- Cloud Source Repositories & Cloud Build by Google
Starting with Cloud Build
There are two sides to Cloud Build: pipelines and triggers. A pipeline is a series of containerised steps that builds, tests and deploys your application into your environment. A trigger is how you start that pipeline, whether automatically or manually.
Google has plenty of documentation to help you out, but I’ll focus on what each element means and how you should be structuring your files and parameters. Please complete all of the prerequisites before you start to build your pipeline.
Structuring your Cloud Build file
The basic structure
There are two formats you can write a Cloud Build file in, YAML or JSON. YAML is the easier of the two to work with, so we’ll use it.
For each pipeline, there is a minimum set of key-value pairs, indentation and sequences (arrays or lists) required to make your pipeline valid.
steps:
- name: "gcr.io/cloud-builders/docker" # you can reference an image from Docker Hub
  args: ["build", "-t", "docker_image_name:tag", "-f", "Dockerfile", "."]
As you can see, your pipeline starts with steps; listed below it, as a sequence, are the name of the builder image to use for that step and the arguments (command) to run in that step.
Arguments can be written in two ways, and both styles can be mixed within the same pipeline.
steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "docker_image_name:tag", "-f", "Dockerfile", "."]
steps:
- name: "gcr.io/cloud-builders/docker"
  args:
  - "build"
  - "-t"
  - "docker_image_name:tag"
  - "-f"
  - "Dockerfile"
  - "."
Now that you understand the basic structure, let’s add another step to push the Docker image to a registry.
steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "docker_image_name:tag", "-f", "Dockerfile", "."]
- name: "gcr.io/cloud-builders/docker"
  args: ["push", "docker_image_name:tag"]
Planning your steps
As with any pipeline, you need to think about the different steps you need to run. You also need to think about your step execution order. You don’t want to be pushing or deploying a Docker image before it’s built.
I’ll use the pipeline for this site as an example. Remember, this site runs in Cloud Run and has an SQL database in Cloud SQL.
- backup Cloud SQL instance
- make credentials folder
- retrieve and write credential to JSON file
- build a Docker image
- push Docker image
- add the latest tag to newly pushed Docker image
- deploy latest Docker image to Cloud Run
Creating your trigger
A trigger is simply a means of automatically or manually starting your pipeline based on a branch or tag. However, creating a trigger has some prerequisites, and is not the only way of running your pipeline in Cloud Build. You can also run a gcloud command to start this process if your code does not exist in Cloud Source Repositories. Let’s look at both ways of triggering your pipeline.
Submit a build
If you do not have your code or current repo integrated with Cloud Source Repositories, you can submit a build. This will tar your files and upload them to Cloud Build, and the pipeline will then execute against that tarball.
gcloud builds submit --config cloudbuild.yaml
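A couple of hedged examples, assuming you run the command from the root of your repo (the trailing dot is the source directory to upload):
# run from the repo root; "." is the source to upload
gcloud builds submit --config cloudbuild.yaml .
# if your config references a user-defined substitution such as ${_TAG},
# you can pass it on the command line (user-defined keys must start with "_")
gcloud builds submit --config cloudbuild.yaml --substitutions=_TAG=1.2.3 .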
Use a trigger
Using a trigger requires your code to exist in Cloud Source Repositories. You may be thinking: do I need to manage another repo, or do I need to move to Source Repositories? You don’t, if you’re using GitHub or Bitbucket. Google has made it so you can sync (mirror) your repo from GitHub or Bitbucket, so you don’t have to manage another repo.
Once you have your code in Source Repositories, go to Cloud Build > Triggers and CREATE TRIGGER.
Creating a trigger is mostly self-explanatory. However, you need to make sure you correctly define the Event, Source and Build configuration. The event is how the build is triggered, whether that is a commit to a branch or a new tag. Under Source, select your Source Repositories repo and define the regex that qualifies a commit to start the pipeline. The build configuration is your Cloud Build YAML file.
As an example, you can define your regex as ^\d+\.\d+\.\d+$ in combination with the tag event. Cloud Build will then only start builds for tags that look like a version number, such as 1.2.3.
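The same trigger can also be created from the command line. A sketch using the gcloud command (beta at the time of writing), where repo_name and the tag pattern are placeholders:
gcloud beta builds triggers create cloud-source-repositories \
  --repo=repo_name \
  --tag-pattern="^\d+\.\d+\.\d+$" \
  --build-config=cloudbuild.yaml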
Putting your pipeline into practice
Now that you understand how to use Cloud Build, let’s put it into practice using our plan above. We’ll be deploying our app to Cloud Run.
Note 1: I’ll be storing Docker images in GCR, not Docker Hub, so my image names will be different as a result.
Note 2: Please make sure you change variables/sequences accordingly.
steps:
# 1. backup Cloud SQL instance
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk:slim"
  entrypoint: gcloud
  args:
    [
      "sql",
      "backups",
      "create",
      "--instance=instance_name",
      "--project",
      "project_id",
    ]
# 2. make credentials folder
- name: "debian"
  args: ["mkdir", "credentials"]
# 3. get secret from Secret Manager
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk:slim"
  entrypoint: "bash"
  args:
    [
      "-c",
      "gcloud secrets versions access latest --project project_id --secret=secret_name > credentials/filename_of_secret.json",
    ]
# 4. build Docker image
- name: "gcr.io/cloud-builders/docker"
  args:
    [
      "build",
      "-t",
      "gcr.io/project_id/image_name:tag",
      "-f",
      "Dockerfile",
      ".",
    ]
# 5. push Docker image to GCR
- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/project_id/image_name:tag"]
# 6. add latest tag to new image
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk:slim"
  entrypoint: gcloud
  args:
    [
      "container",
      "images",
      "add-tag",
      "gcr.io/project_id/image_name:tag",
      "gcr.io/project_id/image_name:latest",
    ]
# 7. deploy image to Cloud Run
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk:slim"
  entrypoint: gcloud
  args:
    - "run"
    - "deploy"
    - "name_of_cloudrun_service"
    - "--project"
    - "project_id"
    - "--image"
    - "gcr.io/project_id/image_name:latest"
    - "--platform"
    - "managed"
    - "--region"
    - "region_name"
    - "--memory=256Mi"
    - "--allow-unauthenticated"
Optimising your pipeline
As a starting point, the pipeline we’ve built works well. However, you’ll notice all steps run in the order they appear in the YAML file, and each step must complete before the next starts. You’re probably now thinking about running some steps in parallel. Let’s look at doing this.
Cloud Build has a waitFor attribute which allows you to run steps in parallel or make a step wait for one or more other steps. A step without waitFor waits for all previously listed steps to complete, so once you start using it, add it to every step. You should also name each step using the id attribute so other steps can reference it. You can use the following values, illustrated in the sketch below the list.
- "-" - a hyphen starts the step immediately, allowing it to run in parallel with other steps
- "step_id" - the id of a step this step must wait for
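To make the syntax concrete, here’s a minimal sketch with two hypothetical steps running in parallel and a third waiting for both:
steps:
- name: "debian"
  id: "a"
  args: ["echo", "step a"]
  waitFor: ["-"] # start immediately
- name: "debian"
  id: "b"
  args: ["echo", "step b"]
  waitFor: ["-"] # start immediately, in parallel with a
- name: "debian"
  id: "c"
  args: ["echo", "step c"]
  waitFor: ["a", "b"] # start only once a and b have both finished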
Before applying this knowledge to the pipeline, we need to think about which steps can run in parallel and which can’t. The following table shows the waitFor plan.
step no. | step id | waitFor value | notes |
---|---|---|---|
1 | 0 | "-" | the first step; it can start straight away |
2 | 1 | "-" | creates a directory, so it can also start straight away |
3 | 2 | "1" | writes a file into the credentials folder, so it has to wait for step no. 2 |
4 | 3 | "2" | builds the Docker image; it waits for step no. 3 (which in turn waits for step no. 2) |
5 | 4 | "3" | pushes the built image to GCR; dependent upon the build step completing |
6 | 5 | "4" | adds the latest tag to the built image; it has to start after the push to the registry |
7 | 6 | "0", "5" | deploys the application; it has to wait for the latest tag and for the SQL backup to complete |
steps:
# 1. backup Cloud SQL instance
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk:slim"
  id: "0"
  entrypoint: gcloud
  args:
    [
      "sql",
      "backups",
      "create",
      "--instance=instance_name",
      "--project",
      "project_id",
    ]
  waitFor:
    - "-"
# 2. make credentials folder
- name: "debian"
  id: "1"
  args: ["mkdir", "credentials"]
  waitFor:
    - "-"
# 3. get secret from Secret Manager
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk:slim"
  id: "2"
  entrypoint: "bash"
  args:
    [
      "-c",
      "gcloud secrets versions access latest --project project_id --secret=secret_name > credentials/filename_of_secret.json",
    ]
  waitFor:
    - "1"
# 4. build Docker image
- name: "gcr.io/cloud-builders/docker"
  id: "3"
  args:
    [
      "build",
      "-t",
      "gcr.io/project_id/image_name:tag",
      "-f",
      "Dockerfile",
      ".",
    ]
  waitFor:
    - "2"
# 5. push Docker image to GCR
- name: "gcr.io/cloud-builders/docker"
  id: "4"
  args: ["push", "gcr.io/project_id/image_name:tag"]
  waitFor:
    - "3"
# 6. add latest tag to new image
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk:slim"
  id: "5"
  entrypoint: gcloud
  args:
    [
      "container",
      "images",
      "add-tag",
      "gcr.io/project_id/image_name:tag",
      "gcr.io/project_id/image_name:latest",
    ]
  waitFor:
    - "4"
# 7. deploy image to Cloud Run
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk:slim"
  id: "6"
  entrypoint: gcloud
  args:
    - "run"
    - "deploy"
    - "name_of_cloudrun_service"
    - "--project"
    - "project_id"
    - "--image"
    - "gcr.io/project_id/image_name:latest"
    - "--platform"
    - "managed"
    - "--region"
    - "region_name"
    - "--memory=256Mi"
    - "--allow-unauthenticated"
  waitFor:
    - "0"
    - "5"
This example is straightforward and doesn’t bring a huge reward by itself. However, it does cut the pipeline time by a minute or so. If your pipeline has tests, running them in parallel brings a more substantial efficiency benefit.
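For instance, a sketch assuming a Node.js app with hypothetical lint and test scripts defined in its package.json:
steps:
- name: "node" # image pulled from Docker Hub
  id: "install"
  entrypoint: "npm"
  args: ["install"]
  waitFor: ["-"]
- name: "node"
  id: "lint"
  entrypoint: "npm"
  args: ["run", "lint"]
  waitFor: ["install"]
- name: "node"
  id: "test"
  entrypoint: "npm"
  args: ["test"]
  waitFor: ["install"] # lint and test run in parallel once install completes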
Recommendations
- variablise as much of your pipeline as you can using substitutions (see the sketch after this list)
- configure your steps to run in parallel where dependencies allow
- optimise building your Docker images
- securely retrieve your sensitive keys/credentials from KMS or Secret Manager
- increase your pipeline timeout if it runs for more than the default 10 minutes (600s)
- understand the different Cloud Builders Google offers
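On the first and fifth points, a sketch of what substitutions and a longer timeout look like in the YAML (user-defined substitution keys must start with an underscore; TAG_NAME is a built-in substitution populated on tag-triggered builds):
substitutions:
  _IMAGE: "gcr.io/project_id/image_name" # placeholder image path
steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "${_IMAGE}:${TAG_NAME}", "-f", "Dockerfile", "."]
timeout: "1200s" # double the default 10-minute (600s) limit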