
Using Cloud Endpoints with Cloud Run, featuring ESPv2
- Written by John
- Jul 2nd, 2020
What is Cloud Endpoints?
Firstly, what is Cloud Endpoints? Cloud Endpoints is a GCP service that lets you create and manage APIs for backend services hosted in GCP. Compared to Apigee, which is an enterprise-grade platform, Cloud Endpoints is a lightweight service anyone can use.
What does that mean? If you have a backend service with RESTful HTTP endpoints, you can front all requests to it with Cloud Endpoints. Cloud Endpoints can fully manage authentication and authorisation, quota limits, and so on. You can “hide” your backend service from public view, forcing all traffic through a proxy.
Cloud Endpoints architecture
See architecture documentation for Cloud Endpoints.
ESPv1 and ESPv2
Cloud Endpoints uses a proxy to front all traffic to your backend API. ESP (Extensible Service Proxy) is what will function as your proxy.
There are two versions of ESP. ESPv1 is an Nginx-based proxy that is modified to be highly-scalable and to work as a Google product. ESPv2 is an Envoy-based, highly scalable product that promises to have a small footprint on your infrastructure.
ESPv2 is in beta and can only be used with OpenAPI, not gRPC.
What we’ll be doing
- create a simple API in Python 3, using Flask
- use API key based authorisation
- use server-to-server based OAuth2.0 authentication, also known as Bearer-based authentication
All required files are stored here.
Creating the API service
The API service we’ll create is very basic, but it is good enough to help you understand Cloud Endpoints concepts and to expose an HTTP endpoint on Cloud Run.
Create an app.py file in a directory.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route('/api/status', methods=['GET'])
def get_status():
    return jsonify(message='Server is up!'), 200


@app.errorhandler(400)
def bad_request_error(e):
    return jsonify(code=400,
                   message='Bad request',
                   detailedMessage='{}'.format(e)), 400


@app.errorhandler(404)
def not_found_error(e):
    return jsonify(code=404,
                   message='Path not found',
                   detailedMessage='{}'.format(e)), 404


@app.errorhandler(405)
def method_not_allowed_error(e):
    return jsonify(code=405,
                   message='Method not allowed',
                   detailedMessage='{}'.format(e)), 405


@app.errorhandler(Exception)
def internal_server_error(e):
    return jsonify(code=500,
                   message='Internal server error',
                   detailedMessage='{}'.format(e)), 500


if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080)
Run the app and navigate to https://{port}-cs-{id}-default.{region}.cloudshell.dev/api/status (the Cloud Shell web preview URL) in your browser. You should get the response {"message": "Server is up!"}.
You can use Postman to send the same request, receiving the same response.
This code will be the base for our API service. There’s no authentication and authorisation in the service, which means anyone can access it. For this API, that is not an issue as nothing sensitive is integrated with it.
The main reason I have left the service in this state is to show you how powerful Cloud Endpoints can be. However, you should always ensure your application(s) follow security best practices and principles, especially for enterprise organisations.
Erroneous responses from the app
The following table describes which error codes the client will receive when making a request. Postman will allow you to send requests to your endpoints easily.
We also catch any unhandled exceptions and return a 500 Internal Server Error response to the client, in case something goes wrong.
HTTP method | Endpoint | Status code | Response body |
---|---|---|---|
GET | /api/status | 200 | {"message": "Server is up!"} |
POST | /api/status | 405 | {"code": 405, "message": "Method not allowed"} |
GET | /api | 404 | {"code": 404, "message": "Path not found"} |
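If you’d rather script these checks than use Postman, the following minimal sketch (assuming the requests library is installed and the app is running locally on port 8080) sends the same requests:

# test_local_api.py - a minimal sketch, assuming the Flask app is running
# locally on http://127.0.0.1:8080 and the requests library is installed.
import requests

BASE_URL = 'http://127.0.0.1:8080'

# 200 - the happy path
resp = requests.get(f'{BASE_URL}/api/status')
print(resp.status_code, resp.json())  # 200 {'message': 'Server is up!'}

# 405 - POST is not an allowed method on /api/status
resp = requests.post(f'{BASE_URL}/api/status')
print(resp.status_code, resp.json())  # 405 {'code': 405, 'message': 'Method not allowed', ...}

# 404 - /api is not a registered path
resp = requests.get(f'{BASE_URL}/api')
print(resp.status_code, resp.json())  # 404 {'code': 404, 'message': 'Path not found', ...}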
Deploy your API service to Cloud Run
Building the Docker image
Let’s build the app and deploy it to Cloud Run.
You’ll need a requirements.txt file and a Dockerfile to be able to create a Docker image.
# requirements.txt
flask==1.1.1
gunicorn==20.0.4
# Dockerfile
FROM python:3.8-slim-buster
EXPOSE 8080
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE 1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED 1
# Install pip requirements
ADD requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
ADD . /app
# Switching to a non-root user, please refer to https://aka.ms/vscode-docker-python-user-rights
RUN useradd appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["gunicorn", "--bind", "0.0.0.0:8080", "app:app"]
Now we can build and run the Docker image to test locally.
docker build --pull --rm -f "Dockerfile" -t gcr.io/project_id/image_name:0.1.0 .
docker run --rm -d -p 8080:8080/tcp gcr.io/project_id/image_name:0.1.0
You can make the same requests in your browser or Postman as we did earlier to verify the image is working as expected.
Deploying the Docker image
Now we have a working Docker image, we can deploy it to Cloud Run using Cloud Build or the gcloud SDK. As we are working in a new project, we need to enable the required APIs and push the Docker image to Container Registry.
While we’re enabling APIs, we may as well enable the Cloud Endpoints API.
- iam.googleapis.com
- cloudapis.googleapis.com
- run.googleapis.com
- endpoints.googleapis.com
- storage-api.googleapis.com
- storage-component.googleapis.com
- servicemanagement.googleapis.com
- logging.googleapis.com
- cloudbuild.googleapis.com
- servicecontrol.googleapis.com
- containerregistry.googleapis.com
For Terraform addicts, you can use the following Terraform files. You’ll need the following prerequisites in place before running Terraform.
- Cloud Resource Manager API enabled
- a Terraform service account with the following roles:
  - Security Admin
  - Service Usage Admin
# main.tf
provider "google" {
  project = var.gcp_project_name
  region  = var.gcp_region
  zone    = var.gcp_zone
  version = "~> 3.26"
}

# Enable prerequisite APIs
resource "google_project_service" "gcp_services_prerequisites" {
  count                      = length(var.gcp_api_list_prereq)
  service                    = var.gcp_api_list_prereq[count.index]
  disable_dependent_services = false
}

# Enable required APIs
resource "google_project_service" "gcp_services" {
  count                      = length(var.gcp_api_list)
  service                    = var.gcp_api_list[count.index]
  disable_dependent_services = false
  depends_on                 = [google_project_service.gcp_services_prerequisites]
}
# terraform.tfvars
gcp_project_name = "project-id"
gcp_region       = "us-east1"
gcp_zone         = "us-east1-b"

gcp_api_list_prereq = [
  "iam.googleapis.com",               # IAM (Identity and Access Management) API
  "cloudapis.googleapis.com",         # Cloud APIs
  "run.googleapis.com",               # Cloud Run API
  "endpoints.googleapis.com",         # Cloud Endpoints API
  "storage-api.googleapis.com",       # Cloud Storage JSON API
  "storage-component.googleapis.com", # Cloud Storage API
  "servicemanagement.googleapis.com", # Service Management API
  "logging.googleapis.com",           # Stackdriver Logging API
  "cloudbuild.googleapis.com"         # Cloud Build API
]

gcp_api_list = [
  "servicecontrol.googleapis.com",    # Service Control API
  "containerregistry.googleapis.com"  # Container Registry API
]
# variable.tf
variable "gcp_project_name" {
  description = "GCP project name."
  type        = string
}

variable "gcp_region" {
  description = "Google Cloud region."
  type        = string
}

variable "gcp_zone" {
  description = "Google Cloud zone."
  type        = string
}

variable "gcp_api_list" {
  description = "List of GCP services to enable for a project."
  type        = list(string)
}

variable "gcp_api_list_prereq" {
  description = "List of prerequisite GCP services to enable for a project."
  type        = list(string)
}
Once the APIs are enabled, we can push and deploy the image.
gcloud auth configure-docker
docker push gcr.io/project_id/image_name:0.1.0
gcloud run deploy example-api --image=gcr.io/project_id/image_name:0.1.0 --platform=managed --region=region --allow-unauthenticated --project=project_id
Once deployment has completed, you can send requests to the endpoint.
Now we know the backend service is working as intended, we need to redeploy the service with the --no-allow-unauthenticated parameter. Not allowing unauthenticated requests ensures the service isn’t publicly available. All requests then have to be authenticated, for example with a service account.
gcloud run deploy example-api --image=gcr.io/project_id/image_name:0.1.0 --platform=managed --region=region --no-allow-unauthenticated --project=project_id
If you send requests to the redeployed service, you’ll get a 403 Forbidden error.
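If you want to confirm that authenticated callers can still reach the service directly, here’s a minimal sketch, assuming the google-auth and requests libraries are installed and GOOGLE_APPLICATION_CREDENTIALS points to a service account key with the Cloud Run Invoker role; the service URL below is a placeholder.

# call_private_run.py - a minimal sketch, assuming google-auth and requests are
# installed and GOOGLE_APPLICATION_CREDENTIALS points to a service account key
# with the Cloud Run Invoker role. The URL below is a placeholder.
import google.auth.transport.requests
import google.oauth2.id_token
import requests

SERVICE_URL = 'https://example-api-1234abcd.uc.a.run.app'  # hypothetical Cloud Run URL

# Mint an ID token whose audience is the Cloud Run service URL
auth_request = google.auth.transport.requests.Request()
token = google.oauth2.id_token.fetch_id_token(auth_request, SERVICE_URL)

resp = requests.get(f'{SERVICE_URL}/api/status',
                    headers={'Authorization': f'Bearer {token}'})
print(resp.status_code, resp.text)  # 200 {"message": "Server is up!"}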
Make sure you keep a note of the URL. We’ll use this later when we create our OpenAPI definition.
Using API key auth with Cloud Endpoints
Configuring & deploying OpenAPI definition
Now that you understand the basics of Cloud Run, we can start to add authentication to our API service. We need to do the following before we have a valid endpoint with API key auth.
- deploy an ESPv2 default Docker image
- create an OpenAPI definition
- deploy the OpenAPI definition
- build an ESPv2 Docker image
- deploy the ESPv2 Docker image
- create an API key
Deploy ESPv2 default Docker image
As part of the OpenAPI definition, we need to obtain a valid hostname of the gateway service that will front all requests to our API service. We can only obtain a valid hostname for our proxy once we’ve deployed something to Cloud Run. To achieve this, we’ll deploy the default ESPv2 Docker image.
gcloud run deploy CLOUD_RUN_SERVICE_NAME --image="gcr.io/endpoints-release/endpoints-runtime-serverless:2" --allow-unauthenticated --platform=managed --region=us-east1 --project=ESP_PROJECT_ID
Notice from the gcloud command that we aren’t blocking any traffic at the Cloud Run layer. The proxy service itself will do that for us.
Once the proxy service has deployed, take note of the hostname. We’ll use this in the OpenAPI definition.
Creating an OpenAPI definition
OpenAPI, formerly known as Swagger, is a specification for describing APIs. Cloud Endpoints only supports version 2.0 of the specification, even though version 3.0.3 is available.
You need to create a basic structure of the API definition. Google has you covered to get you up and running quickly.
If you do not understand Swagger/OpenAPI definitions, please read up on them.
Create a YAML file using the following from Google’s documentation.
swagger: "2.0"
info:
title: Cloud Endpoints + Cloud Run
description: Sample API on Cloud Endpoints with a Cloud Run backend
version: 1.0.0
host: HOST
schemes:
- https
produces:
- application/json
x-google-backend:
address: https://hello-HASH-uc.a.run.app
protocol: h2
paths:
/hello:
get:
summary: Greet a user
operationId: hello
responses:
"200":
description: Success
schema:
type: string
Google’s sample definition doesn’t match our service, so we need to modify it.
swagger: "2.0"
info:
title: Example API
description: Sample API on Cloud Endpoints with a Cloud Run backend
version: 0.1.0
host: gateway-1234abcd.uc.a.run.app
schemes:
- https
produces:
- application/json
x-google-backend:
address: https://example-api-1234abcd.uc.a.run.app
protocol: h2
security:
- api_key: []
paths:
/api/status:
get:
summary: Status of the server
operationId: get_status
responses:
"200":
description: Success
schema:
type: string
securityDefinitions:
api_key:
type: "apiKey"
name: "key"
in: "query"
There are a few important notes here. Please reference Google’s documentation, as noted above.
- host - the hostname of the proxy or gateway from when we deployed the default ESPv2 Docker image.
- x-google-backend address - the full URL of the Python service we deployed earlier.
- paths - each path needs to reference an app.route in the app.py file. As we create more routes, we’ll need to add more paths, each with its own config.
- securityDefinitions - we’ve added this section to define a security scheme called api_key that uses API keys to authenticate requests.
- security api_key - we’ve applied the api_key security definition to the whole OpenAPI definition.
Deploy the OpenAPI definition
Now we have created the OpenAPI definition for the route in our service we need to deploy it to Cloud Endpoints.
gcloud endpoints services deploy openapi-definition.yaml --project=project_id
When the deployment is successful, please note the CONFIG_ID in the response. It follows the naming convention YYYY-MM-DDrN, for example 2020-01-01r0.
If you want to validate that you have a valid OpenAPI definition, add --validate-only to the end of the deployment command.
The first deployment will take some time to run. Please be patient. If you run into issues, check the troubleshooting page from Google.
Build ESPv2 Docker image
We have our OpenAPI definition deployed. We have our API service deployed in Cloud Run. Now we need to build an ESPv2 Docker image that holds the OpenAPI definition. The OpenAPI definition is what allows the ESPv2 proxy to understand what to do with requests; how to authenticate them and where to send them.
Fortunately, Google has created a script to do this for us. We only need to input the CONFIG_ID of our OpenAPI definition, the hostname of our proxy and our project id.
Download the script, make it executable and run the following command.
./gcloud_build_image -s CLOUD_RUN_PROXY_HOSTNAME -c CONFIG_ID -p ESP_PROJECT_ID
After a couple of minutes, the ESPv2 Docker image will complete its build and will be available in Container Registry. The name of the image will follow this naming convention.
gcr.io/ESP_PROJECT_ID/endpoints-runtime-serverless:CLOUD_RUN_HOSTNAME-CONFIG_ID
If you require a different Docker image name, you can modify the script. Whenever the OpenAPI definition changes, the script has to be run again against the newly deployed OpenAPI definition.
Deploy ESPv2 Docker image
Now that your ESPv2 proxy image has been built, you need to deploy it to Cloud Run.
gcloud run deploy CLOUD_RUN_SERVICE_NAME --image="gcr.io/ESP_PROJECT_ID/endpoints-runtime-serverless:CLOUD_RUN_HOSTNAME-CONFIG_ID" --allow-unauthenticated --platform=managed --region=us-east1 --project=ESP_PROJECT_ID
You can navigate to your proxy hostname with /api/status on the end of the URL. If everything has gone to plan, you’ll receive an unauthenticated error message.
Creating your API key
We need to create an API key that links to the API service we’ve created. When you deploy an API definition to Cloud Endpoints, Google creates a private API internally. This API can be enabled and disabled the same way Google’s APIs are. You can call the private API from other GCP projects, providing your projects are part of an organisation.
Go to APIs & Services > Dashboard > Enable APIs and Services > enable your private API.
In the private API, click Credentials on the left and create an API key. With the private API enabled and an API key created, requests made to your proxy can be authenticated and authorised.
You can now make requests to your proxy, passing the API key as a query parameter.
gateway-1234abcd.uc.a.run.app/api/status?key=API_KEY
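The same request from Python, as a minimal sketch (assuming the requests library is installed; the gateway hostname and API key are placeholders):

# call_with_api_key.py - a minimal sketch, assuming the requests library is installed.
# The gateway hostname and API key below are placeholders.
import requests

GATEWAY_URL = 'https://gateway-1234abcd.uc.a.run.app'  # your ESPv2 proxy hostname
API_KEY = 'YOUR_API_KEY'                               # the key created in the console

# The OpenAPI definition expects the key in a query parameter named "key"
resp = requests.get(f'{GATEWAY_URL}/api/status', params={'key': API_KEY})
print(resp.status_code, resp.json())  # 200 {'message': 'Server is up!'} when the key is valid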
Using Bearer token auth with Cloud Endpoints
After successfully creating an OpenAPI definition, and building an ESPv2 proxy for API key authentication, moving to bearer authentication will be easy.
How bearer auth works in Cloud Endpoints
There are different types of bearer authentication you can use with Cloud Endpoints. You can use server-to-server or OAuth2.0 (Google ID token) auth. We’ll be using server-to-server auth. However, OAuth works very similarly to server-to-server, so do some reading if you require users to interact with your API.
The following notes describe the steps the proxy follows to authenticate a request. Use them when referencing the Cloud Endpoints architecture. A sketch that performs the same checks locally follows the list.
- a request is made to the proxy
- Service Control authenticates the bearer token
- the bearer token is decoded and the following is checked:
  - the token has not expired
  - the issuer is correct
  - the signature is valid - the signature, created with the service account’s private key, is verified against the public certificate
- the proxy forwards the request to the backend service
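To make those checks concrete, here’s a minimal sketch that performs the same expiry, issuer, audience and signature checks locally, assuming the pyjwt and requests libraries plus a recent cryptography release are installed; the issuer email, audience and token are placeholders.

# verify_bearer_token.py - a minimal sketch of the checks the proxy performs,
# assuming pyjwt, cryptography and requests are installed. The issuer email,
# audience and token below are placeholders.
import jwt
import requests
from cryptography.x509 import load_pem_x509_certificate

ISSUER = 'email_address_of_sa'         # the service account email
AUDIENCE = 'hostname_of_your_gateway'  # must match the "aud" claim in the token
TOKEN = 'A_SIGNED_JWT'                 # the bearer token to verify

# Fetch the service account's public x509 certificates (the same URL used as
# x-google-jwks_uri in the OpenAPI definition) and pick the signing cert by kid.
certs = requests.get(
    f'https://www.googleapis.com/robot/v1/metadata/x509/{ISSUER}').json()
kid = jwt.get_unverified_header(TOKEN)['kid']
public_key = load_pem_x509_certificate(certs[kid].encode()).public_key()

# decode() verifies the signature, expiry, issuer and audience in one call and
# raises an exception (e.g. ExpiredSignatureError, InvalidIssuerError) on failure.
claims = jwt.decode(TOKEN, public_key,
                    algorithms=['RS256', 'RS512'],
                    audience=AUDIENCE,
                    issuer=ISSUER)
print('Token is valid. Claims:', claims)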
Configuring the OpenAPI definition for bearer auth
Creating a service account
As we have no authentication in our service, we only need to change auth on the proxy. Before we can update our definition, we need to create a service account. The service account, and one of its keys, will be used to authenticate any requests by creating a bearer token.
Go to IAM > Service accounts and create a service account, along with a private key.
You can use the following Python code to create a bearer token. You can find more details of bearer tokens at JWT.io.
from datetime import datetime, timedelta

import jwt  # Uses the pyjwt library

# Claims for the token
iat = datetime.utcnow()
exp = iat + timedelta(seconds=3600)
iss = 'email_address_of_sa'        # service account email address
kid = 'private_key_id'             # private key id from the service account key file
aud = 'hostname_of_your_gateway'   # must match the host in the OpenAPI definition

jwt_payload = {"iat": iat, "exp": exp, "iss": iss, "aud": aud, "data": {"name": "John"}}
jwt_headers = {"kid": kid}         # the key id lets the proxy pick the right public cert

private_key = 'RSA_certificate_in_your_private_key'  # private key from the key file


def generate_jwt():
    signed_jwt = jwt.encode(jwt_payload, private_key, algorithm='RS512', headers=jwt_headers)
    print(signed_jwt.decode() + '\n')


if __name__ == "__main__":
    generate_jwt()
Update OpenAPI definition
You need to update your OpenAPI definition by adding service account bearer auth to the securityDefinitions section. You can then reference this security definition to a particular path or the host.
securityDefinitions:
  bearer:
    authorizationUrl: ""
    flow: "implicit"
    type: "oauth2"
    x-google-issuer: "email_of_sa"
    x-google-jwks_uri: "https://www.googleapis.com/robot/v1/metadata/x509/email_of_sa"
    x-google-jwt-locations:
      - header: "Authorization"
        value_prefix: "Bearer "
Once you’ve added the new security definition, your YAML file should look something like the following.
swagger: "2.0"
info:
title: Example API
description: Sample API on Cloud Endpoints with a Cloud Run backend
version: 0.1.0
host: gateway-1234abcd.uc.a.run.app
schemes:
- https
produces:
- application/json
x-google-backend:
address: https://example-api-1234abcd.uc.a.run.app
protocol: h2
security:
- bearer: []
paths:
/api/status:
get:
summary: Status of the server
operationId: get_status
responses:
"200":
description: Success
schema:
type: string
securityDefinitions:
api_key:
type: "apiKey"
name: "key"
in: "query"
bearer:
authorizationUrl: ""
flow: "implicit"
type: "oauth2"
x-google-issuer: "email_of_sa"
x-google-jwks_uri: "https://www.googleapis.com/robot/v1/metadata/x509/email_of_sa"
x-google-jwt-locations:
- header: "Authorization"
value_prefix: "Bearer "
I have replaced the api_key authentication, on the host, with service account bearer authentication.
Don’t forget to update the bearer configuration to match your service account details.
Deploy the OpenAPI definition, rebuild and deploy your gateway service. Once done, generate a bearer token and start making requests to the proxy. Ensure you place the bearer token in your Authorization header.
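As a minimal sketch (assuming the requests library is installed; the gateway hostname is a placeholder and the token comes from the generator script above), the request looks like this:

# call_with_bearer_token.py - a minimal sketch, assuming the requests library is
# installed. The gateway hostname below is a placeholder and TOKEN is a signed JWT
# produced by the generator script above.
import requests

GATEWAY_URL = 'https://gateway-1234abcd.uc.a.run.app'  # your ESPv2 proxy hostname
TOKEN = 'A_SIGNED_JWT'                                 # output of generate_jwt()

resp = requests.get(f'{GATEWAY_URL}/api/status',
                    headers={'Authorization': f'Bearer {TOKEN}'})
print(resp.status_code, resp.json())  # 200 {'message': 'Server is up!'} when the token is valid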
Bonus - Using different authentication methods
As a quick bonus, this section describes how you can use different types of authentication on different paths.
You will need to add the additional routes to your Python app and update your OpenAPI definition accordingly. I’ve added two more routes to separate the different authentication methods.
Once you’ve updated your OpenAPI definition, deploy it to Cloud Endpoints. You will need to rebuild and deploy your proxy service based on this new config. Finally, package and deploy your API service. A quick test of both routes is sketched after the code below.
swagger: "2.0"
info:
title: API Services
description: Sample API on Cloud Endpoints with a Cloud Run backend
version: 0.1.0
host: gateway-api-1234abcd.uc.a.run.app
schemes:
- https
produces:
- application/json
x-google-backend:
address: https://example-api-1234abcd.uc.a.run.app
protocol: h2
paths:
/api/status:
get:
summary: Status of the server
operationId: get_status
responses:
"200":
description: Success
schema:
type: string
/api/auth/apikey:
get:
summary: Status of the server
security:
- api_key: []
operationId: get_auth_apikey
responses:
"200":
description: Success
schema:
type: string
/api/auth/bearer:
get:
summary: Status of the server
security:
- bearer: []
operationId: get_auth_bearer
responses:
"200":
description: Success
schema:
type: string
securityDefinitions:
api_key:
type: "apiKey"
name: "key"
in: "query"
bearer:
authorizationUrl: ""
flow: "implicit"
type: "oauth2"
x-google-issuer: "email_of_sa"
x-google-jwks_uri: "https://www.googleapis.com/robot/v1/metadata/x509/email_of_sa"
x-google-jwt-locations:
- header: "Authorization"
value_prefix: "Bearer "
from flask import Flask, jsonify

app = Flask(__name__)


@app.route('/api/status', methods=['GET'])
def get_status():
    return jsonify(message='Server is up!'), 200


@app.route('/api/auth/apikey', methods=['GET'])
def get_auth_apikey():
    return jsonify(message='Congratulations. You have completed apikey auth.',
                   auth='apikey'), 200


@app.route('/api/auth/bearer', methods=['GET'])
def get_auth_sabearer():
    return jsonify(message='Congratulations. You have completed service account bearer auth.',
                   auth='service account bearer'), 200


@app.errorhandler(400)
def bad_request_error(e):
    return jsonify(code=400,
                   message='Bad request',
                   detailedMessage='{}'.format(e)), 400


@app.errorhandler(404)
def not_found_error(e):
    return jsonify(code=404,
                   message='Path not found',
                   detailedMessage='{}'.format(e)), 404


@app.errorhandler(405)
def method_not_allowed_error(e):
    return jsonify(code=405,
                   message='Method not allowed',
                   detailedMessage='{}'.format(e)), 405


@app.errorhandler(Exception)
def internal_server_error(e):
    return jsonify(code=500,
                   message='Internal server error',
                   detailedMessage='{}'.format(e)), 500


if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080)
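Once everything is redeployed, you can verify that each route enforces its own authentication method. Here’s a minimal sketch, assuming the requests library is installed; the gateway hostname, API key and bearer token are placeholders.

# test_mixed_auth.py - a minimal sketch, assuming the requests library is installed.
# The gateway hostname, API key and bearer token below are placeholders.
import requests

GATEWAY_URL = 'https://gateway-api-1234abcd.uc.a.run.app'  # your ESPv2 proxy hostname
API_KEY = 'YOUR_API_KEY'                                   # API key from the console
TOKEN = 'A_SIGNED_JWT'                                     # output of the JWT generator script

# /api/status has no security requirement, so any request through the proxy succeeds
print(requests.get(f'{GATEWAY_URL}/api/status').status_code)

# /api/auth/apikey requires the API key as a "key" query parameter
print(requests.get(f'{GATEWAY_URL}/api/auth/apikey', params={'key': API_KEY}).status_code)

# /api/auth/bearer requires a signed JWT in the Authorization header
print(requests.get(f'{GATEWAY_URL}/api/auth/bearer',
                   headers={'Authorization': f'Bearer {TOKEN}'}).status_code)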
Resources
- Getting started with Endpoints for Cloud Run
- Cloud Endpoints architecture
- OpenAPI definition & OpenAPI version 2.0
- Github page for ESPv2
- Terraform, Python and Dockerfile files on Github