
Adding Google Cloud Platform Ingress to a service

I wanted the Ingress service to have HTTPS, have Google manage the certificate for me, and automatically redirect to HTTPS. There are several steps to accomplishing this:

  1. Create a static IP address
  2. Redirect the DNS entry for the external domain name to the new static/public/external IP. Each cloud provider refers to these differently; I like the term static IP, as it is a reserved IP that doesn't change and is reachable externally.
  3. Create a frontend config for the external domain name
  4. Create a managed certificate for the external domain name
  5. (Optionally) Create a backend config for a custom health check.
  6. Annotate the Ingress with the backend config, frontend config, managed cert, and static IP.
  7. Make sure everything is running. There are delays with the DNS redirect, then with the cert issuing.

Creating a static/public IP

Go to Networking->VPC Networking->IP Addresses

Choose "Reserve External IP"

Name: <service>-<domain>-<tld>
Description: Static IP for <Service> running on GKE cluster <cluster>
Network Service Tier: Premium
IP Version: IPV4
Type: Global

Choosing the Standard tier results in errors saying the static IP address was not found; the GKE Ingress controller needs a Global address, which requires the Premium tier.
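
If you prefer the CLI, the equivalent gcloud commands look roughly like this (the address name service-domain-com matches the annotation used later; adjust to your own naming):

# Reserve a global external IPv4 address (global addresses are always Premium tier)
gcloud compute addresses create service-domain-com \
    --global --ip-version=IPV4

# Print the reserved address, to use in the DNS A record
gcloud compute addresses describe service-domain-com \
    --global --format='get(address)'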

Redirect DNS

I logged into GoDaddy. Their site is always changing, but basically I needed to get to the DNS management page.

I added an 'A' Record for the external domain that pointed to the IP I just created.

Type: A
Name: <subdomain name>
Data: <ip address>
TTL: 1 hour 

Now it's time to wait until it starts resolving. Full propagation can take 24 hours worldwide, but I only need to wait about two minutes for Google to see it. In this case the domain previously existed with a TTL of 1 hour, so I expect caching resolvers to take at least an hour to check the entry again.
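
A quick way to watch the record start resolving, assuming dig is installed (app.example.com stands in for the real external domain):

# Query the A record directly; repeat until the new IP shows up
dig +short app.example.com A

# Or ask Google's public resolver specifically
dig +short @8.8.8.8 app.example.com A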

Create a FrontendConfig

In my Helm chart's values I have a variable named KUBERNETES_ENV that allows me to change the chart depending on which Kubernetes cluster it is deployed to. This allows different API versions for different Kubernetes versions, as well as custom resource definitions. FrontendConfig is a GKE custom resource, and it's really simple: I just enable the redirect to HTTPS. Here it is:

{{ if ( eq .Values.KUBERNETES_ENV "gke") }}
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: service-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: FOUND  # 302 redirect; MOVED_PERMANENTLY_DEFAULT would give a 301
{{ end }}
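
Once the Ingress at the end of this post is deployed, the redirect can be verified with curl (again, app.example.com is a stand-in for the real external domain):

# Expect "HTTP/1.1 302 Found" with a Location: https://... header
curl -sI http://app.example.com/ | head -n 5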

Create a ManagedCert

The next part is to create a managed certificate. I want all traffic going over HTTPS, so a TLS certificate is needed. I could use Let's Encrypt to generate a cert, but it is easier to have Google manage it in the simple case. Once again it is a custom resource, so it's easy to create:

{{ if ( eq .Values.KUBERNETES_ENV "gke") }}
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: service-managed-cert
spec:
  domains:
    - {{ .Values.EXTERNAL_HOSTS }}
{{ end }}
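
Provisioning can take a while after the DNS record resolves; the certificate status can be watched with kubectl (assuming the release is in the default namespace):

# Status moves from Provisioning to Active once Google can validate the domain
kubectl get managedcertificate service-managed-cert
kubectl describe managedcertificate service-managed-cert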

Create a BackendConfig

There are two levels of health checks when using Google Cloud Load Balancing: the load balancer's own health check, and the Kubernetes cluster's liveness/readiness probes.

They have different capabilities and are configured differently. The BackendConfig below is for the Google Cloud Load Balancing side.

In this case the service I'm setting up returns a "302 Found" when querying the / endpoint and redirects to the /login endpoint. This will fail the load balancer's health check, which must see a 200. So to have the /login endpoint used instead, I created this BackendConfig:

{{ if ( eq .Values.KUBERNETES_ENV "gke") }}
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: help-backend-config
spec:
  healthCheck:
    checkIntervalSec: 15
    port: 80
    type: HTTP
    requestPath: /login
{{ end }}
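
One caveat: the GKE documentation attaches a BackendConfig to the Service rather than the Ingress, using the cloud.google.com/backend-config annotation. In a chart the annotation belongs in the Service manifest; for a quick test it can be applied imperatively (my-service is a placeholder for the real Service name):

# Associate the BackendConfig with the Service's default port
kubectl annotate service my-service \
    cloud.google.com/backend-config='{"default": "help-backend-config"}'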

Annotate the service

The Cloud Load Balancer that Google Cloud Platform creates needs to know which configs apply, so the Ingress is annotated with the managed certificate, frontend config, and static IP name. This is also what allows the load balancer's health checks to run; the forwarding rules will only pass traffic once the backend service is running and responding.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  labels:
    ...
  {{ if ( eq .Values.KUBERNETES_ENV "gke") }}
  annotations:
    networking.gke.io/managed-certificates: service-managed-cert
    networking.gke.io/v1beta1.FrontendConfig: service-frontend-config
    # Delete if there is no backend config. (Per the GKE docs this annotation
    # belongs on the Service; see the note in the BackendConfig section.)
    cloud.google.com/backend-config: help-backend-config
    kubernetes.io/ingress.global-static-ip-name: service-domain-com
    kubernetes.io/ingress.class: "gce"
    # allow-http must stay "true" so the HTTP frontend exists to issue the redirect
    kubernetes.io/ingress.allow-http: "true"
  {{ end }}

spec:
  ... 
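
After deploying, kubectl shows whether the Ingress picked up the static IP and certificate; the Events at the bottom of describe will flag annotation problems:

kubectl get ingress my-ingress
kubectl describe ingress my-ingress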

Troubleshooting

"the given static IP name doesn't translate to an existing static IP"

The static IP is probably Standard tier/regional rather than Premium tier/global; the GKE Ingress controller only finds Global addresses.
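
To confirm, list the reserved addresses with gcloud; a global address shows no region, and describing it with --global should succeed:

# Regional (Standard tier) addresses won't be found by a global Ingress
gcloud compute addresses list
gcloud compute addresses describe service-domain-com --global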

The Google Cloud Load Balancing health check is failing

This can happen because the health check endpoint is wrong. The endpoint must return a 200, not a 302 redirect or anything else. Health check logging can be turned on; after making another change and saving again, a log entry might be triggered. It is very hit-or-miss, and it is a shame that a health check can't be run manually. Even after turning logging on, the logs didn't show a health check at every interval, which was very odd.
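
A sanity check that sidesteps the flaky logging is to hit the health check path directly and confirm the status code (my-service is again a stand-in for the real Service name):

# Forward the Service port locally, then print just the HTTP status code
kubectl port-forward service/my-service 8080:80 &
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/login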