Provision Kubernetes Clusters using Config Connector: MCI part 2

Disclaimer: This blog contains opinions about Google technology. While I work at Google, this is my personal blog. Opinions stated here are my own, not those of my company.

In the first part of the post on multi-cluster ingress, we covered simpler options of exposing your cluster to receive external traffic: ClusterIP, NodePort, LoadBalancer, Ingress. In this part, we’ll provision Kubernetes clusters using Config Connector.

First of all, let us create a project and a cluster that will have Config Connector installed. You can start with the snippet below and modify it. Most importantly, substitute your [PROJECT_ID] and [BILLING_ACCOUNT]. You can also skip the project-creation part if you already have a project.
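
A rough sketch of that bootstrap is shown below, assuming the gcp-identity flavor of the install bundle; the cluster name cnrm-host and the service account name cnrm-system are placeholders of mine, and the release-bundle location and directory layout depend on the Config Connector version, so check them against the installation docs.

# Create a project and link it to a billing account (skip if you already have one).
gcloud projects create [PROJECT_ID]
gcloud beta billing projects link [PROJECT_ID] --billing-account [BILLING_ACCOUNT]
gcloud config set project [PROJECT_ID]

# Enable the GKE API and create the host cluster that will run Config Connector.
gcloud services enable container.googleapis.com
gcloud container clusters create cnrm-host --zone us-central1-a

# Create a service account for Config Connector and grant it permissions on the project.
gcloud iam service-accounts create cnrm-system
gcloud projects add-iam-policy-binding [PROJECT_ID] \
    --member serviceAccount:cnrm-system@[PROJECT_ID].iam.gserviceaccount.com \
    --role roles/owner
gcloud iam service-accounts keys create key.json \
    --iam-account cnrm-system@[PROJECT_ID].iam.gserviceaccount.com

# Install Config Connector into the host cluster and point the default namespace at the project.
gcloud container clusters get-credentials cnrm-host --zone us-central1-a
kubectl create namespace cnrm-system
kubectl create secret generic gcp-key --from-file key.json --namespace cnrm-system
gsutil cp gs://cnrm/latest/release-bundle.tar.gz .
tar -xzvf release-bundle.tar.gz
kubectl apply -f install-bundle-gcp-identity/
kubectl annotate namespace default cnrm.cloud.google.com/project-id=[PROJECT_ID]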

Secondly, we are going to create two clusters: one in North America and one in Europe. Create a YAML file following the example below and use kubectl apply to apply it, as shown after the manifest:

apiVersion: container.cnrm.cloud.google.com/v1alpha2
kind: ContainerCluster
metadata:
  name: cluster-na
spec:
  ipAllocationPolicy:
    useIpAliases: true
  location: us-central1-a
  initialNodeCount: 1
  removeDefaultNodePool: false
  loggingService: none
  masterAuth:
    username: "user"
    password: "password12345678"
    clientCertificateConfig:
      issueClientCertificate: false
---
apiVersion: container.cnrm.cloud.google.com/v1alpha2
kind: ContainerCluster
metadata:
  name: cluster-eu
spec:
  ipAllocationPolicy:
    useIpAliases: true
  location: europe-west2-a
  initialNodeCount: 1
  removeDefaultNodePool: false
  loggingService: none
  masterAuth:
    username: "user"
    password: "password12345678"
    clientCertificateConfig:
      issueClientCertificate: false
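
Assuming you saved the manifest above as clusters.yaml (the file name is arbitrary), apply it against the host cluster. Config Connector reconciles the resources asynchronously, so cluster creation takes several minutes; the ContainerCluster objects report their status:

kubectl apply -f clusters.yaml
kubectl get containerclusters
kubectl describe containercluster cluster-na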

For your reference, you can find more cluster creation samples in the Config Connector Samples repo. For the comprehensive list of all the settings, check crds.yaml in your install bundle (this is an example from my installation).

After the clusters are created, you can see them in the Cloud Console UI. Note that only two of the clusters have the managed-by-cnrm label, which indicates that they are managed by Config Connector. The original cluster, which hosts the Config Connector extension, was created directly and therefore does not carry the label.

Two clusters, in different regions, managed by Config Connector.
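
If you prefer the command line over the UI, you should also be able to see the labels with gcloud; resourceLabels is the corresponding field name in the GKE API:

gcloud container clusters list --format="table(name, location, resourceLabels)"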

As a side note, CNRM stands for Cloud Native Resource Management.

Multi-regional K8s workloads

Let us switch kubectl context to our North American cluster.

gcloud container clusters get-credentials cluster-na --zone=us-central1-a

Next, use kubectl apply to configure a deployment and a service; the apply and verification commands follow the manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app-na-deployment
  labels:
    app: node-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
      - name: node-app-container
        image: bulankou/node-hello-world
        env:
        - name: HELLO_MESSAGE
          value: "Hello from North America!"
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 10
          failureThreshold: 10
          successThreshold: 1
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 10
          failureThreshold: 20
          successThreshold: 1
---
apiVersion: v1
kind: Service
metadata:
  name: node-app-backend
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{}}}'
spec:
  type: ClusterIP
  selector:
    app: node-app
  ports:
  - port: 80
    targetPort: 8080
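
Assuming the manifest above is saved as node-app-na.yaml (again, the file name is arbitrary), apply it and check that the pods and the backend service came up:

kubectl apply -f node-app-na.yaml
kubectl get pods -l app=node-app
kubectl get service node-app-backend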

Similarly, let us switch the context to the Europe cluster and configure it:

gcloud container clusters get-credentials cluster-eu --zone=europe-west2-a

This is the configuration to run on the Europe cluster. As you can see, the only difference is the message that the app returns, set via an environment variable.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app-eu-deployment
  labels:
    app: node-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
      - name: node-app-container
        image: bulankou/node-hello-world
        env:
        - name: HELLO_MESSAGE
          value: "Hello from Europe!"
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 10
          failureThreshold: 10
          successThreshold: 1
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 10
          failureThreshold: 20
          successThreshold: 1
---
apiVersion: v1
kind: Service
metadata:
  name: node-app-backend
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{}}}'
spec:
  type: ClusterIP
  selector:
    app: node-app
  ports:
  - port: 80
    targetPort: 8080

Verify that the pods are running and return the expected message:

$ kubectl exec -it <one_of_the_pods_from_kubectl_get_pods> -- curl localhost:8080
Hello from Europe!

Note that for both services we used the cloud.google.com/neg annotation to automatically create Network Endpoint Groups (NEGs). In the next post, we will connect these NEGs to our backend service.
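
To confirm that the annotation indeed created the NEGs, you can list them with gcloud; you should see one NEG per zone in which the service's pods run:

gcloud compute network-endpoint-groups list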

That’s it for today! In this post, we went through the steps to provision Kubernetes clusters using Config Connector. We will continue setting up Kubernetes multi-cluster ingress with Config Connector in the next post.
