The Google Cloud team has made the Google Cloud Config Connector generally available. Once installed into a Kubernetes cluster, it allows users to configure services such as databases and virtual machines as if they were native Kubernetes resources.
The Config Connector is a Kubernetes operator that relies on custom resource definitions (CRDs) that map to Google Cloud Platform (GCP) resources. With this, Google states that you can "manage your Google Cloud infrastructure the same way you manage your Kubernetes applications, reducing the complexity and cognitive load for developers." Why does this matter? In a blog post about the release of the Config Connector, Google says that it's about infrastructure consistency across your application components.
[...] applications that run on Kubernetes often use resources that live outside of Kubernetes, for example, Cloud SQL or Cloud Storage, and those resources typically don’t use the same approach to configuration. This can cause friction between teams, and force developers into frequent "context switching". Further, configuring and operating those applications is a multi-step process: configuring the external resources, then the Kubernetes resources, and finally making the former available to the latter.
Kubernetes operates using a control loop: it accepts declarative definitions of a desired state, creates the defined resources, and then manages those resources by continually reconciling the current state with the desired state. The Config Connector takes non-Kubernetes resources and lets users leverage the Kubernetes experience to provision and manage them. While the Config Connector can be installed into any Kubernetes cluster, whether running in GCP or anywhere else, the supported resources are all GCP services. These GCP resources include BigQuery, Cloud Bigtable, Compute Engine, Firestore, Pub/Sub, Memorystore, Cloud Spanner, Cloud SQL, and Cloud Storage.
Service definitions for these Google Cloud resources go into YAML files, like any other Kubernetes resource. For example, the following YAML creates a Cloud Spanner instance.
apiVersion: spanner.cnrm.cloud.google.com/v1beta1
kind: SpannerInstance
metadata:
  labels:
    label-one: "value-one"
  name: spannerinstance-sample
spec:
  config: regional-us-west1
  displayName: Spanner Instance Sample
  numNodes: 1
As with other Kubernetes resources, an updated definition results in an updated resource. Janakiram MSV at The New Stack showed how to create a Cloud SQL database and then move it to a new geography by modifying the resource definition.
Similar to how Kubernetes objects are modified with "kubectl apply", GCP resources can also be updated. For example, you can modify the YAML file to change the zone of the SQL database instance and apply the new definition. This action calls the corresponding Cloud SQL API to move the database to the new zone.
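As a rough sketch of what that looks like, a Cloud SQL instance managed through the Config Connector could be declared along the following lines; the instance name, region, zone, and tier are illustrative, and exact field names may vary between Config Connector versions. Changing the location fields and re-applying the manifest is what triggers the move described above.

apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLInstance
metadata:
  name: sqlinstance-sample          # illustrative name
spec:
  databaseVersion: MYSQL_5_7
  region: us-central1               # editing this and re-applying relocates the instance
  settings:
    tier: db-n1-standard-1
    locationPreference:
      zone: us-central1-a           # preferred zone within the region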
In addition to desired state management, users inherit other Kubernetes experiences for their custom resources. This includes role-based access control (RBAC), event visibility, secret storage for sensitive values, and cross-resource dependency management that's eventually consistent.
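To sketch how the inherited RBAC piece could work, a standard Kubernetes Role could grant a team read-only access to the Config Connector's Spanner resources within a single namespace; the namespace and role names below are hypothetical.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a                 # hypothetical namespace
  name: spanner-viewer              # hypothetical role name
rules:
- apiGroups: ["spanner.cnrm.cloud.google.com"]   # the Spanner CRD group used above
  resources: ["spannerinstances"]
  verbs: ["get", "list", "watch"]                # read-only access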
This isn't the first attempt to unify management of fundamentally different types of resources. Terraform also treats infrastructure as code, and provides a consistent workflow to provision and manage a variety of Google Cloud services, in addition to other cloud vendors' services. Traditional config management products like Chef also support a consistent approach to Google Cloud services. And vendor-neutral standards like the Open Service Broker API emerged to help application developers provision and attach to service instances. Google appears to be betting on Kubernetes-driven standardization in the long term, and has deprecated its own platform-neutral service broker.