Key Takeaways
- Developing a microservices architecture introduces new challenges that must be addressed, such as scalability, security, and observability.
- Microservicilities provide a list of cross-cutting concerns for implementing microservices correctly.
- Kubernetes is an excellent starting point for implementing these microservicilities, but it leaves some gaps.
- Each service must be deployable independently without requiring a chain of deployment of other services.
- Dockerless tools let you create Linux containers without requiring Docker.
In a microservices architecture, an application is composed of several interconnected services that work together to produce the required business functionality. So a typical enterprise microservices architecture looks like this:
In the beginning, it might seem easy to implement an application using a microservices architecture.
But doing it properly is not an easy journey, as there are challenges to address that weren't present with a monolithic architecture. These include fault tolerance, service discovery, scaling, logging, and tracing, to mention a few.
To solve these challenges, every microservice should implement what we at Red Hat named Microservicilities.
The term refers to a list of cross-cutting concerns that a service must implement apart from the business logic to resolve these challenges.
These concerns are summarized in the following diagram:
In part one and part two of this series, we covered how to implement these microservicilities using Quarkus and Istio, respectively. But one microservicility remained uncovered in those articles: Pipeline.
In a microservices architecture, we should deploy services independently, without any kind of deployment orchestration. Having no orchestration among services means that we don't need to deploy and release the whole application every time; we release only a small part of it.
Releasing small portions of the application has some advantages:
- It decreases the chance of introducing a breaking change into the application.
- It is easier to deploy and roll back in case of an error.
- It lets you increase the frequency of deployments to production.
For this reason, each service should have its own deployment pipeline, so that it can be deployed at any time following its own rules and flow.
Having one deployment pipeline for each service opens some challenges that we need to address:
- How to implement and manage several pipelines.
- How to automate the deployments for all services.
- How to reuse parts of the pipelines across the services yet maintain their independence.
- How to execute them in a cloud environment.
The answer to most of these questions is pipeline-as-code. With this technique, continuous delivery pipelines are defined in files (typically YAML) and treated like any other code. Since pipelines are defined as code, they should be placed under source control, which makes them reusable, branchable, and taggable. Even more important, the service code and its delivery pipeline can live together in the same repository.
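For instance, a service repository might be laid out as follows (a hypothetical layout for illustration; the demo project used later in this article keeps its pipeline files under src/main/tekton):

```
hello-world/
├── pom.xml                  # service build definition
├── src/main/java/           # business logic
├── src/main/kubernetes/     # Kubernetes deployment manifests
└── src/main/tekton/         # pipeline definitions (Tasks, TaskRuns, Pipeline)
```

This way, a change to the pipeline is reviewed, branched, and tagged together with the service code it builds.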
Kubernetes is becoming the de facto tool for deploying microservices. It's an open-source system for automating, orchestrating, scaling, and managing containers.
As we saw in the two previous articles, three of the ten microservicilities are covered by Kubernetes itself. If we add Istio, five more are implemented: discovery, resiliency, authentication, monitoring, and tracing. Using Kubernetes and Istio is a good start, but what about Pipeline? How can we implement a Kubernetes-native continuous delivery pipeline? This is where Tekton comes in.
Tekton
Tekton is a Kubernetes-native solution for building CI/CD pipelines; it is installed and run as a Kubernetes extension and offers a set of Kubernetes custom resources that define reusable building blocks for our pipelines.
Entities
Tekton defines the following basic Kubernetes Custom Resource Definitions (CRDs) to build a pipeline:
- A PipelineResource defines referable resources such as source code repositories or container images.
- A Task defines a list of steps executed in sequential order. A step executes commands within a container. A task runs as a Kubernetes Pod containing as many containers as steps.
- A TaskRun instantiates a Task for execution with concrete inputs, outputs, and parameters.
- A Pipeline defines a list of tasks to execute in a particular order.
- A PipelineRun instantiates a Pipeline for execution with concrete inputs, outputs, and parameters. It automatically creates a TaskRun instance for each Task.

A Task may be run individually by creating a TaskRun object, or as part of a Pipeline.
Installation
We use minikube for the local Kubernetes cluster. Execute the following command to start it:
minikube start -p tekton --kubernetes-version='v1.19.0' --vm-driver='virtualbox' --memory=4096
[istio] minikube v1.17.1 on Darwin 11.3
Kubernetes 1.20.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.20.2
minikube 1.19.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.19.0
To disable this notice, run: 'minikube config set WantUpdateNotification false'
Using the virtualbox driver based on existing profile
You cannot change the memory size for an existing minikube cluster. Please first delete the cluster.
Starting control plane node istio in cluster istio
Restarting existing virtualbox VM for "istio" ...
Preparing Kubernetes v1.19.0 on Docker 19.03.12 ...
Verifying Kubernetes components...
Enabled addons: storage-provisioner, default-storageclass
Done! kubectl is now configured to use "tekton" cluster and "" namespace by default
With the Kubernetes cluster up and running, download the tkn CLI tool to interact with Tekton Pipelines. In this case, we download tkn 0.18.0 from the releases page.
Now let's install the Tekton controller by executing the following command:
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.24.0/release.yaml
namespace/tekton-pipelines created
podsecuritypolicy.policy/tekton-pipelines created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access created
role.rbac.authorization.k8s.io/tekton-pipelines-controller created
…
deployment.apps/tekton-pipelines-controller created
service/tekton-pipelines-controller created
horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook created
deployment.apps/tekton-pipelines-webhook created
service/tekton-pipelines-webhook created
Defining a Pipeline
Let's see how to define a continuous delivery pipeline in Tekton. This pipeline is composed of two tasks. The first task clones the project from GitHub, builds a Java project using Maven (it could be any other build tool or even a different language), creates a container image, and pushes it to a container registry. The second task deploys the service to a Kubernetes cluster.
But before we start developing the pipeline, let’s begin with a simple "Hello World" task to understand the Tekton concepts.
First Task
Create a task with one step that starts a busybox container and executes an echo command within it. Name the file hello-world-task.yml:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: helloworld
spec:
  steps:
  - name: sayhello
    image: busybox
    command:
    - echo
    args: ['Hello World']
kubectl apply -f src/main/tekton/hello-world-task.yml -n default
task.tekton.dev/helloworld created
Use tkn to list the currently registered tasks:
tkn task list
NAME DESCRIPTION AGE
helloworld 1 minute ago
At this point, we have only registered the task. To execute it, we need to instantiate it, either with the tkn CLI or by applying a TaskRun. Create a TaskRun file named hello-world-taskrun.yml that references the previous task in the taskRef name field.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: helloworld-run
spec:
  taskRef:
    name: helloworld
After we apply this file, the task is triggered.
kubectl apply -f src/main/tekton/hello-world-taskrun.yml
taskrun.tekton.dev/helloworld-run created
Use tkn to list the current task runs:
tkn tr list
NAME STARTED DURATION STATUS
helloworld-run 8 seconds ago --- Running(Pending)
A Task is just a Kubernetes Pod running in our Kubernetes cluster, and each of its steps is a container within the Pod. Execute the following command to get the current Pods:
kubectl get pods
NAME READY STATUS RESTARTS AGE
helloworld-run-pod-75dwt 0/1 Completed 0 45s
The Pod's status is Completed, as the task has finished. It ran one container, and the Pod's name derives from the name field in the metadata section of the TaskRun.
Describe the Pod and focus on the Containers section to get an overview of the containers started by the task:
kubectl describe pod helloworld-run-pod-75dwt
…
Containers:
step-sayhello:
Container ID: docker://e3bb6b747e6cbb76829e7658b7bf2976f3f09861775a4584eccfba0c143996a6
Image: busybox
...
Finally, we can see the logs of the TaskRun using the tkn logs command:
tkn tr logs helloworld-run
[sayhello] Hello World
We now have a basic knowledge of Tekton Tasks. Let’s move forward and implement a real pipeline with all required steps.
Pipeline Resource
A PipelineResource defines locations for input/output resources used by the steps in tasks. These usually take the form of a Git URL or a fully qualified container image name. Create a new PipelineResource file configuring the project repository:
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: git-source
spec:
  type: git
  params:
  - name: url
    value: https://github.com/lordofthejars/hello-world-tekton.git
kubectl apply -f src/main/tekton/git-pipeline-resource.yml -n default
pipelineresource.tekton.dev/git-source created
And create another PipelineResource to set the container image:
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: hello-world-image
spec:
  type: image
  params:
  - name: url
    value: quay.io/lordofthejars/hello-world-quarkus-tekton:1.0.0
kubectl apply -f src/main/tekton/container-image-resource.yml
pipelineresource.tekton.dev/hello-world-image created
tkn resource list
NAME TYPE DETAILS
git-source git url: https://github.com/lordofthejars/hello-world-tekton.git
hello-world-image image url: quay.io/lordofthejars/hello-world-quarkus-tekton:1.0.0
Task
Before creating the task, we create a Kubernetes Secret containing two key/value pairs for the Quay credentials: the Quay username and the Quay password. Replace the username and password values with your own.
kubectl create secret generic quay-credentials --from-literal=quay-username='yyy' --from-literal=quay-password='xxx'
secret/quay-credentials created
Then we create a Task that builds the project, creates a Linux container image, and pushes it to a container registry (Quay in this example, but it could be any other).

Since this is a Java project implemented with Quarkus, we take advantage of its integration with Jib to create container images in a Dockerless way. Building container images inside a running container (as is the case with Tekton) can be problematic, since it implies running a container build within a container. That's why it's important to use a Dockerless technology to create the image. Jib is an option for Java projects, but other generic, language-agnostic options are available, such as Buildah or Kaniko.
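As a sketch of how that Jib integration is wired up: the Quarkus project declares the quarkus-container-image-jib extension and can configure the image coordinates in application.properties, for example (the values below are hypothetical; the demo repository may configure them differently):

```properties
# application.properties (hypothetical values; the demo repository may differ)
quarkus.container-image.registry=quay.io
quarkus.container-image.group=lordofthejars
quarkus.container-image.name=hello-world-quarkus-tekton
quarkus.container-image.tag=1.0.0
# Pushing is enabled from the pipeline with -Dquarkus.container-image.push=true
```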
Let's create a task that executes the Maven package goal, setting the Quarkus option to build and push the container image to Quay. We set the Git repository and container image name as input and output resources (PipelineResource) and the Quay username/password as parameters (taken from a Kubernetes Secret).
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-app
spec:
  params:
  - name: quay-credentials-secret
    type: string
    description: name of the secret holding the quay credentials
    default: quay-credentials
  resources:
    inputs:
    - name: source
      type: git
    outputs:
    - name: builtImage
      type: image
  steps:
  - name: maven-build
    image: docker.io/maven:3.6-jdk-11-slim
    command:
    - mvn
    args:
    - clean
    - package
    - -Dquarkus.container-image.push=true
    env:
    - name: QUARKUS_CONTAINER_IMAGE_IMAGE
      value: $(outputs.resources.builtImage.url)
    - name: QUARKUS_CONTAINER_IMAGE_USERNAME
      valueFrom:
        secretKeyRef:
          name: $(params.quay-credentials-secret)
          key: quay-username
    - name: QUARKUS_CONTAINER_IMAGE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: $(params.quay-credentials-secret)
          key: quay-password
    workingDir: "/workspace/source/"
We see that the Quay credentials' secret name is set as a parameter in the previous file. The quay-credentials-secret parameter defaults to quay-credentials, the same name used in the kubectl create secret command.
The input resource is named source and is of type git. The output resource is the container image name.
In the env section, we define some environment variables that configure the Quarkus container image extension to build and push the container image:
- The container image name is defined from an output resource.
- The Quay credentials are injected from a Kubernetes Secret.
Register the task in the Kubernetes cluster:
kubectl apply -f src/main/tekton/build-push-task.yml
tkn task list
NAME DESCRIPTION AGE
build-app 3 hours ago
Finally, we instantiate this task by creating a TaskRun that links the input/output resources to the PipelineResources created previously and sets the secret name as a parameter.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: build-app-run
spec:
  params:
  - name: quay-credentials-secret
    value: quay-credentials
  resources:
    inputs:
    - name: source
      resourceRef:
        name: git-source
    outputs:
    - name: builtImage
      resourceRef:
        name: hello-world-image
  taskRef:
    name: build-app
Notice that each resourceRef field points to the name of a PipelineResource defined in the previous section.
When the TaskRun is applied, the build process starts: it clones the project, builds it with Maven, and creates and pushes the container image. We can stream the logs of the current task in real time using the tkn CLI:
tkn tr logs -f
? Select taskrun: [Use arrows to move, type to filter]
> build-app-run started 38 minutes ago
helloworld-run started 1 day ago
[git-source-source-w2ck5] {"level":"info","ts":1620922474.2565205,"caller":"git/git.go:169","msg":"Successfully cloned https://github.com/lordofthejars/hello-world-tekton.git @ ee3edc414c47f2bdeda9cc7c47ac54427d35a9dc (grafted, HEAD) in path /workspace/source"}
[git-source-source-w2ck5] {"level":"info","ts":1620922474.276228,"caller":"git/git.go:207","msg":"Successfully initialized and updated submodules in path /workspace/source"}
...
[maven-build] Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-plugin-parameter-documenter/2.0.6/maven-plugin-parameter-documenter-2.0.6.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-plugin-parameter-documenter/2.0.6/maven-plugin-parameter-documenter-2.0.6.pom (1.9 kB at 58 kB/s)
[maven-build] Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/reporting/maven-reporting-api/2.0.6/maven-reporting-api-2.0.6.pom
…
[maven-build] [INFO] [io.quarkus.container.image.jib.deployment.JibProcessor] Pushed container image quay.io/lordofthejars/hello-world-quarkus-tekton:1.0.0 (sha256:e71b0808af36ce3b9b980b2fb83886be2e06439ee454813052a115829e1e727c)
[maven-build] [INFO] [io.quarkus.deployment.QuarkusAugmentor] Quarkus augmentation completed in 24202ms
[maven-build] [INFO] ------------------------------------------------------------------------
[maven-build] [INFO] BUILD SUCCESS
[maven-build] [INFO] ------------------------------------------------------------------------
[maven-build] [INFO] Total time: 01:16 min
[maven-build] [INFO] Finished at: 2021-05-13T16:15:52Z
[maven-build] [INFO] ------------------------------------------------------------------------
[image-digest-exporter-88mpr] {"severity":"INFO","timestamp":"2021-05-13T16:15:53.025263494Z","caller":"logging/config.go:116","message":"Successfully created the logger."}
[image-digest-exporter-88mpr] {"severity":"INFO","timestamp":"2021-05-13T16:15:53.025374882Z","caller":"logging/config.go:117","message":"Logging level set to: info"}
[image-digest-exporter-88mpr] {"severity":"INFO","timestamp":"2021-05-13T16:15:53.025508459Z","caller":"imagedigestexporter/main.go:59","message":"No index.json found for: builtImage","commit":"b86a9a2"}
Pipeline
So far, we've created a simple task, but an actual continuous delivery/deployment pipeline is composed of several tasks: building and publishing a container image and deploying it to a Kubernetes cluster.
Let's create a task with two steps:
- The first step updates the Kubernetes Deployment file with the container image set in the PipelineResource. To edit the deployment YAML file, we use the yq tool. We also use the script section instead of command to show another way of running commands within a container.
- The second step executes the kubectl command to deploy the service.

For this task, we require three inputs: the deployment file location, the Git repository where the deployment file is stored, and the container image name built in the previous task.
Create a deploy-task.yml file with the following content:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: kubectl-deploy
spec:
  params:
  - name: deploymentFile
    type: string
    description: deployment file location
  resources:
    inputs:
    - name: source
      type: git
    - name: builtImage
      type: image
  steps:
  - name: update-deployment-file
    image: quay.io/lordofthejars/image-updater:1.0.0
    script: |
      #!/usr/bin/env ash
      yq eval -i '.spec.template.spec.containers[0].image = env(DESTINATION_IMAGE)' $DEPLOYMENT_FILE
    env:
    - name: DESTINATION_IMAGE
      value: "$(inputs.resources.builtImage.url)"
    - name: DEPLOYMENT_FILE
      value: "/workspace/source/$(inputs.params.deploymentFile)"
  - name: kubeconfig
    image: quay.io/rhdevelopers/tutorial-tools:0.0.3
    command: ["kubectl"]
    args:
    - apply
    - -f
    - /workspace/source/$(inputs.params.deploymentFile)
Notice that script content is used instead of command in the first step. Tekton parameters are passed into the script section through environment variables.
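To get a feel for what that first step does, here is a sketch you can run locally; sed stands in for yq (an assumption, since yq may not be installed on your machine), and the image name and file path are hypothetical examples:

```shell
#!/bin/sh
# Emulate the update-deployment-file step: inject the built image name,
# received via an environment variable, into a deployment manifest.
DESTINATION_IMAGE="quay.io/example/hello-world:1.0.0"
DEPLOYMENT_FILE="/tmp/deployment.yml"

# A minimal deployment manifest with a placeholder image
cat > "$DEPLOYMENT_FILE" <<'EOF'
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: hello-world
        image: placeholder
EOF

# Replace the container image, as the yq expression in the task does
sed -i "s|image: .*|image: $DESTINATION_IMAGE|" "$DEPLOYMENT_FILE"
grep 'image:' "$DEPLOYMENT_FILE"
```

The real task achieves the same effect with yq's env() function, which reads DESTINATION_IMAGE from the step's environment.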
kubectl apply -f src/main/tekton/deploy-task.yml
task.tekton.dev/kubectl-deploy created
tkn task ls
NAME DESCRIPTION AGE
build-app 18 hours ago
helloworld 1 day ago
kubectl-deploy 11 minutes ago
To deploy the service, we execute the kubectl command from within the Kubernetes cluster. For this to work, we need a Kubernetes Role that grants permissions to the default service account, because this is the service account running the Tekton pipelines in our example. This role must allow a running container to apply Kubernetes resources.
Create a pipeline-sa-role.yml file with the following content:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pipeline-extra-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  - endpoints
  - configmaps
  - secrets
  verbs:
  - "*"
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - replicasets
  - statefulsets
  verbs:
  - "*"
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - apps
  resources:
  - replicasets
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipeline-exta-role-binding
roleRef:
  kind: Role
  name: pipeline-extra-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: default
Register the role:
kubectl apply -f src/main/tekton/pipeline-sa-role.yml
role.rbac.authorization.k8s.io/pipeline-extra-role created
rolebinding.rbac.authorization.k8s.io/pipeline-exta-role-binding created
We are now ready to create a Tekton Pipeline combining both tasks. It's essential to define the pipeline parameters, as they are set where the Tekton tasks are referenced in the pipeline definition. We set the params values (quay-credentials-secret and deploymentFile) and reference the PipelineResources as inputs/outputs in both tasks.
Create a pipeline.yml file:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-world-pipeline
spec:
  resources:
  - name: appSource
    type: git
  - name: containerImage
    type: image
  tasks:
  - name: build-app
    taskRef:
      name: build-app
    params:
    - name: quay-credentials-secret
      value: quay-credentials
    resources:
      inputs:
      - name: source
        resource: appSource
      outputs:
      - name: builtImage
        resource: containerImage
  - name: kube-deploy
    taskRef:
      name: kubectl-deploy
    params:
    - name: deploymentFile
      value: src/main/kubernetes/kubernetes.yml
    runAfter:
    - build-app
    resources:
      inputs:
      - name: source
        resource: appSource
      - name: builtImage
        resource: containerImage
kubectl apply -f src/main/tekton/pipeline.yml
pipeline.tekton.dev/hello-world-pipeline created
We can list pipelines using the tkn CLI:
tkn pipeline list
NAME AGE LAST RUN STARTED DURATION STATUS
hello-world-pipeline 5 seconds ago --- --- --- ---
Describing the pipeline gives us an overview of the pipeline and what must be defined when running it.
tkn pipeline describe hello-world-pipeline
Name: hello-world-pipeline
Namespace: default
Resources
NAME TYPE
∙ appSource git
∙ containerImage image
Params
No params
Results
No results
Workspaces
No workspaces
Tasks
NAME TASKREF RUNAFTER TIMEOUT CONDITIONS PARAMS
∙ build-app build-app --- --- quay-credentials-secret: quay-credentials
∙ kube-deploy kubectl-deploy build-app --- --- deploymentFile: src/main/kubernetes/deployment.yml
PipelineRuns
No pipelineruns
We don't need to create any TaskRuns now; they are created automatically when a PipelineRun is applied:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: hello-world-pipeline-run
spec:
  resources:
  - name: appSource
    resourceRef:
      name: git-source
  - name: containerImage
    resourceRef:
      name: hello-world-image
  pipelineRef:
    name: hello-world-pipeline
In a PipelineRun, we link the PipelineResources to the Pipeline, just as we did when executing a TaskRun.
kubectl apply -f src/main/tekton/pipeline-run.yml
pipelinerun.tekton.dev/hello-world-pipeline-run created
Similar to TaskRun, we may stream the logs of a pipeline by executing the following command:
tkn pipeline logs -f
…
[kube-deploy : git-source-source-sdgld] {"level":"info","ts":1620978300.0659564,"caller":"git/git.go:169","msg":"Successfully cloned https://github.com/lordofthejars/hello-world-tekton.git @ b954dbc68e0aa7e4cfb6defeff00b1e4ded2889c (grafted, HEAD) in path /workspace/source"}
[kube-deploy : git-source-source-sdgld] {"level":"info","ts":1620978300.0972755,"caller":"git/git.go:207","msg":"Successfully initialized and updated submodules in path /workspace/source"}
[kube-deploy : kubeconfig] deployment.apps/hello-world created
If the build succeeds, the service is deployed on the current Kubernetes cluster. Get all the Pods to see what happened on the cluster:
kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world-59d47597c-qm9nr 1/1 Running 0 36m
hello-world-pipeline-run-build-app-xmvg2-pod-v4vps 0/4 Completed 0 37m
hello-world-pipeline-run-kube-deploy-bfqhd-pod-x68xh 0/3 Completed 0 36m
We see that two Pods have completed. The hello-world-pipeline-run-build-app Pod is the build-app task, which clones, builds, and tests the service, and finally creates and pushes the container image. The hello-world-pipeline-run-kube-deploy Pod is the kube-deploy task, which deploys the application.
Apart from the tasks, there is also a running Pod. This is the application Pod deployed during the pipeline execution.
Conclusions
Developing and implementing a microservices architecture is more challenging than developing a monolithic application. We believe that microservicilities can drive you to create services correctly in terms of application infrastructure.
In part one and part two of this series, we saw how to implement microservicilities using Quarkus or Istio, respectively, but we didn’t cover the Pipeline microservicility. This article demonstrated how to implement a basic continuous delivery pipeline using Tekton, a Kubernetes-native solution for building CI/CD pipelines.
One of Tekton's significant advantages is the ability to create container images in the same cluster where they are deployed. This reduces the friction that can arise when containers are built on some machines and deployed on others. Another advantage is how a pipeline is defined using YAML files: pipelines are stored alongside the source code, making them branchable, taggable, and versionable.

Sometimes it isn't necessary to define tasks from scratch, because the Tekton Catalog offers tasks ready to be used. Moreover, if you need to develop custom tasks, design them to be as open as possible, using parameters and input/output resources. That way, tasks can be reused within your organization to solve similar problems.
This article is just an introduction to Tekton. We've seen that a pipeline runs when a PipelineRun object is created in the cluster, but a trigger/event may also instantiate a Pipeline. An event can be, for example, a push to a particular branch of a GitHub repository. Triggers are an advanced topic, and you can learn more here.
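As a taste of what a trigger looks like, an EventListener might be sketched roughly as follows. This is an untested sketch: the resource names are hypothetical, and the exact fields depend on the Tekton Triggers version installed.

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: hello-world-listener
spec:
  serviceAccountName: default
  triggers:
  - name: github-push
    bindings:
    - ref: hello-world-binding    # maps fields of the webhook payload to params
    template:
      ref: hello-world-template   # creates a PipelineRun from those params
```

The EventListener exposes an endpoint; pointing a GitHub webhook at it causes every push to instantiate the pipeline.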
Source code demonstrated in this article may be found on this GitHub repository. Source code for part one and part two of this series may be found here and here, respectively.
About the Author
Alex Soto is a Director of Developer Experience at Red Hat. He is passionate about the Java world and software automation, and he believes in the open-source software model. Soto is the co-author of Manning | Testing Java Microservices and O’Reilly | Quarkus Cookbook and a contributor to several open-source projects. A Java Champion since 2017, he is also an international speaker and teacher at Salle URL University. You can follow him on Twitter (Alex Soto ⚛️) to stay tuned to what’s going on in the Kubernetes and Java worlds.