Kubernetes deployment strategies explained
seifrajhi committed Jan 26, 2024
1 parent c0f0b56 commit 1d4c673
Showing 63 changed files with 2,859 additions and 0 deletions.
4 changes: 4 additions & 0 deletions .github/workflows/e2e-test.yml
@@ -4,9 +4,13 @@ on:
  push:
    branches:
      - main
    paths:
      - 'retail-store-sample-app/**'
  pull_request:
    branches:
      - main
    paths:
      - 'retail-store-sample-app/**'
  workflow_dispatch:

jobs:
2 changes: 2 additions & 0 deletions k8s-deployment-strategies/.gitignore
@@ -0,0 +1,2 @@
app/vendor
.DS_Store
130 changes: 130 additions & 0 deletions k8s-deployment-strategies/README.md
@@ -0,0 +1,130 @@
Kubernetes deployment strategies
================================

> In Kubernetes there are several ways to release an application; you have
to choose the right strategy carefully to make your infrastructure resilient.

- [recreate](recreate/): terminate the old version and release the new one
- [ramped](ramped/): release a new version in a rolling update fashion, one
  instance after the other (see the sketch after this list)
- [blue/green](blue-green/): release a new version alongside the old version
then switch traffic
- [canary](canary/): release a new version to a subset of users, then proceed
to a full rollout
- [a/b testing](ab-testing/): release a new version to a subset of users in a
  precise way (HTTP headers, cookie, weight, etc.). This doesn't come out of the
  box with Kubernetes; it implies extra work to set up a smarter
  load-balancing system (Istio, Linkerd, Traefik, custom nginx/haproxy, etc.).
- [shadow](shadow/): release a new version alongside the old version. Incoming
traffic is mirrored to the new version and doesn't impact the
response.
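
The recreate and ramped strategies map directly onto the Deployment `strategy`
field. Here is a minimal sketch of a ramped (rolling update) Deployment, reusing
the demo image from this repo; the name and replica count are illustrative, and
swapping the type to `Recreate` gives the recreate strategy:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate      # "Recreate" terminates all old pods before starting new ones
    rollingUpdate:
      maxSurge: 1            # at most one extra pod during the rollout
      maxUnavailable: 0      # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - containerPort: 8080
```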

![deployment strategy decision diagram](decision-diagram.png)

Before experimenting, check out the following resources:
- [CNCF presentation](https://www.youtube.com/watch?v=1oPhfKye5Pg)
- [CNCF presentation slides](https://www.slideshare.net/EtienneTremel/kubernetes-deployment-strategies-cncf-webinar)
- [Kubernetes deployment strategies](https://container-solutions.com/kubernetes-deployment-strategies/)
- [Six Strategies for Application Deployment](https://thenewstack.io/deployment-strategies/)
- [Canary deployment using Istio and Helm](https://github.com/etiennetremel/istio-cross-namespace-canary-release-demo)
- [Automated rollback of Helm releases based on logs or metrics](https://container-solutions.com/automated-rollback-helm-releases-based-logs-metrics/)

## Getting started

These examples were created and tested on [Minikube](http://github.com/kubernetes/minikube)
running with Kubernetes v1.25.2 and [Rancher Desktop](https://rancherdesktop.io/) running
with Kubernetes 1.23.6.

On macOS the hypervisor VM does not have external connectivity, so Docker image
pulls will fail. To resolve this, install another driver such as
[VirtualBox](https://www.virtualbox.org/) and add `--vm-driver virtualbox`
to the command so that images can be pulled.

```
$ minikube start --kubernetes-version v1.25.2 --memory 8192 --cpus 2
```

## Visualizing using Prometheus and Grafana

The following steps describe how to set up Prometheus and Grafana to visualize
the progress and performance of a deployment.

### Install Helm3

To install Helm3, follow the instructions provided on their
[website](https://github.com/kubernetes/helm/releases).
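
The Prometheus and Grafana charts used below come from their community Helm
repositories; if they are not configured locally yet, add them first (repository
URLs as published upstream):

```
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo update
```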

### Install Prometheus

```
$ helm install prometheus prometheus-community/prometheus \
--create-namespace --namespace=monitoring
```

### Install Grafana

```
$ helm install grafana \
--namespace=monitoring \
--set=adminUser=admin \
--set=adminPassword=admin \
--set=service.type=NodePort \
grafana/grafana
```

### Setup Grafana

Now that Prometheus and Grafana are up and running, you can access Grafana:

```
$ minikube service grafana
```

To log in, use username `admin` and password `admin`.

Then you need to connect Grafana to Prometheus; to do so, add a data source:

```
Name: prometheus
Type: Prometheus
Url: http://prometheus-server
Access: Server
```
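
Alternatively, the data source can be declared at install time, a sketch assuming
the Grafana chart's standard `datasources` provisioning value; `access: proxy` is
the YAML equivalent of the "Server" access mode:

```
# grafana-values.yaml -- hypothetical values file, passed with
# `helm install grafana grafana/grafana -f grafana-values.yaml ...`
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    - name: prometheus
      type: prometheus
      url: http://prometheus-server
      access: proxy      # "Server" access in the Grafana UI
      isDefault: true
```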

Create a dashboard with a Time series panel, or import
the [JSON export](grafana-dashboard.json). Use the following query:

```
sum(rate(http_requests_total{app="my-app"}[2m])) by (version)
```

Since we installed Prometheus with default settings, it uses the default scrape
interval of `1m`, so the query range cannot be lower than that.

To better distinguish the versions on the graph, add `{{version}}` in the legend field.

#### Example graph

Recreate:

![Kubernetes deployment recreate](recreate/grafana-recreate.png)

Ramped:

![Kubernetes deployment ramped](ramped/grafana-ramped.png)

Blue/Green:

![Kubernetes deployment blue-green](blue-green/grafana-blue-green.png)

Canary:

![Kubernetes deployment canary](canary/grafana-canary.png)

A/B testing:

![kubernetes ab-testing deployment](ab-testing/grafana-ab-testing.png)

Shadow:

![kubernetes shadow deployment](shadow/grafana-shadow.png)
127 changes: 127 additions & 0 deletions k8s-deployment-strategies/ab-testing/README.md
@@ -0,0 +1,127 @@
A/B testing using Istio
=======================

> Version B is released to a subset of users under specific conditions.

![kubernetes ab-testing deployment](grafana-ab-testing.png)

A/B testing deployments consist of routing a subset of users to a new
functionality under specific conditions. It is usually a technique for making
business decisions based on statistics, rather than a deployment strategy.
However, it is related and can be implemented by adding extra functionality to a
canary deployment, so we will briefly discuss it here.

This technique is widely used to test conversion of a given feature and only
roll out the version that converts the most.

Here is a list of conditions that can be used to distribute traffic amongst the
versions:

- Weight
- Cookie value
- Query parameters
- Geolocation
- Technology support: browser version, screen size, operating system, etc.
- Language

## Steps to follow

1. version 1 is serving HTTP traffic using Istio
1. deploy version 2
1. wait until all instances are ready
1. update the Istio VirtualService with 90% of traffic targeting version 1 and
10% of traffic targeting version 2

## In practice

Before starting, it is recommended to know the basic concepts of the
[Istio routing API](https://istio.io/blog/2018/v1alpha3-routing/).

### Deploy Istio

In this example, Istio 1.13.4 is used. To install Istio, follow the
[instructions](https://istio.io/latest/docs/setup/install/helm/) from the
Istio website.

Automatic sidecar injection is controlled per namespace. Label the default
namespace to enable it:

```
$ kubectl label namespace default istio-injection=enabled
```
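
You can check that the label is present using kubectl's standard label-column
flag:

```
$ kubectl get namespace default -L istio-injection
```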

### Deploy both applications

Back in the a/b testing directory of this repo, deploy both applications. With
injection enabled, the Istio sidecar container used to proxy requests is added
automatically:

```
$ kubectl apply -f app-v1.yaml -f app-v2.yaml
```

Expose both services via the Istio Gateway and create a VirtualService to match
requests to the my-app-v1 service:

```
$ kubectl apply -f ./gateway.yaml -f ./virtualservice.yaml
```
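
As a rough sketch (not a verbatim copy of `./gateway.yaml`; the host and
selector are inferred from the commands in this walkthrough), the Gateway looks
like:

```
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-app
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - my-app.local
```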

At this point, if you make a request against the Istio ingress gateway with the
given host `my-app.local`, you should only see version 1 responding:

```
$ curl $(minikube service istio-ingressgateway -n istio-system --url | head -n1) -H 'Host: my-app.local'
Host: my-app-v1-6d577d97b4-lxn22, Version: v1.0.0
```

### Shift traffic based on weight

Apply the Istio VirtualService rule based on weight:

```
$ kubectl apply -f ./virtualservice-weight.yaml
```
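
A weighted rule of this kind looks roughly like the following (a sketch based on
the service names in this example, not a verbatim copy of
`./virtualservice-weight.yaml`):

```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app.local
  gateways:
  - my-app
  http:
  - route:
    - destination:
        host: my-app-v1   # Kubernetes Service for version 1
      weight: 90
    - destination:
        host: my-app-v2   # Kubernetes Service for version 2
      weight: 10
```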

You can now test whether the traffic is correctly split between both versions:

```
$ service=$(minikube service istio-ingressgateway -n istio-system --url | head -n1)
$ while sleep 0.1; do curl "$service" -H 'Host: my-app.local'; done
```

You should see approximately 1 request out of 10 ending up on version 2.

In the `./virtualservice-weight.yaml` file, you can edit the weight of each
destination and apply the updated rule to Minikube:

```
$ kubectl apply -f ./virtualservice-weight.yaml
```

### Shift traffic based on headers

Apply the Istio VirtualService rule based on headers:

```
$ kubectl apply -f ./virtualservice-match.yaml
```
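
Such a rule matches on the `X-API-Version` header used below; again a sketch,
not a verbatim copy of `./virtualservice-match.yaml`:

```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app.local
  gateways:
  - my-app
  http:
  - match:
    - headers:
        x-api-version:
          exact: v1.0.0
    route:
    - destination:
        host: my-app-v1
  - match:
    - headers:
        x-api-version:
          exact: v2.0.0
    route:
    - destination:
        host: my-app-v2
```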

You can now test whether the traffic is hitting the correct set of instances:

```
$ service=$(minikube service istio-ingressgateway -n istio-system --url | head -n1)
$ curl $service -H 'Host: my-app.local' -H 'X-API-Version: v1.0.0'
Host: my-app-v1-6d577d97b4-s4h6k, Version: v1.0.0
$ curl $service -H 'Host: my-app.local' -H 'X-API-Version: v2.0.0'
Host: my-app-v2-65f9fdbb88-jtctt, Version: v2.0.0
```

### Cleanup

```
$ kubectl delete gateway/my-app virtualservice/my-app
$ kubectl delete -f ./app-v1.yaml -f ./app-v2.yaml
# If Istio was installed with Helm as above, uninstall the releases the same way
# (release names below assume the defaults from the Istio Helm install guide):
$ helm uninstall istio-ingress -n istio-system
$ helm uninstall istiod -n istio-system
$ helm uninstall istio-base -n istio-system
```
58 changes: 58 additions & 0 deletions k8s-deployment-strategies/ab-testing/app-v1.yaml
@@ -0,0 +1,58 @@
apiVersion: v1
kind: Service
metadata:
  name: my-app-v1
  labels:
    app: my-app
spec:
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: my-app
    version: v1.0.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      version: v1.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: v1.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v1.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
58 changes: 58 additions & 0 deletions k8s-deployment-strategies/ab-testing/app-v2.yaml
@@ -0,0 +1,58 @@
apiVersion: v1
kind: Service
metadata:
  name: my-app-v2
  labels:
    app: my-app
spec:
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: my-app
    version: v2.0.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      version: v2.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: v2.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v2.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
