diff --git a/kube-developper-workshop/.devcontainer/devcontainer.json b/kube-developper-workshop/.devcontainer/devcontainer.json new file mode 100644 index 00000000..553f30f4 --- /dev/null +++ b/kube-developper-workshop/.devcontainer/devcontainer.json @@ -0,0 +1,29 @@ +{ + "image": "ubuntu:latest", + "features": { + "kubectl-helm-minikube": { + "version": "latest", + "helm": "latest", + "minikube": "none" + }, + "common": { + "username": "vscode", + "uid": "1000", + "gid": "1000", + "installZsh": true, + "installOhMyZsh": true + }, + "azure-cli": "latest", + "ghcr.io/eliises/devcontainer-features/bash-profile": { + "command": "source <(kubectl completion bash); alias k=kubectl; complete -o default -F __start_kubectl k; alias kubens='kubectl config set-context --current --namespace ';", + "file": "/etc/bash.bashrc" + } + }, + "remoteUser": "vscode", + "extensions": [ + "yzhang.markdown-all-in-one", + "DavidAnson.vscode-markdownlint", + "streetsidesoftware.code-spell-checker", + "ms-kubernetes-tools.vscode-kubernetes-tools" + ] +} \ No newline at end of file diff --git a/kube-developper-workshop/.vscode/settings.json b/kube-developper-workshop/.vscode/settings.json new file mode 100644 index 00000000..80ab5ba9 --- /dev/null +++ b/kube-developper-workshop/.vscode/settings.json @@ -0,0 +1,9 @@ +{ + "[markdown]": { + "editor.formatOnSave": true, + "editor.defaultFormatter": "esbenp.prettier-vscode" + }, + "yaml.schemas": { + "https://json.schemastore.org/github-workflow.json": "file:///home/ben/dev/kube-workshop/.github/workflows/build.yaml" + } +} diff --git a/kube-developper-workshop/00-pre-reqs/readme.md b/kube-developper-workshop/00-pre-reqs/readme.md new file mode 100644 index 00000000..73ff611d --- /dev/null +++ b/kube-developper-workshop/00-pre-reqs/readme.md @@ -0,0 +1,163 @@ +# βš’οΈ Workshop Pre Requisites + +As this is a completely hands on workshop, you will need several things before you can start: + +- Access to an Azure Subscription where you can create resources. +- bash or a bash compatible shell (e.g. zsh), please do not attempt to use PowerShell or cmd. +- A good editor, and [VS Code](https://code.visualstudio.com/) is strongly recommended + - [Kubernetes extension](https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools) also highly recommended. +- [Azure CLI](https://aka.ms/azure-cli) +- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) +- [helm](https://helm.sh/docs/intro/install/) + +## Install dependencies + +The above listed tools are already set up in `.devcontainer` folder located in the git repository of this workshop: . +If you've never used Dev Containers, check out [developing inside a Container using Visual Studio Code Remote Development](https://code.visualstudio.com/docs/devcontainers/containers). + +### Install dependencies manually + +Alteratively you can can install the dependencies yourself by following the steps below. + +#### 🌩️ Install Azure CLI + +To set-up the Azure CLI on your system, install it in one of the below ways. + +On Ubuntu/Debian Linux, requires sudo: + +```bash +curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash +``` + +On MacOS, use homebrew: + +```bash +brew update && brew install azure-cli +``` + +If the commands above don't work, please refer to: [https://aka.ms/azure-cli](https://aka.ms/azure-cli) + +#### ⛑️ Install Helm & Kubectl + +
+Install Helm & Kubectl - Linux (Ubuntu/Debian) + +Two ways are provided for each tool, one without needing sudo, the other requires sudo, take your pick but don't run both! + +By default the 'no sudo' commands for helm & kubectl install binaries into `~/.local/bin` so if this isn't in your PATH you can copy or move the binary elsewhere, or simply run `export PATH="$PATH:$HOME/.local/bin"` + +```bash +# Install kubectl - no sudo +curl -s https://raw.githubusercontent.com/benc-uk/tools-install/master/kubectl.sh | bash + +# Install kubectl - with sudo +curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" +chmod +x ./kubectl +sudo mv ./kubectl /usr/bin/kubectl + +# Install helm - no sudo +curl -s https://raw.githubusercontent.com/benc-uk/tools-install/master/helm.sh | bash + +# Install helm - with sudo +curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash +``` + +
+ +
+Install Helm & Kubectl - MacOS + +```bash +# Install kubectl - with sudo +curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl" +chmod +x ./kubectl +sudo mv ./kubectl /usr/local/bin/kubectl + +# Install Helm +curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash +``` + +
+ +#### βš™οΈ Set up bash profile + +Set up the user bash profile for K8s to make it easier to run all the commands + +```sh +echo "source <(kubectl completion bash)" >> ~/.bashrc +echo "alias k=kubectl" >> ~/.bashrc +echo "complete -o default -F __start_kubectl k" >> ~/.bashrc +echo "export PATH=$PATH:/home/azureuser/.local/bin" >> ~/.bashrc +``` + +To have `.bashrc` changes take affect in your current terminal, you must reload `.bashrc` with: + +```sh +. ~/.bashrc +``` + +## βœ… Verify installation + +Double check that everything in installed and working correctly with: + +```sh +# Try commands with tab completion +k get pods -A +helm +az +``` + +## πŸ” Login to Azure + +The rest of this workshop assumes you have access to an Azure subscription, and have the Azure CLI +working & signed into the tenant & subscription you will be using. Some Azure CLI commands to help you: + +- `az login` or `az login --tenant {TENANT_ID}` - Login to the Azure CLI, use the `--tenant` switch + if you have multiple accounts. +- `az account set --subscription {SUBSCRIPTION_ID}` - Set the subscription the Azure CLI will use. +- `az account show -o table` - Show the subscription the CLI is configured to use. + +## 😒 Stuck? + +Getting all the tools set up locally is the highly recommended path to take, if you are stuck there +are some other options to explore, but these haven't been tested: + +- Use the [Azure Cloud Shell](https://shell.azure.com/bash) which has all of these tools except VS Code, + a simple web code editor is available. However if you download the + [VS Code server](https://aka.ms/install-vscode-server/setup.sh), then run that from inside Cloud Shell + you can get access to the full web based version of VS Code. +- Go to the [repo for this workshop on GitHub](https://github.com/benc-uk/kube-workshop/codespaces) + and start a new Codespace from it, you should get a terminal you can use and have all the tools available. + Only available if you have access to GitHub Codespaces. + +## πŸ’² Variables File + +Although not essential, it's advised to create a `vars.sh` file holding all the parameters that will +be common across many of the commands that will be run. This way you have a single point of reference +for them and they can be easily reset in the event of a session timing out or terminal closing. + +Sample `vars.sh` file is shown below, feel free to use any values you wish for the resource group, region cluster name etc. + +> Note: The ACR name must be globally unique and cannot contain hyphens, dots, or underscores. + +```bash +RES_GROUP="kube-workshop" +REGION="westeurope" +AKS_NAME="__change_me__" +ACR_NAME="__change_me__" +KUBE_VERSION="1.27.1" +``` + +> Note: New versions of Kubernetes are released all the time, and eventually older versions are removed from Azure. Rather than constantly update this guide the following command can be used to get the latest version: `az aks get-versions --location "westeurope" -o tsv --query "orchestrators[-1].orchestratorVersion"` + +To use the file simply source it through bash with the below command, do this before moving to the next stage. + +```sh +source vars.sh +``` + +It's worth creating a project folder locally (or even a git repo) at this point, in order to keep your work in, you haven't done so already. 
We'll be creating & editing files later + +## Navigation + +[Return to Main Index 🏠](../readme.md) β€– [Next Section ⏩](../01-cluster/readme.md) diff --git a/kube-developper-workshop/00-pre-reqs/vars.sh.sample b/kube-developper-workshop/00-pre-reqs/vars.sh.sample new file mode 100644 index 00000000..f2a9b488 --- /dev/null +++ b/kube-developper-workshop/00-pre-reqs/vars.sh.sample @@ -0,0 +1,5 @@ +RES_GROUP="kube-workshop" +REGION="westeurope" +AKS_NAME="__change_me__" +ACR_NAME="__change_me__" +KUBE_VERSION="1.25.5" \ No newline at end of file diff --git a/kube-developper-workshop/01-cluster/readme.md b/kube-developper-workshop/01-cluster/readme.md new file mode 100644 index 00000000..aee62957 --- /dev/null +++ b/kube-developper-workshop/01-cluster/readme.md @@ -0,0 +1,88 @@ +# 🚦 Deploying Kubernetes + +Deploying AKS and Kubernetes can be extremely complex, with many networking, compute and other aspects to consider. +However for the purposes of this workshop, a default and basic cluster can be deployed very quickly. + +## πŸš€ AKS Cluster Deployment + +The following commands can be used to quickly deploy an AKS cluster: + +```bash +# Create Azure resource group +az group create --name $RES_GROUP --location $REGION + +# Create cluster +az aks create --resource-group $RES_GROUP \ + --name $AKS_NAME \ + --location $REGION \ + --node-count 2 --node-vm-size Standard_B2ms \ + --kubernetes-version $KUBE_VERSION \ + --verbose \ + --no-ssh-key +``` + +In case you get an error when creating cluster, `Version x.xx.x is not supported in this region.`, run the following to get the supported kubernetes version + +```sh +az aks get-versions --location $REGION -o table +``` + +And re-run the create cluster command with supported version number. + +This should take around 5 minutes to complete, and creates a new AKS cluster with the following +characteristics: + +- Two small B-Series _Nodes_ in a single node pool. _Nodes_ are what your workloads will be running on. +- Basic 'Kubenet' networking, which creates an Azure network and subnet etc for us. [See docs if you wish to learn more about this topic](https://docs.microsoft.com/azure/aks/operator-best-practices-network) +- Local cluster admin account, with RBAC enabled, this means we don't need to worry about setting up users or assigning roles etc. +- AKS provide a wide range of 'turn key' addons, e.g. monitoring, AAD integration, auto-scaling, GitOps etc, however we'll not require for any of these to be enabled. +- The use of SSH keys is skipped with `--no-ssh-key` as they won't be needed. + +The `az aks create` command has [MANY options](https://docs.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-create) +however you shouldn't need to change or add any options, with some small exceptions: + +- You may wish to change the size or number of nodes, however this clearly has cost implications. + +## πŸ”Œ Connect to the Cluster + +To enable `kubectl` (and other tools) to access the cluster, run the following: + +```bash +az aks get-credentials --name $AKS_NAME --resource-group $RES_GROUP +``` + +This will create Kubernetes config file in your home directory `~/.kube/config` which is the default location, used by `kubectl`. + +Now you can run some simple `kubectl` commands to validate the health and status of your cluster: + +```bash +# Get all nodes in the cluster +kubectl get nodes + +# Get all pods in the cluster +kubectl get pods --all-namespaces +``` + +Don't be alarmed by all the pods you see running in the 'kube-system' namespace. 
These are deployed by default by AKS and perform management & system tasks we don't need to worry about. +You can still consider your cluster "empty" at this stage. + +## ⏯️ Appendix - Stopping & Starting the Cluster + +If you are concerned about the costs for running the cluster you can stop and start it at any time. +This essentially stops the node VMs in Azure, meaning the costs for the cluster are greatly reduced. + +```bash +# Stop the cluster +az aks stop --resource-group $RES_GROUP --name $AKS_NAME + +# Start the cluster +az aks start --resource-group $RES_GROUP --name $AKS_NAME +``` + +> πŸ“ NOTE: Start and stop operations do take several minutes to complete, so typically you would +> perform them only at the start or end of the day. + +## Navigation + +[Return to Main Index 🏠](../readme.md) β€– +[Previous Section βͺ](../00-pre-reqs/readme.md) β€– [Next Section ⏩](../02-container-registry/readme.md) diff --git a/kube-developper-workshop/02-container-registry/readme.md b/kube-developper-workshop/02-container-registry/readme.md new file mode 100644 index 00000000..1098186f --- /dev/null +++ b/kube-developper-workshop/02-container-registry/readme.md @@ -0,0 +1,91 @@ +# πŸ“¦ Container Registry & Images + +We will deploy & use a private registry to hold the application container images. This is not strictly +necessary as we could pull the images directly from the public, however using a private registry is +a more realistic approach. + +[Azure Container Registry](https://docs.microsoft.com/azure/container-registry/) is what we will be +using. + +## πŸš€ ACR Deployment + +Deploying a new ACR is very simple: + +```bash +az acr create --name $ACR_NAME --resource-group $RES_GROUP \ +--sku Standard \ +--admin-enabled true +``` + +> πŸ“ NOTE: When you pick a name for the resource with $ACR_NAME, this has to be **globally unique**, and not contain any underscores, dots or hyphens. +> Name must also be in lowercase. + +## πŸ“₯ Importing Images + +For the sake of speed and maintaining the focus on Kubernetes we will import pre-built images from another public registry (GitHub Container Registry), rather than build them from source. + +We will cover what the application does and what these containers are for in the next section, for +now we can just import them. + +To do so we use the `az acr import` command: + +```bash +# Import application frontend container image +az acr import --name $ACR_NAME --resource-group $RES_GROUP \ +--source ghcr.io/benc-uk/smilr/frontend:stable \ +--image smilr/frontend:stable + +# Import application data API container image +az acr import --name $ACR_NAME --resource-group $RES_GROUP \ +--source ghcr.io/benc-uk/smilr/data-api:stable \ +--image smilr/data-api:stable +``` + +If you wish to check and see imported images, you can go over to the ACR resource in the Azure portal, and into the 'Repositories' section. + +> πŸ“ NOTE: we are not using the tag `latest` which is a common mistake when working with Kubernetes +> and containers in general. + +## πŸ”Œ Connect AKS to ACR - as Azure Subscription Owner + +Kuberenetes requires a way to authenticate and access images stored in private registries. +There are a number of ways to enable Kubernetes to pull images from a private registry, however AKS provides a simple way to configure this through the Azure CLI. +The downside is this requires you to have 'Owner' permission within the subscription, in order to assign the role. 
+ +```bash +az aks update --name $AKS_NAME --resource-group $RES_GROUP --attach-acr $ACR_NAME +``` + +If you are curious what this command does, it essentially is just assigning the "ACR Pull" role in Azure IAM to the managed identity used by AKS, on the ACR resource. + +If you see the following error `Could not create a role assignment for ACR. Are you an Owner on this subscription?`, you will need to proceed to the alternative approach below. + +## πŸ”Œ Connect AKS to ACR - Alternative + +If you do not have Azure Owner permissions, you will need to fall back to an alternative approach. +This involves two things: + +- Adding an _Secret_ to the cluster containing the credentials to pull images from the ACR. +- Including a reference to this _Secret_ in every _Deployment_ you create or update the _ServiceAccount_ + used by the _Pods_ to reference this _Secret_. + +Run these commands to create the _Secret_ with the ACR credentials, and patch the default _ServiceAccount_: + +```bash +kubectl create secret docker-registry acr-creds \ + --docker-server=$ACR_NAME.azurecr.io \ + --docker-username=$ACR_NAME \ + --docker-password=$(az acr credential show --name $ACR_NAME --query "passwords[0].value" -o tsv) + +kubectl patch serviceaccount default --patch '"imagePullSecrets": [{"name": "acr-creds" }]' +``` + +> πŸ’₯ IMPORTANT! Do NOT follow this approach of patching the default _ServiceAccount_ in production or a cluster running real workloads, treat this as a simplifying workaround. + +These two commands introduce a lot of new Kubernetes concepts in one go! Don't worry about them for +now, some of this such as _Secrets_ we'll go into later. If the command is successful, move on. + +## Navigation + +[Return to Main Index 🏠](../readme.md) β€– +[Previous Section βͺ](../01-cluster/readme.md) β€– [Next Section ⏩](../03-the-application/readme.md) diff --git a/kube-developper-workshop/03-the-application/architecture.png b/kube-developper-workshop/03-the-application/architecture.png new file mode 100644 index 00000000..171a8fe0 Binary files /dev/null and b/kube-developper-workshop/03-the-application/architecture.png differ diff --git a/kube-developper-workshop/03-the-application/readme.md b/kube-developper-workshop/03-the-application/readme.md new file mode 100644 index 00000000..a6bc0e85 --- /dev/null +++ b/kube-developper-workshop/03-the-application/readme.md @@ -0,0 +1,35 @@ +# ❇️ Overview Of The Application + +This section simply serves as an introduction to the application, there are no tasks to be carried out. + +The application is called 'Smilr' and provides users with a way to rate and provide feedback on events +and other sessions (e.g. hacks, meetups) they have attended. In addition administrators have way to +configure events and view the feedback that has been provided. + +## [πŸ“ƒ Smilr - GitHub Repo & Project](https://github.com/benc-uk/smilr) + +Screenshot: + + + +The application consists of some lightweight microservices and single page application, it is written +in Node.js + Express and [Vue.js](https://vuejs.org/). The design follows the classic pattern for +running single page apps: + +- A frontend service serving static content + configuration API. +- A "backend" data API service for the frontend to consume. +- A MongoDB datastore/database for persisting state. + +![Architecture](./architecture.png) + +For this workshop the app will be deployed with the following requirements: + +- Both the API and frontend need to be **exposed to the public internet**. Both use HTTP as a protocol. 
+- The **MongoDB datastore will not be exposed but will run inside the cluster**. Typically you would **NOT** run stateful services inside of the cluster like this, but this is done in the interests of speed and to demonstrate some principals. +- The sentiment service is optional and **won't be deployed**. +- Authentication and API security will disabled and the app will run in "demo mode." + +## Navigation + +[Return to Main Index 🏠](../readme.md) β€– +[Previous Section βͺ](../02-container-registry/readme.md) β€– [Next Section ⏩](../04-deployment/readme.md) diff --git a/kube-developper-workshop/03-the-application/screenshot.png b/kube-developper-workshop/03-the-application/screenshot.png new file mode 100644 index 00000000..9e62a181 Binary files /dev/null and b/kube-developper-workshop/03-the-application/screenshot.png differ diff --git a/kube-developper-workshop/04-deployment/data-api-deployment.yaml b/kube-developper-workshop/04-deployment/data-api-deployment.yaml new file mode 100644 index 00000000..cb7231c0 --- /dev/null +++ b/kube-developper-workshop/04-deployment/data-api-deployment.yaml @@ -0,0 +1,28 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: data-api + +spec: + replicas: 2 + selector: + matchLabels: + app: data-api + template: + metadata: + labels: + app: data-api + spec: + containers: + - name: data-api-container + + image: {ACR_NAME}.azurecr.io/smilr/data-api:stable + imagePullPolicy: Always + + ports: + - containerPort: 4000 + + env: + - name: MONGO_CONNSTR + value: mongodb://admin:supersecret@{MONGODB_POD_IP} diff --git a/kube-developper-workshop/04-deployment/diagram.png b/kube-developper-workshop/04-deployment/diagram.png new file mode 100644 index 00000000..82b884b2 Binary files /dev/null and b/kube-developper-workshop/04-deployment/diagram.png differ diff --git a/kube-developper-workshop/04-deployment/mongo-deployment.yaml b/kube-developper-workshop/04-deployment/mongo-deployment.yaml new file mode 100644 index 00000000..988f492c --- /dev/null +++ b/kube-developper-workshop/04-deployment/mongo-deployment.yaml @@ -0,0 +1,30 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: mongodb + +spec: + replicas: 1 + selector: + matchLabels: + app: mongodb + template: + metadata: + labels: + app: mongodb + spec: + containers: + - name: mongodb-container + + image: mongo:5.0 + imagePullPolicy: Always + + ports: + - containerPort: 27017 + + env: + - name: MONGO_INITDB_ROOT_USERNAME + value: admin + - name: MONGO_INITDB_ROOT_PASSWORD + value: supersecret diff --git a/kube-developper-workshop/04-deployment/readme.md b/kube-developper-workshop/04-deployment/readme.md new file mode 100644 index 00000000..b8de682b --- /dev/null +++ b/kube-developper-workshop/04-deployment/readme.md @@ -0,0 +1,187 @@ +# πŸš€ Deploying The Backend + +We'll deploy the app piece by piece, and at first we'll deploy & configure things in a sub-optimal way. +This is in order to explore the Kubernetes concepts and show their purpose. Then we'll iterate and improve towards the final architecture. + +We have three "microservices" we need to deploy, and due to dependencies between them we'll start with the MongoDB database then the data API and then move onto the frontend. + +From here we will be creating and editing files, so it's worth creating a project folder locally (or even a git repo) in order to work from if you haven't done so already. 
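If you haven't got a working folder yet, a minimal sketch is shown below; the folder name and the use of git are just suggestions, any location where you can keep YAML files will do.

```bash
# Example only, pick any folder name/location you like
mkdir -p ~/kube-workshop-files && cd ~/kube-workshop-files

# Optionally track your manifests with git as you go
git init
```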
+ +## πŸƒ Deploying MongoDB + +We'll apply configurations to Kubernetes using `kubectl` and YAML manifest files, and we'll be doing this a lot throughout the workshop. +These files will describe the objects we want to create, modify and delete in the cluster. + +If you want to take this workshop slowly and treat it as more of a hack, you can research and build +the required YAML yourself, you can use [the Kubernetes docs](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) and the following hints: + +- _Deployment_ should be used with a single replica. +- The image to be run is `mongo:5.0`. Note: This is not really part of our app, so we pull it from the [public MongoDB image hosted on Dockerhub](https://hub.docker.com/_/mongo), not the ACR we set up. +- The port **27017** should be exposed from the container. +- Do not worry about persistence or using a _Service_ at this point. +- Pass `MONGO_INITDB_ROOT_USERNAME` and `MONGO_INITDB_ROOT_PASSWORD` environmental vars to the container setting the username to "admin" and password to "supersecret". + +Alternatively you can use the YAML below to paste into `mongo-deployment.yaml`, don't worry this isn't cheating, in the real world everyone is too busy to write Kubernetes manifests from scratch πŸ˜‰ + +
+Click here for the MongoDB deployment YAML + +```yaml +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: mongodb + +spec: + replicas: 1 + selector: + matchLabels: + app: mongodb + template: + metadata: + labels: + app: mongodb + spec: + containers: + - name: mongodb-container + + image: mongo:5.0 + imagePullPolicy: Always + + ports: + - containerPort: 27017 + + env: + - name: MONGO_INITDB_ROOT_USERNAME + value: admin + - name: MONGO_INITDB_ROOT_PASSWORD + value: supersecret +``` + +
+
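If you want to check your YAML is valid before changing anything in the cluster, `kubectl` supports a dry-run mode; this step is entirely optional:

```bash
# Validate the manifest locally without sending it to the cluster
kubectl apply -f mongo-deployment.yaml --dry-run=client

# Or have the API server fully validate it, still without persisting anything
kubectl apply -f mongo-deployment.yaml --dry-run=server
```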
+ +Then apply the manifest with: + +```bash +kubectl apply -f mongo-deployment.yaml +``` + +If successful you will see `deployment.apps/mongodb created`, this will have created one _Deployment_ and one _Pod_. You can check the status of your cluster with a few commands: + +- `kubectl get deployment` - List the deployments, you should see _1/1_ in ready status. +- `kubectl get pod` - List the pods, you should see one prefixed `mongodb-` with a status of _Running_. +- `kubectl describe deploy mongodb` - Examine and get details of the deployment. +- `kubectl describe pod {podname}` - Examine the pod, you will need to get the name from the `get pod` + command. +- `kubectl get all` - List everything; all pods, deployments etc. + +Get used to these commands you will use them a LOT when working with Kubernetes. + +For the next part we'll need the IP address of the pod that was just deployed, you can get this by running `kubectl get pod -o wide` or the command below: + +```bash +kubectl describe pod --selector app=mongodb | grep ^IP: +``` + +## πŸ—ƒοΈ Deploying The Data API + +Next we'll deploy the first custom part of our app, the data API, and we'll deploy it from an image hosted in our private registry. + +- The image needs to be `{ACR_NAME}.azurecr.io/smilr/data-api:stable` where `{ACR_NAME}` should be + replaced in the YAML with your real value, i.e. the name of your ACR resource. +- Set the number of replicas to **2**. +- The port exposed from the container should be **4000**. +- An environmental variable called `MONGO_CONNSTR` should be passed to the container, with the connection string to connect to the MongoDB, which will be `mongodb://admin:supersecret@{MONGODB_POD_IP}` where `{MONGODB_POD_IP}` should be replaced in the YAML with the pod IP address you just queried. +- Label the pods with `app: data-api`. + +Again you can try building the _Deployment_ yourself or use the provided YAML to create a `data-api-deployment.yaml` file + +
+Click here for the DataAPI deployment YAML + +```yaml +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: data-api + +spec: + replicas: 2 + selector: + matchLabels: + app: data-api + template: + metadata: + labels: + app: data-api + spec: + containers: + - name: data-api-container + + image: {ACR_NAME}.azurecr.io/smilr/data-api:stable + imagePullPolicy: Always + + ports: + - containerPort: 4000 + + env: + - name: MONGO_CONNSTR + value: mongodb://admin:supersecret@{MONGODB_POD_IP} +``` + +
+
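This manifest contains `{ACR_NAME}` and `{MONGODB_POD_IP}` placeholders which must be swapped for your real values. You can simply edit the file by hand, or use a quick substitution along these lines (a sketch assuming GNU `sed`, on MacOS use `sed -i ''`, and that your `vars.sh` has been sourced):

```bash
# Look up the MongoDB pod IP queried earlier
MONGODB_POD_IP=$(kubectl get pod --selector app=mongodb -o jsonpath='{.items[0].status.podIP}')

# Replace both placeholders in the file, in place
sed -i "s/{ACR_NAME}/${ACR_NAME}/g; s/{MONGODB_POD_IP}/${MONGODB_POD_IP}/g" data-api-deployment.yaml
```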
+ +**πŸ’₯ Notice:** We have the password in plain text within the connection string! This clearly is a very bad practice, we will fix this at a later stage when we introduce Kubernetes _Secrets_. + +Make the changes described above, **remember to make the edits, you can not use this YAML as is**, +and then run: + +```bash +kubectl apply -f data-api-deployment.yaml +``` + +Check the status as before with `kubectl` and it's worth checking the logs with `kubectl logs {podname}` to see the output from the app as it starts up. + +This time we've set the number of replicas to two, if you run `kubectl get pods -o wide` you will see which _Nodes_ the _Pods_ have been scheduled (assigned) to. +You should see each _Pod_ has been scheduled to different _Nodes_, but this is not guaranteed. _Pod_ scheduling and placement is a fairly complex topic, for now we can move on. + +## ⏩ Accessing the Data API (The quick & dirty way) + +Now it would be nice to access and call this API, to check it's working. But the IP address of the +_Pods_ are private and only accessible from within the cluster. In the next section we'll fix that, +but for now there's a short-cut we can use. + +Kubernetes provides a way to "tunnel" network traffic into the cluster through the control plane, +this is done with the `kubectl port-forward` command. + +Pick the name of either one of the two `data-api` _Pods_, and run: + +```bash +kubectl port-forward {pod_name} 4000:4000 +``` + +And then accessing the following URL [http://localhost:4000/api/info](http://localhost:4000/api/info) either in your browser or with `curl` we should see a JSON response with some status and debug +information from the API. + +```sh +curl http://localhost:4000/api/info | json_pp +``` + +Clearly this isn't a good way to expose your apps long term, but can be extremely useful when debugging and triaging issues. 
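As a side note, `kubectl port-forward` can also target a _Deployment_ or _Service_ rather than an individual _Pod_, which saves looking up pod names:

```bash
# Forwards to one of the pods behind the deployment, chosen automatically
kubectl port-forward deployment/data-api 4000:4000
```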
+ +When done, cancel the port-forwarding with `ctrl-c` + +## πŸ–ΌοΈ Cluster & Architecture Diagram + +The resources deployed into the cluster & in Azure at this stage can be visualized as follows: + +![architecture diagram](./diagram.png) + +## Navigation + +[Return to Main Index 🏠](../readme.md) β€– +[Previous Section βͺ](../03-the-application/readme.md) β€– [Next Section ⏩](../05-network-basics/readme.md) diff --git a/kube-developper-workshop/05-network-basics/data-api-deployment.yaml b/kube-developper-workshop/05-network-basics/data-api-deployment.yaml new file mode 100644 index 00000000..8854443b --- /dev/null +++ b/kube-developper-workshop/05-network-basics/data-api-deployment.yaml @@ -0,0 +1,28 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: data-api + +spec: + replicas: 2 + selector: + matchLabels: + app: data-api + template: + metadata: + labels: + app: data-api + spec: + containers: + - name: data-api-container + + image: {ACR_NAME}.azurecr.io/smilr/data-api:stable + imagePullPolicy: Always + + ports: + - containerPort: 4000 + + env: + - name: MONGO_CONNSTR + value: mongodb://admin:supersecret@database diff --git a/kube-developper-workshop/05-network-basics/data-api-service.yaml b/kube-developper-workshop/05-network-basics/data-api-service.yaml new file mode 100644 index 00000000..55cbb974 --- /dev/null +++ b/kube-developper-workshop/05-network-basics/data-api-service.yaml @@ -0,0 +1,14 @@ +kind: Service +apiVersion: v1 + +metadata: + name: data-api + +spec: + type: LoadBalancer + selector: + app: data-api + ports: + - protocol: TCP + port: 80 + targetPort: 4000 diff --git a/kube-developper-workshop/05-network-basics/diagram.png b/kube-developper-workshop/05-network-basics/diagram.png new file mode 100644 index 00000000..605e889d Binary files /dev/null and b/kube-developper-workshop/05-network-basics/diagram.png differ diff --git a/kube-developper-workshop/05-network-basics/mongo-service.yaml b/kube-developper-workshop/05-network-basics/mongo-service.yaml new file mode 100644 index 00000000..139811d1 --- /dev/null +++ b/kube-developper-workshop/05-network-basics/mongo-service.yaml @@ -0,0 +1,14 @@ +kind: Service +apiVersion: v1 + +metadata: + name: database + +spec: + type: ClusterIP + selector: + app: mongodb + ports: + - protocol: TCP + port: 27017 + targetPort: 27017 diff --git a/kube-developper-workshop/05-network-basics/readme.md b/kube-developper-workshop/05-network-basics/readme.md new file mode 100644 index 00000000..8e1a2730 --- /dev/null +++ b/kube-developper-workshop/05-network-basics/readme.md @@ -0,0 +1,140 @@ +# 🌐 Basic Networking + +Pods are both ephemeral and "mortal", they should be considered effectively transient. +Kubernetes can terminate and reschedule pods for a whole range of reasons, including rolling updates, hitting resource limits, scaling up & down and other cluster operations. +With Pods being transient, you can not build a reliable architecture through addressing Pods directly (e.g. by name or IP address). + +Kubernetes solves this with _Services_, which act as a network abstraction over a group of pods, and have their own lifecycle. +We can use them to greatly improve what we've deployed. + +## 🧩 Deploy MongoDB Service + +Now to put a _Service_ in front of the MongoDB pods, if you want to create the service YAML yourself, you can [refer to the Kubernetes docs](https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service). 
+ +- The type of _Service_ should be `ClusterIP` which means it's internal to the cluster only +- The service port should be **27017**. +- The target port should be **27017**. +- Selector decides what pods are behind the service, in this case use the label `app` and the value + `mongodb`. + +> πŸ“ NOTE: Labels are optional metadata that can be added to any object in Kubernetes, they are simply key-value pairs. Labels can be used to organize and to select subsets of objects. +> The label "app" is commonly used, but has **no special meaning**, and isn't used by Kubernetes in any way + +Save your YAML into a file `mongo-service.yaml` or use the below YAML manifest for the service: + +
+Click here for the MongoDB service YAML + +```yaml +kind: Service +apiVersion: v1 + +metadata: + # We purposefully pick a different name for the service from the deployment + name: database + +spec: + type: ClusterIP + selector: + app: mongodb + ports: + - protocol: TCP + port: 27017 + targetPort: 27017 +``` + +
+ +Apply it to the cluster as before: + +```bash +kubectl apply -f mongo-service.yaml +``` + +You can use `kubectl` to examine the status of the _Service_ just like you can with _Pods_ and _Deployments_: + +```bash +# Get all services +kubectl get svc + +# Get details of a single service +kubectl describe svc {service-name} +``` + +> πŸ“ NOTE: The service called 'kubernetes' exists in every namespace and is placed there automatically, you can ignore it. + +πŸ›‘ **IMPORTANT NOTE**: As a rule it's a bad idea and generally considered an "anti-pattern" to run stateful services in Kubernetes. Managing them is complex and time consuming. +It's **strongly recommended** use PaaS data offerings which reside outside your cluster and can be managed independently and easily. +We will continue with MongoDB running in the cluster purely as a learning exercise. + +## πŸ“‘ Connect the API to MongoDB Service + +Now we have a Service in our cluster for MongoDB we can access the database using DNS rather than pod IP and if the pod(s) die or restart or move; this name remains constant. +DNS with Kubernetes is a complex topic we won't get into here, the main takeaway for now is: + +- Every _Service_ in the cluster can be resolved over DNS +- Within a _Namespace_, the _Service_ name will resolve as a simple hostname, without the need for a + DNS suffix [but other scenarios also are supported](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/). + +Edit the the `data-api-deployment.yaml` file you created previously and change the value of the +`MONGO_CONNSTR` environmental variable. Replace the IP address with name of the service, e.g. the +connection string should look like `mongodb://admin:supersecret@database`. + +You can update the active deployment with these changes by re-running `kubectl apply -f data-api-deployment.yaml`. +Kuberenetes will perform a rolling update, if you are quick and run `kubectl get pods` you might see it taking place, i.e. a new pod starting & the old one terminating. +Again you can check the status and the logs using `kubectl`. + +## 🌍 Expose the Data API externally + +We can create a different type of _Service_ in front of the data API, in order to expose it outside of the cluster and also to the internet. +To do this use a Service with the type `LoadBalancer`, this will be picked up by Azure and a public IP assigned and traffic routed through an Azure LoadBalancer in front of the cluster. +How this happens is well outside of the scope of this workshop. + +We can also change the port at the _Service_ level, so the port exposed by the _Service_ doesn't need to match the one that the container is listening on. In this case we'll re-map the port to **80**. + +Save your YAML into a file `data-api-service.yaml` from above or below. + +
+Click here for the data API service YAML + +```yaml +kind: Service +apiVersion: v1 + +metadata: + name: data-api + +spec: + type: LoadBalancer + selector: + app: data-api + ports: + - protocol: TCP + port: 80 + targetPort: 4000 +``` + +
+ +Apply it to the cluster as before: + +```bash +kubectl apply -f data-api-service.yaml +``` + +Using `kubectl get svc` check the status and wait for the external IP to be assigned, which might take a minute or two. +Then go to the address in your browser `http://{EXTERNAL_IP}/api/info/` and you should get the same JSON response as before. + +Clearly this is better than what we had before, but in production you would never expose traffic directly into your pods like this. +Later we can improve this yet further, but for now it will suffice. + +## πŸ–ΌοΈ Cluster & Architecture Diagram + +The resources deployed into the cluster & in Azure at this stage can be visualized as follows: + +![architecture diagram](./diagram.png) + +## Navigation + +[Return to Main Index 🏠](../readme.md) β€– +[Previous Section βͺ](../04-deployment/readme.md) β€– [Next Section ⏩](../06-frontend/readme.md) diff --git a/kube-developper-workshop/06-frontend/diagram.png b/kube-developper-workshop/06-frontend/diagram.png new file mode 100644 index 00000000..64c4d4ce Binary files /dev/null and b/kube-developper-workshop/06-frontend/diagram.png differ diff --git a/kube-developper-workshop/06-frontend/frontend-deployment.yaml b/kube-developper-workshop/06-frontend/frontend-deployment.yaml new file mode 100644 index 00000000..9f755e78 --- /dev/null +++ b/kube-developper-workshop/06-frontend/frontend-deployment.yaml @@ -0,0 +1,28 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: frontend + +spec: + replicas: 1 + selector: + matchLabels: + app: frontend + template: + metadata: + labels: + app: frontend + spec: + containers: + - name: frontend-container + + image: {ACR_NAME}.azurecr.io/smilr/frontend:stable + imagePullPolicy: Always + + ports: + - containerPort: 3000 + + env: + - name: API_ENDPOINT + value: http://{API_EXTERNAL_IP}/api diff --git a/kube-developper-workshop/06-frontend/frontend-service.yaml b/kube-developper-workshop/06-frontend/frontend-service.yaml new file mode 100644 index 00000000..7926a607 --- /dev/null +++ b/kube-developper-workshop/06-frontend/frontend-service.yaml @@ -0,0 +1,14 @@ +kind: Service +apiVersion: v1 + +metadata: + name: frontend + +spec: + type: LoadBalancer + selector: + app: frontend + ports: + - protocol: TCP + port: 80 + targetPort: 3000 diff --git a/kube-developper-workshop/06-frontend/readme.md b/kube-developper-workshop/06-frontend/readme.md new file mode 100644 index 00000000..ec8f6dd2 --- /dev/null +++ b/kube-developper-workshop/06-frontend/readme.md @@ -0,0 +1,113 @@ +# πŸ’» Adding The Frontend + +We've ignored the frontend until this point, with the API and backend in place we are finally ready to deploy it. +We need to use a _Deployment_ and _Service_ just as before. We can pick up the pace a little and setup everything we need in one go. + +For the Deployment: + +- The image needs to be `{ACR_NAME}.azurecr.io/smilr/frontend:stable`. +- The port exposed from the container should be **3000**. +- An environmental variable called `API_ENDPOINT` should be passed to the container, this needs to be a URL and should point to the external IP of the API from the previous part, as follows `http://{API_EXTERNAL_IP}/api`. +- Label the pods with `app: frontend`. + +For the Service: + +- The type of _Service_ should be `LoadBalancer` same as the data API. +- The service port should be **80**. +- The target port should be **3000**. +- Use the label `app` and the value `frontend` for the selector. 
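The selector is what ties a _Service_ to its _Pods_. Once things are deployed you can see that link for yourself with commands along these lines (using the labels described above):

```bash
# List the pods the selector will match
kubectl get pods -l app=frontend

# Show the endpoints (pod IPs) the service is currently routing traffic to
kubectl get endpoints frontend
```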
+ +You might like to try creating the service before deploying the pods to see what happens. +The YAML you can use for both, is provided below: + +`frontend-deployment.yaml`: + +
+Click here for the frontend deployment YAML + +```yaml +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: frontend + +spec: + replicas: 1 + selector: + matchLabels: + app: frontend + template: + metadata: + labels: + app: frontend + spec: + containers: + - name: frontend-container + + image: {ACR_NAME}.azurecr.io/smilr/frontend:stable + imagePullPolicy: Always + + ports: + - containerPort: 3000 + + env: + - name: API_ENDPOINT + value: http://{API_EXTERNAL_IP}/api +``` + +
+
+ +`frontend-service.yaml`: + + +
+Click here for the frontend service YAML + +```yaml +kind: Service +apiVersion: v1 + +metadata: + name: frontend + +spec: + type: LoadBalancer + selector: + app: frontend + ports: + - protocol: TCP + port: 80 + targetPort: 3000 +``` + +
+
+ +As before, the there are changes that are required to the supplied YAML, replacing anything inside `{ }` with a corresponding real value. + +## πŸ’‘ Accessing and Using the App + +Once the two YAMLs have been applied: + +- Check the external IP for the frontend is assigned with `kubectl get svc frontend`. +- Once it is there, go to that IP in your browser, e.g. `http://{frontend-ip}/` - the application should load and the Smilr frontend is shown. + +If you want to spend a few minutes using the app, you can go to the "Admin" page, add a new event, the details don't matter but make the date range to include the current date. +And try out the feedback view and reports. Or simply be happy the app is functional and move on. + +## πŸ–ΌοΈ Cluster & Architecture Diagram + +The resources deployed into the cluster & in Azure at this stage can be visualized as follows: + +![architecture diagram](./diagram.png) + +Notice we have **two public IPs**, the `LoadBalancer` service type is not an instruction to Azure to deploy an entire Azure Load Balancer. +Instead it's used to create a new public IP and assign it to the single Azure Load Balancer (created by AKS) that sits in front of the cluster. +We'll refine this later when we look at setting up an ingress. + +## Navigation + +[Return to Main Index 🏠](../readme.md) β€– +[Previous Section βͺ](../05-network-basics/readme.md) β€– [Next Section ⏩](../07-improvements/readme.md) diff --git a/kube-developper-workshop/07-improvements/data-api-deployment.yaml b/kube-developper-workshop/07-improvements/data-api-deployment.yaml new file mode 100644 index 00000000..64a5b3fe --- /dev/null +++ b/kube-developper-workshop/07-improvements/data-api-deployment.yaml @@ -0,0 +1,46 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: data-api + +spec: + replicas: 2 + selector: + matchLabels: + app: data-api + template: + metadata: + labels: + app: data-api + spec: + containers: + - name: data-api-container + + image: {ACR_NAME}.azurecr.io/smilr/data-api:stable + imagePullPolicy: Always + + ports: + - containerPort: 4000 + + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + cpu: 100m + memory: 100Mi + + readinessProbe: + httpGet: + port: 4000 + path: /api/health + initialDelaySeconds: 0 + periodSeconds: 5 + + env: + - name: MONGO_CONNSTR + valueFrom: + secretKeyRef: + name: mongo-creds + key: connection-string diff --git a/kube-developper-workshop/07-improvements/frontend-deployment.yaml b/kube-developper-workshop/07-improvements/frontend-deployment.yaml new file mode 100644 index 00000000..e57a66b1 --- /dev/null +++ b/kube-developper-workshop/07-improvements/frontend-deployment.yaml @@ -0,0 +1,43 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: frontend + +spec: + replicas: 1 + selector: + matchLabels: + app: frontend + template: + metadata: + labels: + app: frontend + spec: + containers: + - name: frontend-container + + image: {ACR_NAME}.azurecr.io/smilr/frontend:stable + imagePullPolicy: Always + + ports: + - containerPort: 3000 + + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + cpu: 100m + memory: 100Mi + + readinessProbe: + httpGet: + path: / + port: 3000 + initialDelaySeconds: 0 + periodSeconds: 5 + + env: + - name: API_ENDPOINT + value: http://{API_EXTERNAL_IP}/api diff --git a/kube-developper-workshop/07-improvements/mongo-deployment.yaml b/kube-developper-workshop/07-improvements/mongo-deployment.yaml new file mode 100644 index 00000000..8e9cf387 --- /dev/null +++ 
b/kube-developper-workshop/07-improvements/mongo-deployment.yaml @@ -0,0 +1,48 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: mongodb + +spec: + replicas: 1 + selector: + matchLabels: + app: mongodb + template: + metadata: + labels: + app: mongodb + spec: + containers: + - name: mongodb-container + + image: mongo:5.0 + imagePullPolicy: Always + + ports: + - containerPort: 27017 + + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + cpu: 500m + memory: 300Mi + + readinessProbe: + exec: + command: + - mongo + - --eval + - db.adminCommand('ping') + + env: + - name: MONGO_INITDB_ROOT_USERNAME + value: admin + - name: MONGO_INITDB_ROOT_PASSWORD + valueFrom: + secretKeyRef: + name: mongo-creds + key: admin-password diff --git a/kube-developper-workshop/07-improvements/readme.md b/kube-developper-workshop/07-improvements/readme.md new file mode 100644 index 00000000..4c8a7658 --- /dev/null +++ b/kube-developper-workshop/07-improvements/readme.md @@ -0,0 +1,137 @@ +# ✨ Improving The Deployment + +We've cut more than a few corners so far in order to simplify things and introduce concepts one at a time, now is a good time to make some simple improvements. +We'll also pick up the pace a little with slightly less hand holding. + +## 🌑️ Resource Requests & Limits + +We have not given Kubernetes any information on the resources (CPU & memory) our applications require, but we can do this two ways: + +- **Resource requests**: Used by the Kubernetes scheduler to help assign _Pods_ to a node with sufficient resources. + This is only used when starting & scheduling pods, and not enforced after they start. +- **Resource limits**: _Pods_ will be prevented from using more resources than their assigned limits. + These limits are enforced and can result in a _Pod_ being terminated. It's highly recommended to set limits to prevent one workload from monopolizing cluster resources and starving other workloads. + +It's worth reading the [Kubernetes documentation on this topic](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/), +especially on the units & specifiers used for memory and CPU. + +You can specify resources of these within the pod template inside the Deployment YAML. The `resources` section needs to go at the same level as `image`, `ports`, etc. in the spec. + +```yaml +# Resources to set on frontend & data API deployment +resources: + requests: + cpu: 50m + memory: 50Mi + limits: + cpu: 100m + memory: 100Mi +``` + +```yaml +# Resources to set on MongoDB deployment +resources: + requests: + cpu: 100m + memory: 200Mi + limits: + cpu: 500m + memory: 300Mi +``` + +> πŸ“ NOTE: If you were using VS Code to edit your YAML and had the Kubernetes extension installed you might have noticed yellow warnings in the editor. +> The lack of resource limits was the cause of this. + +Add these sections to your deployment YAML files, and reapply to the cluster with `kubectl` as before and check the status and that the pods start up. + +## πŸ’“ Readiness & Liveness Probes + +Probes are Kubernetes' way of checking the health of your workloads. There are two main types of probe: + +- **Liveness probe**: Checks if the _Pod_ is alive, _Pods_ that fail this probe will be **_terminated and restarted_** +- **Readiness probe**: Checks if the _Pod_ is ready to **_accept traffic_**, _Services_ only sends traffic to _Pods_ which are in a ready state. 
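For illustration only, a liveness probe is declared with exactly the same structure as a readiness probe, for example using a simple TCP check against the container port. Treat this as a sketch, it is not used anywhere in this workshop:

```yaml
# Illustration only, not part of the workshop deployments
livenessProbe:
  tcpSocket:
    port: 4000
  initialDelaySeconds: 15
  periodSeconds: 20
```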
+ +You can read more about probes at the [kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). +Also [this blog post](https://srcco.de/posts/kubernetes-liveness-probes-are-dangerous.html) has some excellent advice around probes, and covers some of the pitfalls of using them, particularly liveness probes. + +For this workshop we'll only set up a readiness probe, which is the most common type: + +```yaml +# Probe to add to the data API deployment in the same level as above +# Note: this container exposes a specific health endpoint +readinessProbe: + httpGet: + port: 4000 + path: /api/health + initialDelaySeconds: 0 + periodSeconds: 5 +``` + +```yaml +# Probe to add to the frontend deployment +readinessProbe: + httpGet: + path: / + port: 3000 + initialDelaySeconds: 0 + periodSeconds: 5 +``` + +Add these sections to your deployment YAML files, at the same level in the YAML as the resources block. +Reapply to the cluster with `kubectl` as before, and check the status and that the pods start up. + +If you run `kubectl get pods` immediately after the apply, you should see that the pods status will be "Running", but will show "0/1" in the ready column, until the probe runs & passes for the first time. + +## πŸ” Secrets + +Remember how we had the MongoDB password visible in plain text in two of our deployment YAML manifests? +Blergh! 🀒 Now is the time to address that, we can create a Kubernetes _Secret_, which is a configuration resource which can be used to store sensitive information. + +_Secrets_ can be created using a YAML file just like every resource in Kubernetes, but instead we'll use the `kubectl create` command to imperatively create the resource from the command line, as follows: + +```bash +kubectl create secret generic mongo-creds \ +--from-literal admin-password=supersecret \ +--from-literal connection-string=mongodb://admin:supersecret@database +``` + +_Secrets_ can contain multiple keys, here we add two keys one for the password called `admin-password`, +and one for the connection string called `connection-string`, both reside in the new _Secret_ called `mongo-creds`. + +_Secrets_ can use used a number of ways, but the easiest way is to consume them, is as environmental variables passed into your containers. +Update the deployment YAML for your data API, and MongoDB, replace the references to `MONGO_INITDB_ROOT_PASSWORD` and `MONGO_CONNSTR` as shown below: + +```yaml +# Place this in MongoDB deployment, replacing existing reference to MONGO_INITDB_ROOT_PASSWORD +- name: MONGO_INITDB_ROOT_PASSWORD + valueFrom: + secretKeyRef: + name: mongo-creds + key: admin-password +``` + +```yaml +# Place this in data API deployment, replacing existing reference to MONGO_CONNSTR +- name: MONGO_CONNSTR + valueFrom: + secretKeyRef: + name: mongo-creds + key: connection-string +``` + +> πŸ“ NOTE: _Secrets_ are encrypted at rest by AKS however anyone with the relevant access to the cluster will be able to read the _Secrets_ (they are simply base-64 encoded) using kubectl or the Kubernetes API. +> If you want further encryption and isolation a number of options are available including Mozilla SOPS, Hashicorp Vault and Azure Key Vault. 
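For reference, the same _Secret_ could be written declaratively as a YAML manifest; a minimal sketch equivalent to the `kubectl create secret` command above, using `stringData` so the values don't have to be base-64 encoded by hand:

```yaml
kind: Secret
apiVersion: v1

metadata:
  name: mongo-creds

# stringData accepts plain text, Kubernetes stores it base-64 encoded
stringData:
  admin-password: supersecret
  connection-string: mongodb://admin:supersecret@database
```

Committing a file like this to source control would of course leak the credentials, which is one reason the tools mentioned in the note above exist.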
+ +## πŸ” Reference Manifests + +If you get stuck and want working manifests you can refer to, they are available here: + +- [data-api-deployment.yaml](data-api-deployment.yaml) +- [frontend-deployment.yaml](frontend-deployment.yaml) +- [mongo-deployment.yaml](mongo-deployment.yaml) + - Bonus: This manifest shows how to add a probe using an executed command, rather than HTTP, use it if you wish, but it's optional. + +## Navigation + +[Return to Main Index 🏠](../readme.md) β€– +[Previous Section βͺ](../06-frontend/readme.md) β€– [Next Section ⏩](../08-helm-ingress/readme.md) diff --git a/kube-developper-workshop/08-helm-ingress/data-api-service.yaml b/kube-developper-workshop/08-helm-ingress/data-api-service.yaml new file mode 100644 index 00000000..7b14e241 --- /dev/null +++ b/kube-developper-workshop/08-helm-ingress/data-api-service.yaml @@ -0,0 +1,14 @@ +kind: Service +apiVersion: v1 + +metadata: + name: data-api + +spec: + type: ClusterIP + selector: + app: data-api + ports: + - protocol: TCP + port: 80 + targetPort: 4000 diff --git a/kube-developper-workshop/08-helm-ingress/diagram.png b/kube-developper-workshop/08-helm-ingress/diagram.png new file mode 100644 index 00000000..dbb27ef5 Binary files /dev/null and b/kube-developper-workshop/08-helm-ingress/diagram.png differ diff --git a/kube-developper-workshop/08-helm-ingress/frontend-deployment.yaml b/kube-developper-workshop/08-helm-ingress/frontend-deployment.yaml new file mode 100644 index 00000000..eb7e5584 --- /dev/null +++ b/kube-developper-workshop/08-helm-ingress/frontend-deployment.yaml @@ -0,0 +1,43 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: frontend + +spec: + replicas: 1 + selector: + matchLabels: + app: frontend + template: + metadata: + labels: + app: frontend + spec: + containers: + - name: frontend-container + + image: {ACR_NAME}.azurecr.io/smilr/frontend:stable + imagePullPolicy: Always + + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + cpu: 100m + memory: 100Mi + + readinessProbe: + httpGet: + path: / + port: 3000 + initialDelaySeconds: 0 + periodSeconds: 5 + + ports: + - containerPort: 3000 + + env: + - name: API_ENDPOINT + value: /api diff --git a/kube-developper-workshop/08-helm-ingress/frontend-service.yaml b/kube-developper-workshop/08-helm-ingress/frontend-service.yaml new file mode 100644 index 00000000..cb7e0dda --- /dev/null +++ b/kube-developper-workshop/08-helm-ingress/frontend-service.yaml @@ -0,0 +1,14 @@ +kind: Service +apiVersion: v1 + +metadata: + name: frontend + +spec: + type: ClusterIP + selector: + app: frontend + ports: + - protocol: TCP + port: 80 + targetPort: 3000 diff --git a/kube-developper-workshop/08-helm-ingress/ingress.yaml b/kube-developper-workshop/08-helm-ingress/ingress.yaml new file mode 100644 index 00000000..fad00288 --- /dev/null +++ b/kube-developper-workshop/08-helm-ingress/ingress.yaml @@ -0,0 +1,29 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress + +metadata: + name: my-app + labels: + name: my-app + +spec: + host: + ingressClassName: nginx + rules: + - http: + paths: + - pathType: Prefix + path: "/" + backend: + service: + name: frontend + port: + number: 80 + + - pathType: Prefix + path: "/api" + backend: + service: + name: data-api + port: + number: 80 diff --git a/kube-developper-workshop/08-helm-ingress/kuberntes-ingress.png b/kube-developper-workshop/08-helm-ingress/kuberntes-ingress.png new file mode 100644 index 00000000..df2043fc Binary files /dev/null and b/kube-developper-workshop/08-helm-ingress/kuberntes-ingress.png differ 
diff --git a/kube-developper-workshop/08-helm-ingress/readme.md b/kube-developper-workshop/08-helm-ingress/readme.md new file mode 100644 index 00000000..0da78967 --- /dev/null +++ b/kube-developper-workshop/08-helm-ingress/readme.md @@ -0,0 +1,178 @@ +# 🌎 Helm & Ingress + +For this section we'll touch on two slightly more advanced topics, the key ones being the use of Helm and introducing an ingress controller to our cluster. +The ingress will let us further refine & improve the networking aspects of the app we've deployed. + +## πŸ—ƒοΈ Namespaces + +So far we've worked in a single _Namespace_ called `default`, but Kubernetes allows you create additional _Namespaces_ in order to logically group and separate your resources. + +> πŸ“ NOTE: Namespaces do not provide a network boundary or isolation of workloads, and the underlying resources (Nodes) remain shared. +> There are ways to achieve these outcomes, but is well beyond the scope of this workshop. + +Create a new namespace called `ingress`: + +```bash +kubectl create namespace ingress +``` + +Namespaces are simple idea but they can trip you up, you will have to add `--namespace` or `-n` to any `kubectl` commands you want to use against a particular namespace. +The following alias can be helpful to set a namespace as the default for all `kubectl` commands, meaning you don't need to add `-n`, think of it like a Kubernetes equivalent of the `cd` command. + +```bash +# Note the space at the end +alias kubens='kubectl config set-context --current --namespace ' +``` + +## ⛑️ Introduction to Helm + +[Helm is an CNCF project](https://helm.sh/) which can be used to greatly simplify deploying applications to Kubernetes, either applications written and developed in house, or external 3rd party software & tools. + +- Helm simplifies deployment into Kubernetes using _charts_, when a chart is deployed it is refereed to as a _release_. +- A _chart_ consists of one or more Kubernetes YAML templates + supporting files. +- Helm charts support dynamic parameters called _values_. Charts expose a set of default _values_ through their `values.yaml` file, and these _values_ can be set and over-ridden at _release_ time. +- The use of _values_ is critical for automated deployments and CI/CD. +- Charts can referenced through the local filesystem, or in a remote repository called a _chart repository_. + The can also be kept in a container registry but that is an advanced and experimental topic. +- To use Helm, the Helm CLI tool `helm` is required. + +Well add the Helm chart repository for the ingress we will be deploying, this is done with the `helm repo` command. +This is a public repo & chart of the extremely popular NGINX ingress controller (more on that below). + +> πŸ“ NOTE: The repo name `ingress-nginx` can be any name you wish to pick, but the URL has to be pointing to the correct place. + +```bash +helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx + +helm repo update +``` + +## πŸš€ Deploying The Ingress Controller + +An [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) provides a reliable and secure way to route HTTP and HTTPS traffic into your cluster and expose your applications from a single point of ingress; hence the name. + +![Ingress controller diagram showing routing of traffic to backend services](./kuberntes-ingress.png) + +- The controller is simply an instance of a HTTP reverse proxy running in one or mode _Pods_ with a _Service_ in front of it. 
+- It implements the [Kubernetes controller pattern](https://kubernetes.io/docs/concepts/architecture/controller/#controller-pattern) +scanning for _Ingress_ resources to be created in the cluster, when it finds one, it reconfigures itself based on the rules and configuration within that _Ingress_, in order to route traffic. +- There are [MANY ingress controllers available](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#additional-controllers) but we will use a very common and simple one, the [NGINX ingress controller](https://kubernetes.github.io/ingress-nginx/) maintained by the Kubernetes project. +- Often TLS is terminated by the ingress controller, and sometimes other tasks such as JWT validation for authentication can be done at this level. + For the sake of this workshop no TLS & HTTPS will be used due to the dependencies it requires (such as DNS, cert management etc). + +Helm greatly simplifies setting this up, down to a single command. Run the following: + +```bash +helm install my-ingress ingress-nginx/ingress-nginx \ + --namespace ingress \ + --set controller.replicaCount=2 +``` + +- The release name is `my-ingress` which can be anything you wish, it's often used by chart templates to prefix the names of created resources. +- The second parameter is a reference to the chart, in the form of `repo-name/chart-name`, if we wanted to use a local chart we'd simply reference the path to the chart directory. +- The `--set` part is where we can pass in values to the release, in this case we increase the replicas to two, purely as an example. + +Check the status of both the pods and services with `kubectl get svc,pods --namespace ingress`, ensure the pods are running and the service has an external public IP. + +You can also use the `helm` CLI to query the status, here's some simple and common commands: + +- `helm ls` or `helm ls -A` - List releases or list releases in all namespaces. +- `helm upgrade {release-name} {chart}` - Upgrade/update a release to apply changes. Add `--install` + to perform an install if the release doesn't exist. +- `--dry-run` - Add this switch to install or upgrade commands to get a view of the resources and + YAML that would be created, without applying them to the cluster. +- `helm get values {release-name}` - Get the values that were used to deploy a release. +- `helm delete {release-name}` - Remove the release and all the resources. + +## πŸ”€ Reconfiguring The App With Ingress + +Now we can modify the app we've deployed to route through the new ingress, but a few simple changes +are required first. As the ingress controller will be routing all requests, the services in front of +the deployments should be switched back to internal i.e. `ClusterIP`. + +- Edit both the data API & frontend **service** YAML manifests, change the service type to `ClusterIP` + then reapply with `kubectl apply` +- Edit the frontend **deployment** YAML manifest, change the `API_ENDPOINT` environmental variable + to use the same origin URI `/api` no need for a scheme or host. + +Apply these three changes with `kubectl` and now the app will be temporarily unavailable. Note, if +you have changed namespace with `kubens` you should switch back to the **default** namespace before +running the apply. + +The next thing is to configure the ingress by [creating an _Ingress_ resource](https://kubernetes.io/docs/concepts/services-networking/ingress/). 
+This can be a fairly complex resource to set up, but it boils down to a set of HTTP path mappings +(routes) and the backend service that should serve each of them. +Here is the completed manifest file `ingress.yaml`: + +
+Click here for the Ingress YAML + +```yaml +apiVersion: networking.k8s.io/v1 +kind: Ingress + +metadata: + name: my-app + labels: + name: my-app + +spec: + # Important we leave this blank, as we don't have DNS configured + # Blank means these rules will match ALL HTTP requests hitting the controller IP + host: + # This is important and required since Kubernetes 1.22 + ingressClassName: nginx + rules: + - http: + paths: + # Routing for the frontend + - pathType: Prefix + path: "/" + backend: + service: + name: frontend + port: + number: 80 + + # Routing for the API + - pathType: Prefix + path: "/api" + backend: + service: + name: data-api + port: + number: 80 +``` + +
+
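+
+Once this manifest is applied and the ingress has been assigned an address (both covered next), a quick sanity check of the two routing rules can be done with curl. This is just a suggested check, with `{INGRESS_IP}` standing in for the controller's external IP:
+
+```bash
+# Should be served by the frontend service
+curl -i http://{INGRESS_IP}/
+
+# Should be routed to the data API (this endpoint is used again later for load testing)
+curl -i http://{INGRESS_IP}/api/info
+```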
+ +Apply the same as before with `kubectl`, validate the status with: + +```bash +kubectl get ingress +``` + +It may take it a minute for it to be assigned an address, note the address will be the same as the external IP of the ingress-controller. +You can check this with: + +```sh +kubectl get svc -n ingress | grep LoadBalancer +``` + +Visit this IP in your browser, if you check the "About" screen and click the "More Details" link it +should take you to the API, which should now be served from the same IP as the frontend. + +## πŸ–ΌοΈ Cluster & Architecture Diagram + +We've reached the final state of the application deployment. The resources deployed into the cluster +& in Azure at this stage can be visualized as follows: + +![architecture diagram](./diagram.png) + +This is a slightly simplified version from previously, and the _Deployment_ objects are not shown. + +## Navigation + +[Return to Main Index 🏠](../readme.md) β€– +[Previous Section βͺ](../07-improvements/readme.md) β€– [Next Section ⏩](../09-extra-advanced/readme.md) diff --git a/kube-developper-workshop/09-extra-advanced/mongo-statefulset.yaml b/kube-developper-workshop/09-extra-advanced/mongo-statefulset.yaml new file mode 100644 index 00000000..3f463156 --- /dev/null +++ b/kube-developper-workshop/09-extra-advanced/mongo-statefulset.yaml @@ -0,0 +1,63 @@ +kind: StatefulSet +apiVersion: apps/v1 + +metadata: + name: mongodb + +spec: + serviceName: mongodb + replicas: 1 + selector: + matchLabels: + app: mongodb + template: + metadata: + labels: + app: mongodb + spec: + containers: + - name: mongodb-container + + image: mongo:5.0 + imagePullPolicy: Always + + ports: + - containerPort: 27017 + + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + cpu: 500m + memory: 300Mi + + readinessProbe: + exec: + command: + - mongo + - --eval + - db.adminCommand('ping') + + env: + - name: MONGO_INITDB_ROOT_USERNAME + value: admin + - name: MONGO_INITDB_ROOT_PASSWORD + valueFrom: + secretKeyRef: + name: mongo-creds + key: admin-password + + volumeMounts: + - name: mongo-data + mountPath: /data/db + + volumeClaimTemplates: + - metadata: + name: mongo-data + spec: + accessModes: ["ReadWriteOnce"] + storageClassName: default + resources: + requests: + storage: 500M diff --git a/kube-developper-workshop/09-extra-advanced/readme.md b/kube-developper-workshop/09-extra-advanced/readme.md new file mode 100644 index 00000000..ac8ba39e --- /dev/null +++ b/kube-developper-workshop/09-extra-advanced/readme.md @@ -0,0 +1,283 @@ +# 🀯 Scaling, Stateful Workloads & Helm + +This final section touches on some slightly more advanced and optional concepts we've skipped over. +They aren't required to get a basic app up & running, but generally come up in practice and real +world use of Kubernetes. + +Feel free to do as much or as little of this section as you wish. + +## πŸ“ˆ Scaling + +Scaling is a very common topic and is always required in some form to meet business demand, handle +peak load and maintain application performance. There's fundamentally two approaches: manually scaling +and using dynamic auto-scaling. Along side that there are two dimensions to consider: + +- **Horizontal scaling**: This is scaling the number of application _Pods_, within the limits of the + resources available in the cluster. +- **Vertical or cluster scaling**: This is scaling the number of _Nodes_ in the cluster, and therefore + the total resources available. 
We won't be looking at this here, but you can [read the docs](https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler) + if you want to know more. + +Scaling stateless applications manually can be as simple as running the command to update the number +of replicas in a _Deployment_, for example: + +```bash +kubectl scale deployment data-api --replicas 4 +``` + +Naturally this can also be done by updating the `replicas` field in the _Deployment_ manifest and +applying it. + +πŸ§ͺ **Experiment**: Try scaling the data API to a large number of pods e.g. 50 or 60 to see what happens? +If some of the _Pods_ remain in a "Pending" state can you find out the reason why? What effect does +changing the resource requests (for example increasing the memory to 600Mi) have on this? + +## 🚦 Autoscaling + +Horizontal auto scaling is performed with the _Horizontal Pod Autoscaler_ which you can [read about here](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/). +In essence it watches metrics emitted from the pods and other resources, and based on thresholds you +set, it will modify the number of replicas dynamically. + +To set up an _Horizontal Pod Autoscaler_ you can give it a deployment and some simple targets, as +follows: + +```bash +kubectl autoscale deployment data-api --cpu-percent=50 --min=2 --max=10 +``` + +
+This command is equivalent to deploying this HorizontalPodAutoscaler resource + +```yaml +kind: HorizontalPodAutoscaler +apiVersion: autoscaling/v1 +metadata: + name: data-api +spec: + maxReplicas: 10 + minReplicas: 2 + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: data-api + targetCPUUtilizationPercentage: 50 +``` + +
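+
+As well as watching the pods, you can keep an eye on the autoscaler itself. Assuming the HPA was created as above, these standard kubectl commands show the current vs target CPU and replica count, plus any scaling events:
+
+```bash
+# Watch the HPA status while the load test below runs
+kubectl get hpa data-api --watch
+
+# More detail, including scaling events and conditions
+kubectl describe hpa data-api
+```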
+ +Run this in a separate terminal window to watch the status and number of pods: + +```bash +watch -n 3 kubectl get pods +``` + +Now generate some fake load by hitting the `/api/info` endpoint with lots of requests. We use a tool +called `hey` to do this easily and run 20 concurrent requests for 3 minutes + +```bash +wget https://hey-release.s3.us-east-2.amazonaws.com/hey_linux_amd64 +chmod +x hey_linux_amd64 +./hey_linux_amd64 -z 180s -c 20 http://{INGRESS_IP}/api/info +``` + +After about 1~2 mins you should see new data-api pods being created. Once the `hey` command completes +and the load stops, it will probably be around ~5 mins before the pods scale back down to their +original number. + +## πŸ›’οΈ Improving The MongoDB Backend + +There's two very major problems with our backend database: + +- There's only a single instance, i.e. one Pod, introducing a serious single point of failure. +- The data held by MongoDB is ephemeral and if the Pod was terminated for any reason, we'd lose all + application data. Not very good! + +πŸ›‘ **IMPORTANT NOTE**: As a rule it's a bad idea and an "anti-pattern" to run stateful services in +Kubernetes. Managing them is complex and time consuming. It's **strongly recommended** use PaaS data +offerings which reside outside your cluster and can be managed independently and easily. We will +continue to keep MongoDB running in the cluster purely as a learning exercise. + +We can’t simply horizontally scale out the MongoDB _Deployment_ with multiple _Pod_ replicas as it +is stateful, i.e. it holds data and state. We'd create a ["split brain" situation](https://www.45drives.com/community/articles/what-is-split-brain/) +as requests are routed to different Pods. + +Kubernetes does provide a [feature](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) +called _StatefulSets_ which greatly helps with the complexities of running multiple stateful services +across in a cluster. + +⚠️ HOWEVER! _StatefulSets_ are not a magic wand - any stateful workload such as a database (e.g. MongoDB), +**still needs to be made aware** it is running in multiple places and handle the data +synchronization/replication. This can be setup for MongoDB, but is deemed too complex for this +workshop. + +However we can address the issue of data persistence. + +πŸ§ͺ **Optional Experiment**: Try using the app and adding an event using the "Admin" screens, then +run `kubectl delete pod {mongo-pod-name}` You will see that Kubernetes immediately restarts it. +However when the app recovers and reconnects to the DB, you will see the data you created is gone. + +To resolve the data persistence issues, we need do three things: + +- Change the MongoDB _Deployment_ to a _StatefulSet_ with a single replica. +- Add a `volumeMount` to the container mapped to the `/data/db` filesystem, which is where the + mongodb process stores its data. +- Add a `volumeClaimTemplate` to dynamically create a _PersistentVolume_ and a _PersistentVolumeClaim_ + for this _StatefulSet_. Use the "default" _StorageClass_ and request a 500M volume which is dedicated + with the "ReadWriteOnce" access mode. + +The relationships between these in AKS and Azure, can be explained with a diagram: + +![persistent volume claims](https://docs.microsoft.com/azure/aks/media/concepts-storage/persistent-volume-claims.png) + +_PersistentVolumes_, _PersistentVolumeClaims_, _StorageClasses_, etc. 
are deep and complex topics +in Kubernetes; if you want to start reading about them, there is a wealth of information in +[the docs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). However, for now it is suggested +you simply take the YAML below: + +
+Completed MongoDB StatefulSet YAML manifest + +```yaml +kind: StatefulSet +apiVersion: apps/v1 + +metadata: + name: mongodb + +spec: + serviceName: mongodb + replicas: 1 # Important we leave this as 1 + selector: + matchLabels: + app: mongodb + template: + metadata: + labels: + app: mongodb + spec: + containers: + - name: mongodb-container + + image: mongo:5.0 + imagePullPolicy: Always + + ports: + - containerPort: 27017 + + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + cpu: 500m + memory: 300Mi + + readinessProbe: + exec: + command: + - mongo + - --eval + - db.adminCommand('ping') + + env: + - name: MONGO_INITDB_ROOT_USERNAME + value: admin + - name: MONGO_INITDB_ROOT_PASSWORD + valueFrom: + secretKeyRef: + name: mongo-creds + key: admin-password + + volumeMounts: + - name: mongo-data + mountPath: /data/db + + volumeClaimTemplates: + - metadata: + name: mongo-data + spec: + accessModes: ["ReadWriteOnce"] + storageClassName: default + resources: + requests: + storage: 500M +``` + +
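+
+Once the new manifest has been applied (described next), it can be instructive to look at the storage objects the claim template creates. Note that the PVC name is generated from the template name plus the pod name, so the second command below assumes the names used in the manifest above:
+
+```bash
+# List the volume and claim created for the StatefulSet
+kubectl get pv,pvc
+
+# Inspect the claim bound to the mongodb-0 pod
+kubectl describe pvc mongo-data-mongodb-0
+```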
+ +Save this as `mongo-statefulset.yaml`, remove the old deployment with `kubectl delete deployment mongodb` +and apply the new `mongo-statefulset.yaml` file. Some comments: + +- When you run `kubectl get pods` you will see the pod name ends `-0` rather than a random hash. +- Running `kubectl get pv,pvc` you will see the new _PersistentVolume_ and _PersistentVolumeClaim_ + that have been created. The _Pod_ might take a little while to start while the volume is created + and "bound" to it. + +If you repeat the experiment above, you should see that the data is maintained after you delete the +`mongodb-0` pod and it restarts. + +## πŸ’₯ Installing The App with Helm + +The Smilr app we have been working with comes with a Helm chart, which you can take a look at here: +[Smilr Helm Chart](https://github.com/benc-uk/smilr/tree/master/kubernetes/helm/smilr). + +With this we can deploy the entire app, all the deployments, pods, services, ingress, etc. with a single +command. Naturally if we had done this from the beginning there wouldn't have been much scope +for learning! + +However as this is the final section, now might be a good time to try it. Due to some limitations +(mainly the lack of public DNS), only one deployment of the app can function at any given time. So you +will need to remove what you have currently deployed, by running: + +```bash +kubectl delete deploy,sts,svc,ingress --all +``` + +Fetch the chart and download it locally; this is needed because the chart isn't published in a Helm repo: + +```bash +curl -sL https://github.com/benc-uk/smilr/releases/download/2.9.8a/smilr-chart.tar.gz | tar -zx +``` + +Create a values file for your release: + +```yaml +registryPrefix: {ACR_NAME}.azurecr.io/ + +ingress: +  className: nginx + +dataApi: +  imageTag: stable +  replicas: 2 + +frontend: +  imageTag: stable +  replicas: 1 + +mongodb: +  enabled: true +``` + +Save it as `my-values.yaml`, then run a command to tell Helm to fetch any dependencies. In this case +the Smilr chart uses the [Bitnami MongoDB chart](https://github.com/bitnami/charts/tree/master/bitnami/mongodb). +To fetch/update this simply run: + +```bash +helm dependency update ./smilr +``` + +Finally we are ready to deploy the Smilr app using Helm. The release name can be anything you wish, +and you should point to the local directory where the chart has been downloaded and extracted: + +```bash +helm install myapp ./smilr --values my-values.yaml +``` + +Validate the deployment as before with `helm` and `kubectl` and check you can access the app in the +browser.
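+
+Before opening the browser you can also validate the release from the command line. A minimal check, assuming the release name `myapp` used above:
+
+```bash
+# List releases and show the status of this one
+helm ls
+helm status myapp
+
+# Check the workloads, services and ingress it created
+kubectl get pods,svc,ingress
+```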
+ +## Navigation + +[Return to Main Index 🏠](../readme.md) β€– +[Previous Section βͺ](../08-helm-ingress/readme.md) β€– [Next Section ⏩](../10-gitops-flux/readme.md) diff --git a/kube-developper-workshop/10-gitops-flux/base/deployment.yaml b/kube-developper-workshop/10-gitops-flux/base/deployment.yaml new file mode 100644 index 00000000..a90bb7e5 --- /dev/null +++ b/kube-developper-workshop/10-gitops-flux/base/deployment.yaml @@ -0,0 +1,22 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: webserver +spec: + selector: + matchLabels: + app: webserver + template: + metadata: + labels: + app: webserver + spec: + containers: + - name: webserver + image: nginx + resources: + limits: + memory: "128Mi" + cpu: "500m" + ports: + - containerPort: 80 diff --git a/kube-developper-workshop/10-gitops-flux/base/kustomization.yaml b/kube-developper-workshop/10-gitops-flux/base/kustomization.yaml new file mode 100644 index 00000000..9c2d28b0 --- /dev/null +++ b/kube-developper-workshop/10-gitops-flux/base/kustomization.yaml @@ -0,0 +1,4 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: + - deployment.yaml diff --git a/kube-developper-workshop/10-gitops-flux/gitops.png b/kube-developper-workshop/10-gitops-flux/gitops.png new file mode 100644 index 00000000..d919ba84 Binary files /dev/null and b/kube-developper-workshop/10-gitops-flux/gitops.png differ diff --git a/kube-developper-workshop/10-gitops-flux/overlay/kustomization.yaml b/kube-developper-workshop/10-gitops-flux/overlay/kustomization.yaml new file mode 100644 index 00000000..24a27535 --- /dev/null +++ b/kube-developper-workshop/10-gitops-flux/overlay/kustomization.yaml @@ -0,0 +1,18 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +# Reference to a base kustomization directory +resources: + - ../base + +# You can add suffixes and prefixes +nameSuffix: -dev + +# Modify the image name or tags +images: + - name: nginx + newTag: 1.21-alpine + +# Apply patches to override and set other values +patches: + - ./override.yaml diff --git a/kube-developper-workshop/10-gitops-flux/overlay/override.yaml b/kube-developper-workshop/10-gitops-flux/overlay/override.yaml new file mode 100644 index 00000000..2ddb2376 --- /dev/null +++ b/kube-developper-workshop/10-gitops-flux/overlay/override.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: webserver + +spec: + template: + spec: + containers: + - name: webserver + resources: + limits: + cpu: 330m + env: + - name: SOME_ENV_VAR + value: Hello! diff --git a/kube-developper-workshop/10-gitops-flux/readme.md b/kube-developper-workshop/10-gitops-flux/readme.md new file mode 100644 index 00000000..febe99d2 --- /dev/null +++ b/kube-developper-workshop/10-gitops-flux/readme.md @@ -0,0 +1,314 @@ +# 🧬 GitOps & Flux + +This is an advanced optional section going into two topics; Kustomize and also GitOps, using FluxCD. + +## πŸͺ“ Kustomize + +Kustomize is a tool for customizing Kubernetes configurations. + +Kustomize traverses Kubernetes manifests to add, remove or update configuration options. +It is available both as a [standalone binary](https://kubectl.docs.kubernetes.io/installation/kustomize/) and as a native feature of kubectl. It can be thought of as similar to Helm where it provides a means to template and parameterize Kubernetes manifests. + +Kustomize works by looking for `kustomization.yaml` files and operating on their contents. 
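+
+Because it is built into kubectl there is nothing extra to install for this section. As a rough illustration, both of the commands below render the manifests described by a `kustomization.yaml` in a given directory (`./some-directory` is just a placeholder); the standalone CLI is optional and this workshop only uses the kubectl form:
+
+```bash
+# Using the kustomize feature built into kubectl
+kubectl kustomize ./some-directory
+
+# Equivalent command with the standalone binary, if installed
+kustomize build ./some-directory
+```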
+ +[These slides](https://speakerdeck.com/spesnova/introduction-to-kustomize) provide a fairly good introduction. + +To demonstrate Kustomize in practice, we can carry out a simple exercise. Create a new directory called `base`. + +Place the following two files into it: + +
+Contents of base/deployment.yaml + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: webserver +spec: + selector: + matchLabels: + app: webserver + template: + metadata: + labels: + app: webserver + spec: + containers: + - name: webserver + image: nginx + resources: + limits: + memory: "128Mi" + cpu: "500m" + ports: + - containerPort: 80 +``` + +
+ +
+Contents of base/kustomization.yaml + +```yaml +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: + - deployment.yaml +``` + +
+ +Now run kustomize via kubectl, giving it the path to the base directory as follows: + +```bash +kubectl kustomize ./base +``` + +You will see the YAML printed to stdout; as we've not provided any changes in the `kustomization.yaml`, +all we get is a 1:1 copy of the `deployment.yaml` file. This isn't very useful! + +To better understand what Kustomize can do, create a second directory at the same level as `base` +called `overlay`. + +
+Contents of overlay/override.yaml + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: webserver + +spec: + template: + spec: + containers: + - name: webserver + resources: + limits: + cpu: 330m + env: + - name: SOME_ENV_VAR + value: Hello! +``` + +
+ +
+Contents of overlay/kustomization.yaml + +```yaml +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +# Reference to a base kustomization directory +resources: + - ../base + +# You can add suffixes and prefixes +nameSuffix: -dev + +# Modify the image name or tags +images: + - name: nginx + newTag: 1.21-alpine + +# Apply patches to override and set other values +patches: + - ./override.yaml +``` + +
+ +Some points to highlight: + +- The _Kustomization_ adds a suffix to the names of resources. +- Also the _Kustomization_ changes the image tag to reference a specific tag. +- The patch `override.yaml` file looks a little like a regular Kubernetes _Deployment_ but it only + contains the part that will be patched/overlayed onto the base resource. On its own it's not a + valid manifest. + - The patch file sets fields in the base _Deployment_ such as changing the resource limits and + adding an extra environmental variable. + +See the [reference docs](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/) for +all the options available in the kustomization.yaml file. + +The file & directory structure should look as follows: + +```text +β”œβ”€β”€ base +β”‚ β”œβ”€β”€ deployment.yaml +β”‚ └── kustomization.yaml +└── overlay + β”œβ”€β”€ kustomization.yaml + └── override.yaml +``` + +> πŸ“ NOTE: The names "base" and "overlay" are not special, often "environments" is used instead of +> "overlay", with sub-directories for each environment. + +Now running: + +```bash +kubectl kustomize ./overlay +``` + +You will now see the overrides and modifications from the overlay applied to the base resources. With +the modified nginx image tag, different resource limits and additional env var. + +This could be applied to the cluster with the following command `kubectl -k ./overlay apply`, however +there is no need to do this. + +## GitOps & Flux + +GitOps is a methodology where you declaratively describe the entire desired state of your system using +git. This includes the apps, config, dashboards, monitoring and everything else. This means you can +use git branches and PR processes to enforce control of releases and provide traceability and +transparency. + +![gitops](./gitops.png) + +Kubernetes doesn't support this concept out of the box, it requires special controllers to be deployed +and manage this process. These controllers run inside the cluster, monitor git repositories for changes +and then make the required updates to the state of the cluster, through a process called reconciliation. + +We will use the [popular project FluxCD](https://fluxcd.io/) (also just called Flux or Flux v2), however +other projects are available such as ArgoCD and support from GitLab. + +As GitOps is a "pull" vs "push" approach, it also allows you to lock down your Kubernetes cluster, and +prevent developers and admins making direct changes with kubectl. + +> πŸ“ NOTE: GitOps is a methodology and an approach, it is not the name of a product. + +### πŸ’½ Install Flux into AKS + +[Flux is available as an AKS Extension](https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/tutorial-use-gitops-flux2) +which is intended to simplify installing Flux into your cluster & configuring it. As of Jan 2022, it +requires some extensions to the Azure CLI to be installed first. + +Add the CLI extensions with: + +```bash +az extension add -n k8s-configuration +az extension add -n k8s-extension +``` + +It also requires some [preview providers](https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/tutorial-use-gitops-flux2#for-azure-kubernetes-service-clusters) +to be enabled on your Azure subscription. Follow out these steps before proceeding, which can take +some time! + +Before we configure anything GitOps needs a git repo to work against. We'll use a fork of this repo, +to set this up: + +- Go to the repo for this workshop [https://github.com/benc-uk/](https://github.com/benc-uk/). 
+- Fork the repo to your own personal GitHub account, by clicking the 'Fork' button near the top right. + +Now to set up Flux, run the following command, replacing the `{YOUR_GITHUB_USER}` part with your +GitHub username you used for the fork: + +```bash +az k8s-configuration flux create \ + --resource-group ${RES_GROUP} --cluster-name ${AKS_NAME} \ + --name flux --namespace flux-system --cluster-type managedClusters --scope cluster \ + --url https://github.com/{YOUR_GITHUB_USER}/kube-workshop --branch main --interval 1m \ + --kustomization name=apps path=gitops/apps prune=true sync_interval=1m +``` + +This one command is doing a LOT of things, it's adding an extension to AKS, deploying Flux to the +cluster (with all the Pods and CRDs) and it's adding the _GitRepo_ to be scanned and checked. It will +take a few minutes to complete, be patient! + +Check the status of Flux with the following commands: + +```bash +kubectl get kustomizations -A + +kubectl get gitrepo -A + +kubectl get pod -n flux-system +``` + +You should also see a new namespace called "hello-world", check with `kubectl get ns` this has been +created by the `gitops/apps/hello-world.yaml` file in the repo and automatically applied by Flux. + +You can also view this configuration from the Azure portal, under the "GitOps" view under the AKS +cluster resource. + +### πŸš€ Deploying Resources + +Clone the kube-workshop repo you forked earlier and open the directory in VS Code or other editor. + +If you recall from the bootstrap command earlier we gave Flux a path within the repo to use and look +for configurations, which was `gitops/apps` directory. The contents of the whole of the `gitops` +directory is shown here. + +```text +gitops + β”œβ”€β”€ apps + β”‚ └── hello-world.yaml + β”œβ”€β”€ base + β”‚ β”œβ”€β”€ data-api + β”‚ β”‚ β”œβ”€β”€ deployment.yaml + β”‚ β”‚ β”œβ”€β”€ kustomization.yaml + β”‚ β”‚ └── service.yaml + β”‚ β”œβ”€β”€ frontend + β”‚ β”‚ β”œβ”€β”€ deployment.yaml + β”‚ β”‚ β”œβ”€β”€ ingress.yaml + β”‚ β”‚ β”œβ”€β”€ kustomization.yaml + β”‚ β”‚ └── service.yaml + β”‚ └── mongodb + β”‚ β”œβ”€β”€ kustomization.yaml + β”‚ └── mongo-statefulset.yaml + └── disabled + β”œβ”€β”€ mongodb + β”‚ β”œβ”€β”€ kustomization.yaml + β”‚ └── overrides.yaml + └── smilr + └── kustomization.yaml +``` + +The base directory provides us a library of Kustomization-based resources we can use, but as it's +outside of the `gitops/apps` path they will not be picked up by Flux. + +⚠️ **STOP!** Before we proceed, ensure the `mongo-creds` _Secret_ from the previous sections is still +in the default namespace. If you have deleted it, [hop back to section 7](../07-improvements/readme.md) +and quickly create it again. It's just a single command. Creating _Secrets_ using the GitOps approach +is problematic, as they need to be committed into a code repo. Flux supports solutions to this, such +as using [SOPS](https://fluxcd.io/docs/guides/mozilla-sops/) and +[Sealed Secrets](https://fluxcd.io/docs/guides/sealed-secrets/) but for an intro such as this, they +require too much extra setup, so we will skip over them. + +First let's deploy MongoDB using Flux: + +- Copy the `monogodb/` directory from "disabled" to "apps". + - Note the `kustomization.yaml` in here is pointing at the base directory `../../base/mongodb` and + overlaying it. +- Git commit these changes to the main branch and push up to GitHub. +- Wait for ~1 minute for Flux to rescan the git repo. +- Check for any errors with `kubectl get kustomizations -A`. 
+- Check the default namespace for the new MongoDB StatefulSet and Pod using + `kubectl get sts,pods -n default`. + +Next deploy the Smilr app: + +- Copy the `smilr/` directory from "disabled" to "apps". + - Note the `kustomization.yaml` in here is pointing at **several** base directories, for the app + data-api and frontend. +- Edit the ACR name in the `gitops/apps/smilr/kustomization.yaml` file. +- Git commit these changes to the main branch and push up to GitHub. +- Wait for ~1 minute for Flux to rescan the git repo. +- Check for any errors with `kubectl get kustomizations -A`. +- Check the default namespace for the new resources using `kubectl get deploy,pods,ingress -n default`. + +If you encounter problems or want to force the reconciliation you can use the `flux` CLI, e.g. +`flux reconcile source git flux-system`. + +If we wanted to deploy this app across multiple environments or multiple times, we could create +sub-directories under `apps/`, each containing different Kustomizations and modifying the deployment +to suit that environment. + +## Navigation + +[Return to Main Index 🏠](../readme.md) β€– +[Previous Section βͺ](../09-extra-advanced/readme.md) β€– [Next Section ⏩](../11-cicd-actions/readme.md) diff --git a/kube-developper-workshop/11-cicd-actions/build-release.yaml b/kube-developper-workshop/11-cicd-actions/build-release.yaml new file mode 100644 index 00000000..a8323af0 --- /dev/null +++ b/kube-developper-workshop/11-cicd-actions/build-release.yaml @@ -0,0 +1,76 @@ +name: CI Build & Release + +on: + workflow_dispatch: + push: + branches: ["main"] + +env: + ACR_NAME: __YOUR_ACR_NAME__ + IMAGE_TAG: ${{ github.run_id }} + +jobs: + buildJob: + name: "Build & push images" + runs-on: ubuntu-latest + steps: + - name: "Checkout app code repo" + uses: actions/checkout@v2 + with: + repository: benc-uk/smilr + + - name: "Authenticate to access ACR" + uses: docker/login-action@master + with: + registry: ${{ env.ACR_NAME }}.azurecr.io + username: ${{ env.ACR_NAME }} + password: ${{ secrets.ACR_PASSWORD }} + + - name: "Build & Push: data API" + run: | + docker buildx build . -f node/data-api/Dockerfile \ + -t $ACR_NAME.azurecr.io/smilr/data-api:$IMAGE_TAG \ + -t $ACR_NAME.azurecr.io/smilr/data-api:latest + docker push $ACR_NAME.azurecr.io/smilr/data-api:$IMAGE_TAG + + - name: "Build & Push: frontend" + run: | + docker buildx build . 
-f node/frontend/Dockerfile \ + -t $ACR_NAME.azurecr.io/smilr/frontend:$IMAGE_TAG \ + -t $ACR_NAME.azurecr.io/smilr/frontend:latest + docker push $ACR_NAME.azurecr.io/smilr/frontend:$IMAGE_TAG + + releaseJob: + name: "Release to Kubernetes" + runs-on: ubuntu-latest + # Only release from main + if: ${{ github.ref == 'refs/heads/main' }} + needs: buildJob + environment: + name: workshop-environment + url: http://__PUBLIC_IP_OF_CLUSTER__/ + steps: + - name: "Configure kubeconfig" + uses: azure/k8s-set-context@v2 + with: + method: kubeconfig + kubeconfig: ${{ secrets.CLUSTER_KUBECONFIG }} + + - name: "Sanity check Kubernetes" + run: kubectl get nodes + + - name: "Checkout app code repo" # Needed for the Helm chart + uses: actions/checkout@v2 + with: + repository: benc-uk/smilr + + - name: "Update chart dependencies" + run: helm dependency update ./kubernetes/helm/smilr + + - name: "Release app with Helm" + run: | + helm upgrade myapp ./kubernetes/helm/smilr --install --wait --timeout 120s \ + --set registryPrefix=$ACR_NAME.azurecr.io/ \ + --set frontend.imageTag=$IMAGE_TAG \ + --set dataApi.imageTag=$IMAGE_TAG \ + --set mongodb.enabled=true diff --git a/kube-developper-workshop/11-cicd-actions/readme.md b/kube-developper-workshop/11-cicd-actions/readme.md new file mode 100644 index 00000000..073e2530 --- /dev/null +++ b/kube-developper-workshop/11-cicd-actions/readme.md @@ -0,0 +1,275 @@ +# πŸ‘· CI/CD with Kubernetes + +This is an optional section detailing how to set up a continuous integration (CI) and continuous +deployment (CD) pipeline, which will deploy to Kubernetes using Helm. + +There are many CI/CD solutions available, we will use GitHub Actions, as it's easy to set up and most +developers will already have GitHub accounts. It assumes familiarity with git and basic GitHub usage +such as forking & cloning. + +> πŸ“ NOTE: This is not intended to be full guide or tutorial on GitHub Actions, you would be better +> off starting [here](https://docs.github.com/en/actions/learn-github-actions) or +> [here](https://docs.microsoft.com/en-us/learn/paths/automate-workflow-github-actions/?source=learn). + +## 🚩 Get Started with GitHub Actions + +We'll use a fork of this repo in order to set things up, but in principle you could also start with +an new/empty repo on GitHub. + +- Go to the repo for this workshop [https://github.com/benc-uk/kube-workshop](https://github.com/benc-uk/kube-workshop). +- Fork the repo to your own personal GitHub account, by clicking the 'Fork' button near the top right. +- Clone the forked repo from GitHub using git to your local machine. + +Inside the `.github/workflows` directory, create a new file called `build-release.yaml` and paste in +the contents: + +> πŸ“ NOTE: This is special directory path used by GitHub Actions! + +```yaml +# Name of the workflow +name: CI Build & Release + +# Triggers for running +on: + workflow_dispatch: # This allows manually running from GitHub web UI + push: + branches: ["main"] # Standard CI trigger when main branch is pushed + +# One job for building the app +jobs: + buildJob: + name: "Build & push images" + runs-on: ubuntu-latest + steps: + # Checkout code from another repo on GitHub + - name: "Checkout app code repo" + uses: actions/checkout@v2 + with: + repository: benc-uk/smilr +``` + +The comments in the YAML should hopefully explain what is happening. But in summary this will run a +short single step job that just checks out the code of the Smilr app repo. 
The name and filename do +not reflect the current function, but the intent of what we are building towards. + +Now commit the changes and push to the main branch, yes this is not a typical way of working, but +adding a code review or PR process would merely distract from what we are doing. + +The best place to check the status is from the GitHub web site and in the 'Actions' within your +forked repo, e.g. `https://github.com/{your-github-user}/kube-workshop/actions` you should be able +to look at the workflow run, the status, plus output & other details. + +> πŸ“ NOTE: It's unusual for the code you are building to be a in separate repo from the workflow(s), +> in most cases they will be in the same code base, however it doesn't really make any difference to +> the approach we will take. + +## ⌨️ Set Up GitHub CLI + +Install the GitHub CLI, this will make setting up the secrets required in the next part much more simple. +All commands below assume you are running them from within the path of the cloned repo on your local +machine. + +- On MacOS: [https://github.com/cli/cli#macos](https://github.com/cli/cli#macos) +- On Ubuntu/WSL: `curl -s https://raw.githubusercontent.com/benc-uk/tools-install/master/gh.sh | bash` + +Now login using the GitHub CLI, follow the authentication steps when prompted: + +```bash +gh auth login +``` + +Once the CLI is set up it, we can use it to create a [secret](https://docs.github.com/en/actions/security-guides/encrypted-secrets) +within your repo, called `ACR_PASSWORD`. We'll reference this secret in the next section. This combines +the Azure CLI and GitHub CLI into one neat way to get the credentials: + +```bash +gh secret set ACR_PASSWORD --body "$(az acr credential show --name $ACR_NAME --query "passwords[0].value" -o tsv)" +``` + +## πŸ“¦ Add CI Steps For Image Building + +The workflow, doesn't really do much, the applicaiton gets built and images created but they go nowhere. +So let's update the workflow YAML to carry out a build and push of the application container images. +We can do this using the code we've checked out in the previous workflow step. + +Add this as the YAML top level, e.g just under the `on:` section, change the `__YOUR_ACR_NAME__` +string to the name of the ACR you deployed previously (do not include the azurecr.io part). + +```yaml +env: + ACR_NAME: __YOUR_ACR_NAME__ + IMAGE_TAG: ${{ github.run_id }} +``` + +Add this section under the "Checkout app code repo" step in the job, it will require indenting to the +correct level: + +```yaml + - name: "Authenticate to access ACR" + uses: docker/login-action@master + with: + registry: ${{ env.ACR_NAME }}.azurecr.io + username: ${{ env.ACR_NAME }} + password: ${{ secrets.ACR_PASSWORD }} + + - name: "Build & Push: data API" + run: | + docker buildx build . -f node/data-api/Dockerfile \ + -t $ACR_NAME.azurecr.io/smilr/data-api:$IMAGE_TAG \ + -t $ACR_NAME.azurecr.io/smilr/data-api:latest + docker push $ACR_NAME.azurecr.io/smilr/data-api:$IMAGE_TAG + + - name: "Build & Push: frontend" + run: | + docker buildx build . -f node/frontend/Dockerfile \ + -t $ACR_NAME.azurecr.io/smilr/frontend:$IMAGE_TAG \ + -t $ACR_NAME.azurecr.io/smilr/frontend:latest + docker push $ACR_NAME.azurecr.io/smilr/frontend:$IMAGE_TAG +``` + +Save the file, commit and push to main just as before. Then check the status from the GitHub UI and +'Actions' page of your forked repo. + +The workflow now does three important things: + +- Authenticate to "login" to the ACR. 
+- Build the **smilr/data-api** image and tag as `latest` and also the GitHub run ID, which is unique + to every run of the workflow. Then push these images to the ACR. +- Do exactly the same for the **smilr/frontend** image. + +The "Build & push images" job and the workflow should take around 2~3 minutes to complete. + +## πŸ”Œ Connect To Kubernetes + +We'll be using an approach of "pushing" changes from the workflow pipeline to the cluster, really +exactly the same as we have been doing from our local machines with `kubectl` and `helm` commands. + +To do this we need a way to authenticate, so we'll use another GitHub secret and store the cluster +credentials in it. + +There's a very neat 'one liner' command you can run to do this. It's taking the output of the +`az aks get-credentials` command we ran previously and storing the result as a secret called +`CLUSTER_KUBECONFIG`. Run the following: + +```bash +gh secret set CLUSTER_KUBECONFIG --body "$(az aks get-credentials --name $AKS_NAME --resource-group $RES_GROUP --file -)" +``` + +Next add a second job called `releaseJob` to the workflow YAML, beware the indentation, +this should under the `jobs:` key + +```yaml + releaseJob: + name: "Release to Kubernetes" + runs-on: ubuntu-latest + if: ${{ github.ref == 'refs/heads/main' }} + needs: buildJob + + steps: + - name: "Configure kubeconfig" + uses: azure/k8s-set-context@v2 + with: + method: kubeconfig + kubeconfig: ${{ secrets.CLUSTER_KUBECONFIG }} + + - name: "Sanity check Kubernetes" + run: kubectl get nodes +``` + +This is doing a bunch of things so lets step through it: + +- This second job has a dependency on the previous build job, obviously we don't want to run a + release & deployment if the build has failed or hasn't finished! +- This job will only run if the code is in the `main` branch, which means we won't run deployments + on pull requests, this is a common practice. +- It uses the `azure/k8s-set-context` action and the `CLUSTER_KUBECONFIG` secret to + authenticate and point to our AKS cluster. +- We run a simple `kubectl` command to sanity check we are connected ok. + +Save the file, commit and push to main just as before, and check the status using the GitHub +actions page. + +## πŸͺ– Deploy using Helm + +Nearly there! Now we want to run `helm` in order to deploy the Smilr app into the cluster, but also +make sure it deploys from the images we just built and pushed. There's two ways for Helm to access +a chart, either using the local filesystem or a remote chart published to a chart repo. We'll be +using the first approach. The Smilr git repo contains a Helm chart for us to use, we'll check it out +and then run `helm` to release the chart. + +Add the following two steps to the releaseJob (beware indentation again!) + +```yaml + - name: "Checkout app code repo" # Needed for the Helm chart + uses: actions/checkout@v2 + with: + repository: benc-uk/smilr + + - name: "Update chart dependencies" + run: helm dependency update ./kubernetes/helm/smilr +``` + +You can save, commit and push at this point to run the workflow and check everything is OK, or push +onto the next step. + +Add one final step to the releaseJob, which runs the `helm upgrade` command to create or update a release. 
See the [full docs on this command](https://helm.sh/docs/helm/helm_upgrade/) + +```yaml + - name: "Release app with Helm" + run: | + helm upgrade myapp ./kubernetes/helm/smilr --install --wait --timeout 120s \ + --set registryPrefix=$ACR_NAME.azurecr.io/ \ + --set frontend.imageTag=$IMAGE_TAG \ + --set dataApi.imageTag=$IMAGE_TAG \ + --set mongodb.enabled=true +``` + +This command is doing an awful lot, so let's break it down: + +- `helm upgrade` tells Helm to upgrade an existing release, as we also pass `--install` this means + Helm will install it first if it doesn't exist. Think of it as create+update, or an "upsert" + operation. +- The release name is `myapp` but could be anything you wish, it will be used to prefix all the + resources in Kubernetes. +- The chart is referenced by filesystem path `./kubernetes/helm/smilr` which is why we checked out + the Smilr git repo before this step. The GitHub link to that directory + [is here of you are curious](https://github.com/benc-uk/smilr/tree/master/kubernetes/helm/smilr) +- The `--set` flags pass parameters into the chart for this release, which are the ACR name, plus + the image tags we just built. These are available as variables in our workflow `$ACR_NAME` and + `$IMAGE_TAG` +- The `--wait --timeout 120s` flags tell Helm to wait 2 minutes for the Kubernetes pods to start + +Phew! As you can see Helm is a powerful way to deploy apps to Kubernetes, sometimes with a single +command + +Once again save, commit and push, then check the status of the workflow. It's very likely you made +a mistake, keep committing & pushing to fix and re-run the workflow until it completes and runs +green. + +You can validate the deployment with the usual `kubectl get pods` command and `helm ls` to view +the Helm release. Hopefully all the pods should be running. + +## πŸ… Bonus - Environments + +GitHub has the concept of [environments](https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment), which are an abstraction representing a target set of +resources or a deployed application. This lets you use the GitHub UI to see the status of deployments +targeting that environment, and even give users a link to access it + +We can add an environment simply by adding the follow bit of YAML under the releaseJob job: + +```yaml + environment: + name: workshop-environment + url: http://__PUBLIC_IP_OF_CLUSTER__/ +``` + +Tip. The `environment` part needs to line up with the `needs` and `if` parts in the job YAML. 
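+
+For reference, the top of the release job would then look something like this, matching the reference `build-release.yaml` file that accompanies this section:
+
+```yaml
+  releaseJob:
+    name: "Release to Kubernetes"
+    runs-on: ubuntu-latest
+    # Only release from main
+    if: ${{ github.ref == 'refs/heads/main' }}
+    needs: buildJob
+    environment:
+      name: workshop-environment
+      url: http://__PUBLIC_IP_OF_CLUSTER__/
+    steps:
+      # ... steps as before
+```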
+ +The `name` can be anything you wish and the URL needs to point to the public IP address of your +cluster which you were referencing earlier, if you've forgotten it try running +`kubectl get svc -A | grep LoadBalancer | awk '{print $5}'` + +## Navigation + +[Return to Main Index 🏠](../readme.md) β€– +[Previous Section βͺ](../10-gitops-flux/readme.md) diff --git a/kube-developper-workshop/etc/Workshop Diagrams.pptx b/kube-developper-workshop/etc/Workshop Diagrams.pptx new file mode 100644 index 00000000..c276c67f Binary files /dev/null and b/kube-developper-workshop/etc/Workshop Diagrams.pptx differ diff --git a/kube-developper-workshop/etc/alternative.md b/kube-developper-workshop/etc/alternative.md new file mode 100644 index 00000000..81f7ff7e --- /dev/null +++ b/kube-developper-workshop/etc/alternative.md @@ -0,0 +1,45 @@ +# A placeholder section + +## πŸ’Ύ Install Tools + +```bash +curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" +sudo mv ./kubectl /usr/bin/kubectl +``` + +```bash +curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash +``` + +### πŸ₯Ύ Bootstrap Flux Into Cluster + +[Generate a GitHub personal access token](https://github.com/settings/tokens) (PAT) that can create repositories by checking all permissions under "repo", copy the token and set it into an environmental variable called `GITHUB_TOKEN` + +```bash +export GITHUB_TOKEN={NEW_TOKEN_VALUE} +``` + +Now fork this repo [github.com/benc-uk/kube-workshop](https://github.com/benc-uk/kube-workshop) to your own GitHub personal account. + +Run the Flux bootstrap which should point to your fork by setting the owner parameter to your GitHub username: + +```bash +flux bootstrap github \ + --owner=__CHANGE_ME__ \ + --repository=kube-workshop \ + --path=gitops/apps \ + --branch=main \ + --personal +``` + +Check the status of Flux with the following commands: + +```bash +kubectl get kustomizations -A + +kubectl get gitrepo -A + +kubectl get pod -n flux-system +``` + +You should also see a new namespace called "hello-world", check with `kubectl get ns` this has been created by the `gitops/apps/hello-world.yaml` file in the repo and automatically applied by Flux diff --git a/kube-developper-workshop/gitops/apps/hello-world.yaml b/kube-developper-workshop/gitops/apps/hello-world.yaml new file mode 100644 index 00000000..34b23cfa --- /dev/null +++ b/kube-developper-workshop/gitops/apps/hello-world.yaml @@ -0,0 +1,4 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: hello-world diff --git a/kube-developper-workshop/gitops/base/data-api/deployment.yaml b/kube-developper-workshop/gitops/base/data-api/deployment.yaml new file mode 100644 index 00000000..0b48ed03 --- /dev/null +++ b/kube-developper-workshop/gitops/base/data-api/deployment.yaml @@ -0,0 +1,46 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: data-api + +spec: + replicas: 2 + selector: + matchLabels: + app: data-api + template: + metadata: + labels: + app: data-api + spec: + containers: + - name: data-api-container + + image: data-api + imagePullPolicy: Always + + ports: + - containerPort: 4000 + + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + cpu: 100m + memory: 100Mi + + readinessProbe: + httpGet: + port: 4000 + path: /api/health + initialDelaySeconds: 0 + periodSeconds: 5 + + env: + - name: MONGO_CONNSTR + valueFrom: + secretKeyRef: + name: mongo-creds + key: connection-string diff --git 
a/kube-developper-workshop/gitops/base/data-api/kustomization.yaml b/kube-developper-workshop/gitops/base/data-api/kustomization.yaml new file mode 100644 index 00000000..0409e96c --- /dev/null +++ b/kube-developper-workshop/gitops/base/data-api/kustomization.yaml @@ -0,0 +1,6 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +namespace: default +resources: + - deployment.yaml + - service.yaml diff --git a/kube-developper-workshop/gitops/base/data-api/service.yaml b/kube-developper-workshop/gitops/base/data-api/service.yaml new file mode 100644 index 00000000..7b14e241 --- /dev/null +++ b/kube-developper-workshop/gitops/base/data-api/service.yaml @@ -0,0 +1,14 @@ +kind: Service +apiVersion: v1 + +metadata: + name: data-api + +spec: + type: ClusterIP + selector: + app: data-api + ports: + - protocol: TCP + port: 80 + targetPort: 4000 diff --git a/kube-developper-workshop/gitops/base/frontend/deployment.yaml b/kube-developper-workshop/gitops/base/frontend/deployment.yaml new file mode 100644 index 00000000..efc1c25e --- /dev/null +++ b/kube-developper-workshop/gitops/base/frontend/deployment.yaml @@ -0,0 +1,43 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: frontend + +spec: + replicas: 1 + selector: + matchLabels: + app: frontend + template: + metadata: + labels: + app: frontend + spec: + containers: + - name: frontend-container + + image: frontend + imagePullPolicy: Always + + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + cpu: 100m + memory: 100Mi + + readinessProbe: + httpGet: + path: / + port: 3000 + initialDelaySeconds: 0 + periodSeconds: 5 + + ports: + - containerPort: 3000 + + env: + - name: API_ENDPOINT + value: /api diff --git a/kube-developper-workshop/gitops/base/frontend/ingress.yaml b/kube-developper-workshop/gitops/base/frontend/ingress.yaml new file mode 100644 index 00000000..58fb7426 --- /dev/null +++ b/kube-developper-workshop/gitops/base/frontend/ingress.yaml @@ -0,0 +1,31 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress + +metadata: + name: my-app + labels: + name: my-app + +spec: + # Important we leave this blank, as we don't have DNS configured + # Blank means these rules will match ALL HTTP requests hitting the controller IP + # This is important and required since Kubernetes 1.22 + ingressClassName: nginx + rules: + - http: + paths: + - pathType: Prefix + path: "/" + backend: + service: + name: frontend + port: + number: 80 + + - pathType: Prefix + path: "/api" + backend: + service: + name: data-api + port: + number: 80 diff --git a/kube-developper-workshop/gitops/base/frontend/kustomization.yaml b/kube-developper-workshop/gitops/base/frontend/kustomization.yaml new file mode 100644 index 00000000..46ccf22c --- /dev/null +++ b/kube-developper-workshop/gitops/base/frontend/kustomization.yaml @@ -0,0 +1,7 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +namespace: default +resources: + - deployment.yaml + - service.yaml + - ingress.yaml diff --git a/kube-developper-workshop/gitops/base/frontend/service.yaml b/kube-developper-workshop/gitops/base/frontend/service.yaml new file mode 100644 index 00000000..cb7e0dda --- /dev/null +++ b/kube-developper-workshop/gitops/base/frontend/service.yaml @@ -0,0 +1,14 @@ +kind: Service +apiVersion: v1 + +metadata: + name: frontend + +spec: + type: ClusterIP + selector: + app: frontend + ports: + - protocol: TCP + port: 80 + targetPort: 3000 diff --git a/kube-developper-workshop/gitops/base/mongodb/kustomization.yaml 
b/kube-developper-workshop/gitops/base/mongodb/kustomization.yaml new file mode 100644 index 00000000..b798d557 --- /dev/null +++ b/kube-developper-workshop/gitops/base/mongodb/kustomization.yaml @@ -0,0 +1,8 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +namespace: default + +resources: + - statefulset.yaml + - service.yaml diff --git a/kube-developper-workshop/gitops/base/mongodb/service.yaml b/kube-developper-workshop/gitops/base/mongodb/service.yaml new file mode 100644 index 00000000..139811d1 --- /dev/null +++ b/kube-developper-workshop/gitops/base/mongodb/service.yaml @@ -0,0 +1,14 @@ +kind: Service +apiVersion: v1 + +metadata: + name: database + +spec: + type: ClusterIP + selector: + app: mongodb + ports: + - protocol: TCP + port: 27017 + targetPort: 27017 diff --git a/kube-developper-workshop/gitops/base/mongodb/statefulset.yaml b/kube-developper-workshop/gitops/base/mongodb/statefulset.yaml new file mode 100644 index 00000000..cb31eba9 --- /dev/null +++ b/kube-developper-workshop/gitops/base/mongodb/statefulset.yaml @@ -0,0 +1,53 @@ +kind: StatefulSet +apiVersion: apps/v1 + +metadata: + name: mongodb + +spec: + serviceName: mongodb + replicas: 1 + selector: + matchLabels: + app: mongodb + template: + metadata: + labels: + app: mongodb + spec: + containers: + - name: mongodb-container + image: notarealimage + imagePullPolicy: Always + + ports: + - containerPort: 27017 + + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + cpu: 500m + memory: 300Mi + + readinessProbe: + exec: + command: + - mongo + - --eval + - db.adminCommand('ping') + + volumeMounts: + - name: mongo-data + mountPath: /data/db + + volumeClaimTemplates: + - metadata: + name: mongo-data + spec: + accessModes: ["ReadWriteOnce"] + storageClassName: default + resources: + requests: + storage: 500M diff --git a/kube-developper-workshop/gitops/disabled-k3s/mongodb/kustomization.yaml b/kube-developper-workshop/gitops/disabled-k3s/mongodb/kustomization.yaml new file mode 100644 index 00000000..358d7ede --- /dev/null +++ b/kube-developper-workshop/gitops/disabled-k3s/mongodb/kustomization.yaml @@ -0,0 +1,22 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: + - ../../base/mongodb + +images: + - name: notarealimage + newName: mongo:5.0 + +patches: + # patchesStrategicMerge + - path: overrides.yaml + + # patchesJson6902 + - target: + kind: StatefulSet + name: mongodb + patch: |- + - op: replace + path: /spec/volumeClaimTemplates/0/spec/storageClassName + value: local-path diff --git a/kube-developper-workshop/gitops/disabled-k3s/mongodb/overrides.yaml b/kube-developper-workshop/gitops/disabled-k3s/mongodb/overrides.yaml new file mode 100644 index 00000000..820234f2 --- /dev/null +++ b/kube-developper-workshop/gitops/disabled-k3s/mongodb/overrides.yaml @@ -0,0 +1,19 @@ +kind: StatefulSet +apiVersion: apps/v1 +metadata: + name: mongodb + namespace: default + +spec: + template: + spec: + containers: + - name: mongodb-container + env: + - name: MONGO_INITDB_ROOT_USERNAME + value: admin + - name: MONGO_INITDB_ROOT_PASSWORD + valueFrom: + secretKeyRef: + name: mongo-creds + key: admin-password diff --git a/kube-developper-workshop/gitops/disabled-k3s/smilr/kustomization.yaml b/kube-developper-workshop/gitops/disabled-k3s/smilr/kustomization.yaml new file mode 100644 index 00000000..3fce08e6 --- /dev/null +++ b/kube-developper-workshop/gitops/disabled-k3s/smilr/kustomization.yaml @@ -0,0 +1,12 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization 
+resources: + - ../../base/data-api + - ../../base/frontend +images: + - name: data-api + newName: {EDIT_THIS_ACR_NAME}.azurecr.io/smilr/data-api + newTag: stable + - name: frontend + newName: {EDIT_THIS_ACR_NAME}.azurecr.io/smilr/frontend + newTag: stable diff --git a/kube-developper-workshop/gitops/disabled/mongodb/kustomization.yaml b/kube-developper-workshop/gitops/disabled/mongodb/kustomization.yaml new file mode 100644 index 00000000..ce3654dc --- /dev/null +++ b/kube-developper-workshop/gitops/disabled/mongodb/kustomization.yaml @@ -0,0 +1,12 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: + - ../../base/mongodb + +images: + - name: notarealimage + newName: mongo:5.0 + +patches: + - overrides.yaml diff --git a/kube-developper-workshop/gitops/disabled/mongodb/overrides.yaml b/kube-developper-workshop/gitops/disabled/mongodb/overrides.yaml new file mode 100644 index 00000000..820234f2 --- /dev/null +++ b/kube-developper-workshop/gitops/disabled/mongodb/overrides.yaml @@ -0,0 +1,19 @@ +kind: StatefulSet +apiVersion: apps/v1 +metadata: + name: mongodb + namespace: default + +spec: + template: + spec: + containers: + - name: mongodb-container + env: + - name: MONGO_INITDB_ROOT_USERNAME + value: admin + - name: MONGO_INITDB_ROOT_PASSWORD + valueFrom: + secretKeyRef: + name: mongo-creds + key: admin-password diff --git a/kube-developper-workshop/gitops/disabled/smilr/kustomization.yaml b/kube-developper-workshop/gitops/disabled/smilr/kustomization.yaml new file mode 100644 index 00000000..3fce08e6 --- /dev/null +++ b/kube-developper-workshop/gitops/disabled/smilr/kustomization.yaml @@ -0,0 +1,12 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: + - ../../base/data-api + - ../../base/frontend +images: + - name: data-api + newName: {EDIT_THIS_ACR_NAME}.azurecr.io/smilr/data-api + newTag: stable + - name: frontend + newName: {EDIT_THIS_ACR_NAME}.azurecr.io/smilr/frontend + newTag: stable diff --git a/kube-developper-workshop/k3s/00-pre-reqs/readme.md b/kube-developper-workshop/k3s/00-pre-reqs/readme.md new file mode 100644 index 00000000..af62bdb8 --- /dev/null +++ b/kube-developper-workshop/k3s/00-pre-reqs/readme.md @@ -0,0 +1,66 @@ +# βš’οΈ Workshop Pre Requisites + +In this workshop you'll be creating a stand alone, single node K3s cluster on a VM. +This VM will be in essence a simulation of what it's like to setup and run a K3S cluster on your own physical device. +You'll also be interacting the cluster directly on the VM, as opposed to your local machine. +You'll be using your local machine to create the Azure resources however. + +As this is a completely hands on workshop, you will need a few things before you can start: + +- Access to an Azure Subscription where you can create resources. 
+- A good editor that you can SSH from, and [VS Code](https://code.visualstudio.com/) is strongly recommended + - [Visual Studio Code Remote Development extension](https://code.visualstudio.com/docs/remote/remote-overview) + - [Kubernetes extension](https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools) also highly recommended +- [Azure CLI](https://aka.ms/azure-cli) + +## 🌩️ Install Azure CLI + +To set up the Azure CLI on your system, use one of the methods below. + +On Ubuntu/Debian Linux, requires sudo: + +```bash +curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash +``` + +On MacOS, use homebrew: + +```bash +brew update && brew install azure-cli +``` + +If the commands above don't work, please refer to: [https://aka.ms/azure-cli](https://aka.ms/azure-cli) + +## 🔐 After Install - Login to Azure + +The rest of this workshop assumes you have access to an Azure subscription, and have the Azure CLI working & signed into the tenant & subscription you will be using. +Some Azure CLI commands to help you: + +- `az login` or `az login --tenant {TENANT_ID}` - Log in to the Azure CLI, use the `--tenant` switch + if you have multiple accounts. +- `az account set --subscription {SUBSCRIPTION_ID}` - Set the subscription the Azure CLI will use. +- `az account show -o table` - Show the subscription the CLI is configured to use. + +## 💲 Variables File + +Although not essential, it's advised to create a `vars.sh` file holding all the parameters that will +be common across many of the commands that will be run. This way you have a single point of reference +for them and they can be easily reset in the event of a session timing out or terminal closing. + +A sample `vars.sh` file is shown below, feel free to use any values you wish for the resource group, +region, cluster name etc. To use the file, simply source it through bash with `source vars.sh`; do this +before moving to the next stage. + +> 📝 NOTE: The ACR name must be globally unique and not contain dashes, dots, or underscores. + +```bash +RES_GROUP="kube-workshop" +REGION="westeurope" +VM_NAME="__change_me__" +ACR_NAME="__change_me__" +``` + +## Navigation + +[Return to Main Index 🏠](../../readme.md) +[Next Section ⏩](../01-cluster/readme.md) diff --git a/kube-developper-workshop/k3s/00-pre-reqs/vars.sh.sample b/kube-developper-workshop/k3s/00-pre-reqs/vars.sh.sample new file mode 100644 index 00000000..fe652372 --- /dev/null +++ b/kube-developper-workshop/k3s/00-pre-reqs/vars.sh.sample @@ -0,0 +1,4 @@ +RES_GROUP="kube-workshop" +REGION="westeurope" +VM_NAME="__change_me__" +ACR_NAME="__change_me__" \ No newline at end of file diff --git a/kube-developper-workshop/k3s/01-cluster/readme.md b/kube-developper-workshop/k3s/01-cluster/readme.md new file mode 100644 index 00000000..53678061 --- /dev/null +++ b/kube-developper-workshop/k3s/01-cluster/readme.md @@ -0,0 +1,111 @@ +# 🚦 Deploying Kubernetes + +Deploying a Kubernetes cluster can be extremely complex, with many networking, compute and other aspects to consider. +However for the purposes of this workshop, a default and basic K3s cluster can be deployed very quickly.
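+ +Before running any of the commands in this section, make sure the variables file from the previous section is loaded into your current shell. A minimal sketch, assuming you saved it as `vars.sh` in your working directory: + +```bash +# Load the workshop variables into this shell session +source vars.sh + +# Quick sanity check that the values are set +echo "$RES_GROUP $REGION $VM_NAME $ACR_NAME" +```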
+ +## πŸš€ Virtual Machine Deployment + +Use the following commands to create a VM and a resource group: + +```bash +# Create Azure resource group +az group create --name $RES_GROUP --location $REGION + +# Create cluster +az vm create \ + --resource-group $RES_GROUP \ + --name $VM_NAME \ + --image UbuntuLTS \ + --public-ip-sku Standard \ + --size Standard_D2s_v3 \ + --admin-username azureuser \ + --generate-ssh-keys + +# Open two additional ports on the VM, that'll be used later +az network nsg rule create --resource-group $RES_GROUP --nsg-name ${VM_NAME}NSG --name AllowNodePorts --protocol tcp --priority 1001 --destination-port-ranges 30036 30037 + +``` + +Save the VMs public IP and SSH key files for use in the next steps + +## 🌐Connect to the VM from VSCode + +To make creating files easier on the machine it's recommended to use [VS Code](https://code.visualstudio.com/) Remote extension with SSH to connect to the VM. +See the documentation [here](https://code.visualstudio.com/docs/remote/ssh) for more on developing on Remote Machines using SSH and Visual Studio Code. + +It's also highly recommended to get the [Kubernetes extension](https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools). + +## 🀘 Set up K3s cluster + +Run all of these commands inside of your VM. + +First, let's install the K3S cluster and tools in the VM: + +```sh +# Install kubectl +sudo apt-get update +sudo apt-get install -y apt-transport-https ca-certificates curl +sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg +echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list +sudo apt-get update +sudo apt-get install -y kubectl + +# Install K3S +curl -sfL https://get.k3s.io | sh - + +# Install helm +curl -s https://raw.githubusercontent.com/benc-uk/tools-install/master/helm.sh | bash + +# Optionally install Azure CLI +curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash + +``` + +> πŸ“ NOTE: Login into the Azure CLI if you've installed it. + +Let's connect your kubectl with k3s and allow your user permissions to access the cluster. + +```sh +echo "export KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> ~/.bashrc +sudo chown azureuser /etc/rancher/k3s/k3s.yaml +sudo chown azureuser /etc/rancher/k3s +``` + +Then let's set up the VM user profile for K3s to make it easier to run all the commands: + +```sh +echo "source <(kubectl completion bash)" >> ~/.bashrc +echo "alias k=kubectl" >> ~/.bashrc +echo "complete -o default -F __start_kubectl k" >> ~/.bashrc +echo "export PATH=$PATH:/home/azureuser/.local/bin" >> ~/.bashrc +``` + +Double check that everything in installed and working correctly with: + +```sh +# For bashrc changes to take affect in your current terminal, you must reload bashrc with: +. ~/.bashrc +# Try commands +k get pods -A +helm +``` + +## ⏯️ Appendix - Stopping & Starting the VM + +If you are concerned about the costs for running the VM you can stop and start it at any time. + +```bash +# Stop the VM +az vm stop --resource-group $RES_GROUP --name $AKS_NAME + +# Start the VM +az vm start --resource-group $RES_GROUP --name $AKS_NAME +``` + +> πŸ“ NOTE: Start and stop operations do take several minutes to complete, so typically you would perform +> them only at the start or end of the day. 
+ +## Navigation + +[Return to Main Index 🏠](../../readme.md) +[Previous Section ⏪](../00-pre-reqs/readme.md) ‖ [Next Section ⏩](../02-container-registry/readme.md) diff --git a/kube-developper-workshop/k3s/02-container-registry/readme.md b/kube-developper-workshop/k3s/02-container-registry/readme.md new file mode 100644 index 00000000..b9de5d25 --- /dev/null +++ b/kube-developper-workshop/k3s/02-container-registry/readme.md @@ -0,0 +1,85 @@ +# 📦 Container Registry & Images + +We will deploy & use a private registry to hold the application container images. This is not strictly +necessary as we could pull the images directly from a public registry, however using a private registry is +a more realistic approach. + +[Azure Container Registry](https://docs.microsoft.com/azure/container-registry/) is what we will be +using. + +## 🚀 ACR Deployment + +Deploying a new ACR is very simple: + +```bash +az acr create --name $ACR_NAME --resource-group $RES_GROUP \ +--sku Standard \ +--admin-enabled true +``` + +> 📝 NOTE: When you pick a name for the resource with $ACR_NAME, this has to be **globally unique**, +> and must not contain underscores, dots, or hyphens. + +## 📥 Importing Images + +For the sake of speed and maintaining the focus on Kubernetes we will import pre-built images from +another public registry (GitHub Container Registry), rather than build them from source. + +We will cover what the application does and what these containers are for in the next section, for +now we can just import them. + +To do so we use the `az acr import` command: + +```bash +# Import application frontend container image +az acr import --name $ACR_NAME --resource-group $RES_GROUP \ +--source ghcr.io/benc-uk/smilr/frontend:stable \ +--image smilr/frontend:stable + +# Import application data API container image +az acr import --name $ACR_NAME --resource-group $RES_GROUP \ +--source ghcr.io/benc-uk/smilr/data-api:stable \ +--image smilr/data-api:stable +``` + +If you wish to check the imported images, you can go to the ACR resource in the Azure portal, +and look in the 'Repositories' section. + +> 📝 NOTE: we are not using the tag `latest`, which is a common mistake when working with Kubernetes +> and containers in general. + +## 🔌 Connect K3s to ACR + +Kubernetes requires a way to authenticate and access images stored in private registries. There are +a number of ways to enable Kubernetes to pull images from a private registry, however K3s provides a +simple way to configure this through the `registries.yaml` file. The downside is this requires you to +manually add the file to your device/VM. + +On your VM, create the `registries.yaml` file with the following content: + +> 📝 NOTE: The password is retrieved with the Azure CLI; if you don't have the Azure CLI on the VM, you can +> just retrieve your ACR password from the portal and replace that section with your ACR password + +```sh +# Copy the ACR name from the vars.sh file created earlier or from Azure +ACR_NAME= +cat << EOT > /etc/rancher/k3s/registries.yaml +configs: + "$ACR_NAME.azurecr.io": + auth: + username: $ACR_NAME + password: $(az acr credential show --name $ACR_NAME --query "passwords[0].value" -o tsv) +EOT +# Verify the file was created with the right values +cat /etc/rancher/k3s/registries.yaml + +# Restart K3s for the change to take effect +sudo systemctl restart k3s +``` + +> To read more about how `registries.yaml` works, you can check out [Rancher Docs: Private Registry Configuration](https://rancher.com/docs/k3s/latest/en/installation/private-registry/).
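+ +If you prefer the command line to the portal, one way to confirm the two images were imported into the registry (run from wherever you have the Azure CLI signed in, assuming $ACR_NAME is still set) is: + +```bash +# List the repositories in the ACR, expect to see smilr/frontend and smilr/data-api +az acr repository list --name $ACR_NAME -o table + +# Check which tags exist for the data API image, expect to see 'stable' +az acr repository show-tags --name $ACR_NAME --repository smilr/data-api -o table +```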
+ +## Navigation + +[Return to Main Index 🏠](../../readme.md) +[Previous Section βͺ](../01-cluster/readme.md) β€– [Next Section ⏩](../03-the-application/readme.md) diff --git a/kube-developper-workshop/k3s/03-the-application/readme.md b/kube-developper-workshop/k3s/03-the-application/readme.md new file mode 100644 index 00000000..4e86bc8c --- /dev/null +++ b/kube-developper-workshop/k3s/03-the-application/readme.md @@ -0,0 +1,9 @@ +# The Application + +Refer to the version in the [readme.md](../../03-the-application/readme.md) non-`k3s` +directory. + +## Navigation + +[Return to Main Index 🏠](../../readme.md) +[Previous Section βͺ](../02-container-registry/readme.md) β€– [Next Section ⏩](../04-deployment/readme.md) diff --git a/kube-developper-workshop/k3s/04-deployment/diagram.png b/kube-developper-workshop/k3s/04-deployment/diagram.png new file mode 100644 index 00000000..34e98a7c Binary files /dev/null and b/kube-developper-workshop/k3s/04-deployment/diagram.png differ diff --git a/kube-developper-workshop/k3s/04-deployment/readme.md b/kube-developper-workshop/k3s/04-deployment/readme.md new file mode 100644 index 00000000..34c41011 --- /dev/null +++ b/kube-developper-workshop/k3s/04-deployment/readme.md @@ -0,0 +1,9 @@ +# The Application + +Refer to the version in the [readme.md](../../04-deployment/readme.md) non-`k3s` +directory. + +## Navigation + +[Return to Main Index 🏠](../../readme.md) +[Previous Section βͺ](../03-the-application/readme.md) β€– [Next Section ⏩](../05-network-basics/readme.md) diff --git a/kube-developper-workshop/k3s/05-network-basics/data-api-deployment.yaml b/kube-developper-workshop/k3s/05-network-basics/data-api-deployment.yaml new file mode 100644 index 00000000..8854443b --- /dev/null +++ b/kube-developper-workshop/k3s/05-network-basics/data-api-deployment.yaml @@ -0,0 +1,28 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: data-api + +spec: + replicas: 2 + selector: + matchLabels: + app: data-api + template: + metadata: + labels: + app: data-api + spec: + containers: + - name: data-api-container + + image: {ACR_NAME}.azurecr.io/smilr/data-api:stable + imagePullPolicy: Always + + ports: + - containerPort: 4000 + + env: + - name: MONGO_CONNSTR + value: mongodb://admin:supersecret@database diff --git a/kube-developper-workshop/k3s/05-network-basics/data-api-service.yaml b/kube-developper-workshop/k3s/05-network-basics/data-api-service.yaml new file mode 100644 index 00000000..d969b30d --- /dev/null +++ b/kube-developper-workshop/k3s/05-network-basics/data-api-service.yaml @@ -0,0 +1,15 @@ +kind: Service +apiVersion: v1 + +metadata: + name: data-api + +spec: + type: NodePort + selector: + app: data-api + ports: + - protocol: TCP + port: 80 + targetPort: 4000 + nodePort: 30036 \ No newline at end of file diff --git a/kube-developper-workshop/k3s/05-network-basics/diagram.png b/kube-developper-workshop/k3s/05-network-basics/diagram.png new file mode 100644 index 00000000..cd519f8f Binary files /dev/null and b/kube-developper-workshop/k3s/05-network-basics/diagram.png differ diff --git a/kube-developper-workshop/k3s/05-network-basics/mongo-service.yaml b/kube-developper-workshop/k3s/05-network-basics/mongo-service.yaml new file mode 100644 index 00000000..139811d1 --- /dev/null +++ b/kube-developper-workshop/k3s/05-network-basics/mongo-service.yaml @@ -0,0 +1,14 @@ +kind: Service +apiVersion: v1 + +metadata: + name: database + +spec: + type: ClusterIP + selector: + app: mongodb + ports: + - protocol: TCP + port: 27017 + targetPort: 27017 diff 
--git a/kube-developper-workshop/k3s/05-network-basics/readme.md b/kube-developper-workshop/k3s/05-network-basics/readme.md new file mode 100644 index 00000000..d4437cd8 --- /dev/null +++ b/kube-developper-workshop/k3s/05-network-basics/readme.md @@ -0,0 +1,152 @@ +# 🌐 Basic Networking + +Pods are both ephemeral and "mortal", they should be considered effectively transient. +Kubernetes can terminate and reschedule pods for a whole range of reasons, including rolling updates, hitting resource limits, scaling up & down and other cluster operations. +With Pods being transient, you can not build a reliable architecture through addressing Pods directly (e.g. by name or IP address), because no part of a pod is static. + +Kubernetes solves this with _Services_, which act as a network abstraction over a group of pods, and have their own lifecycle. +We can use them to greatly improve what we've deployed. + +## 🧩 Deploy MongoDB Service + +Now to put a _Service_ in front of the MongoDB pods, if you want to create the service YAML yourself, you can [refer to the Kubernetes docs](https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service). + +- The type of _Service_ should be `ClusterIP` which means it's internal to the cluster only. +- The service port should be **27017**. +- The target port should be **27017**. +- Selector decides what pods are behind the service, in this case use the label `app` and the value + `mongodb`. + +> πŸ“ NOTE: Labels are optional metadata that can be added to any object in Kubernetes, they are simply key-value pairs. Labels can be used to organize and to select subsets of objects. +> The label "app" is commonly used, but has **no special meaning**, and isn't used by Kubernetes in any way + +Save your YAML into a file `mongo-service.yaml` or use the below YAML manifest for the service: + +
+Click here for the MongoDB service YAML + +```yaml +kind: Service +apiVersion: v1 + +metadata: + # We purposefully pick a different name for the service from the deployment + name: database + +spec: + type: ClusterIP + selector: + app: mongodb + ports: + - protocol: TCP + port: 27017 + targetPort: 27017 +``` + +
+ +Apply it to the cluster as before: + +```bash +kubectl apply -f mongo-service.yaml +``` + +You can use `kubectl` to examine the status of the _Service_ just like you can with _Pods_ and _Deployments_: + +```bash +# Get all services +kubectl get svc + +# Get details of a single service +kubectl describe svc {service-name} +``` + +> πŸ“ NOTE: The service called 'kubernetes' exists in every namespace and is placed there automatically, you can ignore it. + +πŸ›‘ **IMPORTANT NOTE**: As a rule it's a bad idea and generally considered an "anti-pattern" to run stateful services in Kubernetes. Managing them is complex and time consuming. +It's **strongly recommended** use PaaS data offerings which reside outside your cluster and can be managed independently and easily. +We will continue with MongoDB running in the cluster purely as a learning exercise. + +## πŸ“‘ Connect the API to MongoDB Service + +Now we have a Service in our cluster for MongoDB we can access the database using DNS rather than pod IP and if the pod(s) die or restart or move; this name remains constant. +DNS with Kubernetes is a complex topic we won't get into here, the main takeaway for now is: + +- Every _Service_ in the cluster can be resolved over DNS +- Within a _Namespace_, the _Service_ name will resolve as a simple hostname, without the need for a + DNS suffix [but other scenarios](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) + also are supported. + +Edit the the `data-api-deployment.yaml` file you created previously and change the value of the +`MONGO_CONNSTR` environmental variable. +Replace the IP address with name of the service, e.g. the connection string should look like `mongodb://admin:supersecret@database`. + +You can update the active deployment with these changes by re-running `kubectl apply -f data-api-deployment.yaml`. +Kuberenetes will perform a rolling update, if you are quick and run `kubectl get pods` you might see it taking place, i.e. a new pod starting & the old one terminating. +Again you can check the status and the logs using `kubectl`. + +## 🌍 Expose the Data API externally + +We can create a different type of _Service_ in front of the data API, in order to expose it outside of the cluster and also to the internet. +To do this use a Service with the type `NodePort`. +This service will then expose the traffic on IP address of the VM and the port specified as `nodePort`, in our case it'll be port `30036`. + +In a traditional cluster, like AKS, we would instead use _Service_ of type `LoadBalancer`. This then would be picked up by Azure and a public IP assigned and traffic routed through an Azure LoadBalancer in front of the cluster. +With a bare metal cluster there aren't any load balancers. + +> πŸ“° INFO: For more information on different service types, you can check out: [Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?](https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0) +> (We'll touch on ingress in later chapters as well.) + +We can also change the port at the _Service_ level, so the port exposed by the _Service_ doesn't need to match the one that the container is listening on. +In this case we'll re-map the port to **80**. + +Save your YAML into a file `data-api-service.yaml` from above or below. + + + +
+Click here for the data API service YAML + +```yaml +kind: Service +apiVersion: v1 + +metadata: + name: data-api + +spec: + type: NodePort + selector: + app: data-api + ports: + - protocol: TCP + port: 80 + targetPort: 4000 + nodePort: 30036 +``` + +
+ +Apply it to the cluster as before: + +```bash +kubectl apply -f data-api-service.yaml +``` + +Using `kubectl get svc` check the status. Then go to the address in your browser `http://{VM_IP}:30036/api/info/` and you should get the same JSON response as before. + +Clearly this is better than what we had before, but in production you would never expose traffic directly into your pods like this. +Later we can improve this yet further, but for now it will suffice. + +> πŸ“ NOTE: If your connection is timing out, make sure that the port is exposed on your VM. + +## πŸ–ΌοΈ Cluster & Architecture Diagram + +The resources deployed into the cluster & in Azure at this stage can be visualized as follows: + +![architecture diagram](./diagram.png) + +## Navigation + +[Return to Main Index 🏠](../../readme.md) +[Previous Section βͺ](../04-deployment/readme.md) β€– [Next Section ⏩](../06-frontend/readme.md) diff --git a/kube-developper-workshop/k3s/06-frontend/diagram.png b/kube-developper-workshop/k3s/06-frontend/diagram.png new file mode 100644 index 00000000..a51a2145 Binary files /dev/null and b/kube-developper-workshop/k3s/06-frontend/diagram.png differ diff --git a/kube-developper-workshop/k3s/06-frontend/frontend-deployment.yaml b/kube-developper-workshop/k3s/06-frontend/frontend-deployment.yaml new file mode 100644 index 00000000..ef711777 --- /dev/null +++ b/kube-developper-workshop/k3s/06-frontend/frontend-deployment.yaml @@ -0,0 +1,28 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: frontend + +spec: + replicas: 1 + selector: + matchLabels: + app: frontend + template: + metadata: + labels: + app: frontend + spec: + containers: + - name: frontend-container + + image: {ACR_NAME}.azurecr.io/smilr/frontend:stable + imagePullPolicy: Always + + ports: + - containerPort: 3000 + + env: + - name: API_ENDPOINT + value: http://{VM_IP}:30036/api diff --git a/kube-developper-workshop/k3s/06-frontend/frontend-service.yaml b/kube-developper-workshop/k3s/06-frontend/frontend-service.yaml new file mode 100644 index 00000000..05722ac1 --- /dev/null +++ b/kube-developper-workshop/k3s/06-frontend/frontend-service.yaml @@ -0,0 +1,15 @@ +kind: Service +apiVersion: v1 + +metadata: + name: frontend + +spec: + type: NodePort + selector: + app: frontend + ports: + - protocol: TCP + port: 80 + targetPort: 3000 + nodePort: 30037 \ No newline at end of file diff --git a/kube-developper-workshop/k3s/06-frontend/readme.md b/kube-developper-workshop/k3s/06-frontend/readme.md new file mode 100644 index 00000000..1761f685 --- /dev/null +++ b/kube-developper-workshop/k3s/06-frontend/readme.md @@ -0,0 +1,110 @@ +# πŸ’» Adding The Frontend + +We've ignored the frontend until this point, with the API and backend in place we are finally ready to deploy it. +We need to use a _Deployment_ and _Service_ just as before. We can pick up the pace a little and setup everything we need in one go. + +For the Deployment: + +- The image needs to be `{ACR_NAME}.azurecr.io/smilr/frontend:stable`. +- The port exposed from the container should be **3000** +- An environmental variable called `API_ENDPOINT` should be passed to the container, this needs to be a URL and should point to the VM IP and the `nodePort` the API is exposed from the previous part, as follows `http://{VM_IP}:30036/api`. +- Label the pods with `app: frontend`. + +For the Service: + +- The type of _Service_ should be `NodePort` same as the data API. +- The service port should be **80**. +- The target port should be **3000**. 
+- The node port should be **30037**. +- Use the label `app` and the value `frontend` for the selector. + +You might like to try creating the service before deploying the pods to see what happens. +The YAML you can use for both, is provided below: + +`frontend-deployment.yaml`: + +
+Click here for the frontend deployment YAML + +```yaml +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: frontend + +spec: + replicas: 1 + selector: + matchLabels: + app: frontend + template: + metadata: + labels: + app: frontend + spec: + containers: + - name: frontend-container + + image: {ACR_NAME}.azurecr.io/smilr/frontend:stable + imagePullPolicy: Always + + ports: + - containerPort: 3000 + + env: + - name: API_ENDPOINT + value: http://{VM_IP}:30036/api +``` + +
+ +`frontend-service.yaml`: + +
+Click here for the frontend service YAML + +```yaml +kind: Service +apiVersion: v1 + +metadata: + name: frontend + +spec: + type: NodePort + selector: + app: frontend + ports: + - protocol: TCP + port: 80 + targetPort: 3000 + nodePort: 30037 +``` + +
+ +As before, the there are changes that are required to the supplied YAML, replacing anything inside `{ }` with a corresponding real value. + +## πŸ’‘ Accessing and Using the App + +Once the two YAMLs have been applied: + +- Check the service is up and running with `kubectl get svc frontend`. +- Once it is there, go to the VM IP in your browser, e.g. `http://{VM_IP}:30037/` - the application should load and the Smilr frontend is shown. + +If you want to spend a few minutes using the app, you can go to the "Admin" page, add a new event, the details don't matter but make the date range to include the current date. +And try out the feedback view and reports. Or simply be happy the app is functional and move on. + +## πŸ–ΌοΈ Cluster & Architecture Diagram + +The resources deployed into the cluster & in Azure at this stage can be visualized as follows: + +![architecture diagram](./diagram.png) + +Here we can see our two `NodePort` services, each exposed on different ports of the external VM IP. + +## Navigation + +[Return to Main Index 🏠](../../readme.md) +[Previous Section βͺ](../05-network-basics/readme.md) β€– [Next Section ⏩](../07-improvements/readme.md) diff --git a/kube-developper-workshop/k3s/07-improvements/data-api-deployment.yaml b/kube-developper-workshop/k3s/07-improvements/data-api-deployment.yaml new file mode 100644 index 00000000..b8baa6c9 --- /dev/null +++ b/kube-developper-workshop/k3s/07-improvements/data-api-deployment.yaml @@ -0,0 +1,46 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: data-api + +spec: + replicas: 2 + selector: + matchLabels: + app: data-api + template: + metadata: + labels: + app: data-api + spec: + containers: + - name: data-api-container + + image: {ACR_NAME}.azurecr.io/smilr/data-api:stable + imagePullPolicy: Always + + ports: + - containerPort: 4000 + + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + cpu: 100m + memory: 100Mi + + readinessProbe: + httpGet: + port: 4000 + path: /api/health + initialDelaySeconds: 0 + periodSeconds: 5 + + env: + - name: MONGO_CONNSTR + valueFrom: + secretKeyRef: + name: mongo-creds + key: connection-string \ No newline at end of file diff --git a/kube-developper-workshop/k3s/07-improvements/frontend-deployment.yaml b/kube-developper-workshop/k3s/07-improvements/frontend-deployment.yaml new file mode 100644 index 00000000..c060a79f --- /dev/null +++ b/kube-developper-workshop/k3s/07-improvements/frontend-deployment.yaml @@ -0,0 +1,43 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: frontend + +spec: + replicas: 1 + selector: + matchLabels: + app: frontend + template: + metadata: + labels: + app: frontend + spec: + containers: + - name: frontend-container + + image: {ACR_NAME}.azurecr.io/smilr/frontend:stable + imagePullPolicy: Always + + ports: + - containerPort: 3000 + + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + cpu: 100m + memory: 100Mi + + readinessProbe: + httpGet: + path: / + port: 3000 + initialDelaySeconds: 0 + periodSeconds: 5 + + env: + - name: API_ENDPOINT + value: http://{API_EXTERNAL_IP}/api diff --git a/kube-developper-workshop/k3s/07-improvements/mongo-deployment.yaml b/kube-developper-workshop/k3s/07-improvements/mongo-deployment.yaml new file mode 100644 index 00000000..8e9cf387 --- /dev/null +++ b/kube-developper-workshop/k3s/07-improvements/mongo-deployment.yaml @@ -0,0 +1,48 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: mongodb + +spec: + replicas: 1 + selector: + matchLabels: + app: mongodb + template: + metadata: + labels: 
+ app: mongodb + spec: + containers: + - name: mongodb-container + + image: mongo:5.0 + imagePullPolicy: Always + + ports: + - containerPort: 27017 + + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + cpu: 500m + memory: 300Mi + + readinessProbe: + exec: + command: + - mongo + - --eval + - db.adminCommand('ping') + + env: + - name: MONGO_INITDB_ROOT_USERNAME + value: admin + - name: MONGO_INITDB_ROOT_PASSWORD + valueFrom: + secretKeyRef: + name: mongo-creds + key: admin-password diff --git a/kube-developper-workshop/k3s/07-improvements/readme.md b/kube-developper-workshop/k3s/07-improvements/readme.md new file mode 100644 index 00000000..f0b2831a --- /dev/null +++ b/kube-developper-workshop/k3s/07-improvements/readme.md @@ -0,0 +1,135 @@ +# ✨ Improving The Deployment + +We've cut more than a few corners so far in order to simplify things and introduce concepts one at a time, now is a good time to make some simple improvements. +We'll also pick up the pace a little with slightly less hand holding. + +## 🌑️ Resource Requests & Limits + +We have not given Kubernetes any information on the resources (CPU & memory) our applications require, but we can do this two ways: + +- **Resource requests**: Used by the Kubernetes scheduler to help assign _Pods_ to a node with sufficient resources. + This is only used when starting & scheduling pods, and not enforced after they start. +- **Resource limits**: _Pods_ will be prevented from using more resources than their assigned limits. + These limits are enforced and can result in a _Pod_ being terminated. It's highly recommended to set limits to prevent one workload from monopolizing cluster resources and starving other workloads. + +It's worth reading the [Kubernetes documentation on this topic](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/), especially on the units & specifiers used for memory and CPU. + +You can specify resources of these within the pod template inside the Deployment YAML. +The `resources` section needs to go at the same level as `image`, `ports`, etc. in the spec. + +```yaml +# Resources to set on frontend & data API deployment +resources: + requests: + cpu: 50m + memory: 50Mi + limits: + cpu: 100m + memory: 100Mi +``` + +```yaml +# Resources to set on MongoDB deployment +resources: + requests: + cpu: 100m + memory: 200Mi + limits: + cpu: 500m + memory: 300Mi +``` + +> πŸ“ NOTE: If you were using VS Code to edit your YAML and had the Kubernetes extension installed you might have noticed yellow warnings in the editor. +> The lack of resource limits was the cause of this. + +Add these sections to your deployment YAML files, and reapply to the cluster with `kubectl` as before and check the status and that the pods start up. + +## πŸ’“ Readiness & Liveness Probes + +Probes are Kubernetes' way of checking the health of your workloads. There are two main types of probe: + +- **Liveness probe**: Checks if the _Pod_ is alive, _Pods_ that fail this probe will be **_terminated and restarted_**. +- **Readiness probe**: Checks if the _Pod_ is ready to **_accept traffic_**, _Services_ only sends traffic to _Pods_ which are in a ready state. + +You can [read more about probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) at the kubernetes documentation. 
Also [this blog post](https://srcco.de/posts/kubernetes-liveness-probes-are-dangerous.html) has some excellent advice around probes, and covers some of the pitfalls of using them, particularly liveness probes. + +For this workshop we'll only set up a readiness probe, which is the most common type: + +```yaml +# Probe to add to the data API deployment in the same level as above +# Note: this container exposes a specific health endpoint +readinessProbe: + httpGet: + port: 4000 + path: /api/health + initialDelaySeconds: 0 + periodSeconds: 5 +``` + +```yaml +# Probe to add to the frontend deployment +readinessProbe: + httpGet: + path: / + port: 3000 + initialDelaySeconds: 0 + periodSeconds: 5 +``` + +Add these sections to your deployment YAML files, at the same level in the YAML as the resources block. +Reapply to the cluster with `kubectl` as before, and check the status and that the pods start up. + +If you run `kubectl get pods` immediately after the apply, you should see that the pods' status will be "Running", but will show "0/1" in the ready column, until the probe runs & passes for the first time. + +## 🔐 Secrets + +Remember how we had the MongoDB password visible in plain text in two of our deployment YAML manifests? +Blergh! 🤢 Now is the time to address that: we can create a Kubernetes _Secret_, which is a configuration resource used to store sensitive information. + +_Secrets_ can be created using a YAML file just like every resource in Kubernetes, but instead we'll use the `kubectl create` command to imperatively create the resource from the command line, as follows: + +```bash +kubectl create secret generic mongo-creds \ +--from-literal admin-password=supersecret \ +--from-literal connection-string=mongodb://admin:supersecret@database +``` + +_Secrets_ can contain multiple keys; here we add two: one for the password called `admin-password`, and one for the connection string called `connection-string`. Both reside in the new _Secret_ called `mongo-creds`. + +_Secrets_ can be used in a number of ways, but the easiest is to consume them as environment variables passed into your containers. +Update the deployment YAML for your data API and MongoDB, replacing the references to `MONGO_INITDB_ROOT_PASSWORD` and `MONGO_CONNSTR` as shown below: + +```yaml +# Place this in MongoDB deployment, replacing existing reference to MONGO_INITDB_ROOT_PASSWORD +- name: MONGO_INITDB_ROOT_PASSWORD + valueFrom: + secretKeyRef: + name: mongo-creds + key: admin-password +``` + +```yaml +# Place this in data API deployment, replacing existing reference to MONGO_CONNSTR +- name: MONGO_CONNSTR + valueFrom: + secretKeyRef: + name: mongo-creds + key: connection-string +``` + +> 📝 NOTE: _Secrets_ are encrypted at rest by AKS, however anyone with the relevant access to the cluster will be able to read the _Secrets_ (they are simply base-64 encoded) using kubectl or the Kubernetes API. +> If you want further encryption and isolation, a number of options are available, including Mozilla SOPS, Hashicorp Vault and Azure Key Vault. + +## 🔍 Reference Manifests + +If you get stuck and want working manifests you can refer to, they are available here: + +- [data-api-deployment.yaml](data-api-deployment.yaml) +- [frontend-deployment.yaml](frontend-deployment.yaml) +- [mongo-deployment.yaml](mongo-deployment.yaml) + - Bonus: This manifest shows how to add a probe using an executed command, rather than HTTP; use it if you wish, but it's optional.
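+ +As noted above, _Secrets_ are only base-64 encoded, so it's easy to check what you created. A quick sanity check of the `mongo-creds` _Secret_ (the decoded value should match the password we set earlier) might look like: + +```bash +# Show the keys held in the secret +kubectl describe secret mongo-creds + +# Decode the admin password to confirm it matches what was set +kubectl get secret mongo-creds -o jsonpath='{.data.admin-password}' | base64 -d; echo +```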
+ +## Navigation + +[Return to Main Index 🏠](../../readme.md) +[Previous Section βͺ](../06-frontend/readme.md) β€– [Next Section ⏩](../08-ingress/readme.md) diff --git a/kube-developper-workshop/k3s/08-ingress/data-api-service.yaml b/kube-developper-workshop/k3s/08-ingress/data-api-service.yaml new file mode 100644 index 00000000..7b14e241 --- /dev/null +++ b/kube-developper-workshop/k3s/08-ingress/data-api-service.yaml @@ -0,0 +1,14 @@ +kind: Service +apiVersion: v1 + +metadata: + name: data-api + +spec: + type: ClusterIP + selector: + app: data-api + ports: + - protocol: TCP + port: 80 + targetPort: 4000 diff --git a/kube-developper-workshop/k3s/08-ingress/diagram.png b/kube-developper-workshop/k3s/08-ingress/diagram.png new file mode 100644 index 00000000..897bbc97 Binary files /dev/null and b/kube-developper-workshop/k3s/08-ingress/diagram.png differ diff --git a/kube-developper-workshop/k3s/08-ingress/frontend-deployment.yaml b/kube-developper-workshop/k3s/08-ingress/frontend-deployment.yaml new file mode 100644 index 00000000..eb7e5584 --- /dev/null +++ b/kube-developper-workshop/k3s/08-ingress/frontend-deployment.yaml @@ -0,0 +1,43 @@ +kind: Deployment +apiVersion: apps/v1 + +metadata: + name: frontend + +spec: + replicas: 1 + selector: + matchLabels: + app: frontend + template: + metadata: + labels: + app: frontend + spec: + containers: + - name: frontend-container + + image: {ACR_NAME}.azurecr.io/smilr/frontend:stable + imagePullPolicy: Always + + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + cpu: 100m + memory: 100Mi + + readinessProbe: + httpGet: + path: / + port: 3000 + initialDelaySeconds: 0 + periodSeconds: 5 + + ports: + - containerPort: 3000 + + env: + - name: API_ENDPOINT + value: /api diff --git a/kube-developper-workshop/k3s/08-ingress/frontend-service.yaml b/kube-developper-workshop/k3s/08-ingress/frontend-service.yaml new file mode 100644 index 00000000..cb7e0dda --- /dev/null +++ b/kube-developper-workshop/k3s/08-ingress/frontend-service.yaml @@ -0,0 +1,14 @@ +kind: Service +apiVersion: v1 + +metadata: + name: frontend + +spec: + type: ClusterIP + selector: + app: frontend + ports: + - protocol: TCP + port: 80 + targetPort: 3000 diff --git a/kube-developper-workshop/k3s/08-ingress/ingress-controller.yaml b/kube-developper-workshop/k3s/08-ingress/ingress-controller.yaml new file mode 100644 index 00000000..2eec8636 --- /dev/null +++ b/kube-developper-workshop/k3s/08-ingress/ingress-controller.yaml @@ -0,0 +1,638 @@ +apiVersion: v1 +kind: Namespace +metadata: + labels: + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + name: ingress-nginx +--- +apiVersion: v1 +automountServiceAccountToken: true +kind: ServiceAccount +metadata: + labels: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx + namespace: ingress-nginx +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + labels: + app.kubernetes.io/component: admission-webhook + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx-admission + namespace: ingress-nginx +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + labels: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + 
app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx + namespace: ingress-nginx +rules: + - apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - apiGroups: + - "" + resources: + - configmaps + - pods + - secrets + - endpoints + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - services + verbs: + - get + - list + - watch + - apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - get + - list + - watch + - apiGroups: + - networking.k8s.io + resources: + - ingresses/status + verbs: + - update + - apiGroups: + - networking.k8s.io + resources: + - ingressclasses + verbs: + - get + - list + - watch + - apiGroups: + - "" + resourceNames: + - ingress-controller-leader + resources: + - configmaps + verbs: + - get + - update + - apiGroups: + - "" + resources: + - configmaps + verbs: + - create + - apiGroups: + - coordination.k8s.io + resourceNames: + - ingress-controller-leader + resources: + - leases + verbs: + - get + - update + - apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + labels: + app.kubernetes.io/component: admission-webhook + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx-admission + namespace: ingress-nginx +rules: + - apiGroups: + - "" + resources: + - secrets + verbs: + - get + - create +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + labels: + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx +rules: + - apiGroups: + - "" + resources: + - configmaps + - endpoints + - nodes + - pods + - secrets + - namespaces + verbs: + - list + - watch + - apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - list + - watch + - apiGroups: + - "" + resources: + - nodes + verbs: + - get + - apiGroups: + - "" + resources: + - services + verbs: + - get + - list + - watch + - apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - events + verbs: + - create + - patch + - apiGroups: + - networking.k8s.io + resources: + - ingresses/status + verbs: + - update + - apiGroups: + - networking.k8s.io + resources: + - ingressclasses + verbs: + - get + - list + - watch +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + labels: + app.kubernetes.io/component: admission-webhook + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx-admission +rules: + - apiGroups: + - admissionregistration.k8s.io + resources: + - validatingwebhookconfigurations + verbs: + - get + - update +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + labels: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx + namespace: ingress-nginx +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: ingress-nginx +subjects: + - 
kind: ServiceAccount + name: ingress-nginx + namespace: ingress-nginx +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + labels: + app.kubernetes.io/component: admission-webhook + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx-admission + namespace: ingress-nginx +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: ingress-nginx-admission +subjects: + - kind: ServiceAccount + name: ingress-nginx-admission + namespace: ingress-nginx +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: ingress-nginx +subjects: + - kind: ServiceAccount + name: ingress-nginx + namespace: ingress-nginx +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + app.kubernetes.io/component: admission-webhook + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx-admission +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: ingress-nginx-admission +subjects: + - kind: ServiceAccount + name: ingress-nginx-admission + namespace: ingress-nginx +--- +apiVersion: v1 +data: + allow-snippet-annotations: "true" +kind: ConfigMap +metadata: + labels: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx-controller + namespace: ingress-nginx +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx-controller + namespace: ingress-nginx +spec: + ports: + - appProtocol: http + name: http + port: 80 + protocol: TCP + targetPort: http + nodePort: 30036 + - appProtocol: https + name: https + port: 443 + protocol: TCP + targetPort: https + selector: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + type: NodePort +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx-controller-admission + namespace: ingress-nginx +spec: + ports: + - appProtocol: https + name: https-webhook + port: 443 + targetPort: webhook + selector: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + type: ClusterIP +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx-controller + namespace: ingress-nginx 
+spec: + minReadySeconds: 0 + revisionHistoryLimit: 10 + selector: + matchLabels: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + template: + metadata: + labels: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + spec: + containers: + - args: + - /nginx-ingress-controller + - --election-id=ingress-controller-leader + - --controller-class=k8s.io/ingress-nginx + - --ingress-class=nginx + - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller + - --validating-webhook=:8443 + - --validating-webhook-certificate=/usr/local/certificates/cert + - --validating-webhook-key=/usr/local/certificates/key + env: + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: LD_PRELOAD + value: /usr/local/lib/libmimalloc.so + image: registry.k8s.io/ingress-nginx/controller:v1.3.0@sha256:d1707ca76d3b044ab8a28277a2466a02100ee9f58a86af1535a3edf9323ea1b5 + imagePullPolicy: IfNotPresent + lifecycle: + preStop: + exec: + command: + - /wait-shutdown + livenessProbe: + failureThreshold: 5 + httpGet: + path: /healthz + port: 10254 + scheme: HTTP + initialDelaySeconds: 10 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + name: controller + ports: + - containerPort: 80 + name: http + protocol: TCP + - containerPort: 443 + name: https + protocol: TCP + - containerPort: 8443 + name: webhook + protocol: TCP + readinessProbe: + failureThreshold: 3 + httpGet: + path: /healthz + port: 10254 + scheme: HTTP + initialDelaySeconds: 10 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + requests: + cpu: 100m + memory: 90Mi + securityContext: + allowPrivilegeEscalation: true + capabilities: + add: + - NET_BIND_SERVICE + drop: + - ALL + runAsUser: 101 + volumeMounts: + - mountPath: /usr/local/certificates/ + name: webhook-cert + readOnly: true + dnsPolicy: ClusterFirst + nodeSelector: + kubernetes.io/os: linux + serviceAccountName: ingress-nginx + terminationGracePeriodSeconds: 300 + volumes: + - name: webhook-cert + secret: + secretName: ingress-nginx-admission +--- +apiVersion: batch/v1 +kind: Job +metadata: + labels: + app.kubernetes.io/component: admission-webhook + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx-admission-create + namespace: ingress-nginx +spec: + template: + metadata: + labels: + app.kubernetes.io/component: admission-webhook + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx-admission-create + spec: + containers: + - args: + - create + - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc + - --namespace=$(POD_NAMESPACE) + - --secret-name=ingress-nginx-admission + env: + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 + imagePullPolicy: IfNotPresent + name: create + securityContext: + allowPrivilegeEscalation: false + nodeSelector: + kubernetes.io/os: linux + restartPolicy: OnFailure + securityContext: + fsGroup: 2000 + runAsNonRoot: true + 
runAsUser: 2000 + serviceAccountName: ingress-nginx-admission +--- +apiVersion: batch/v1 +kind: Job +metadata: + labels: + app.kubernetes.io/component: admission-webhook + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx-admission-patch + namespace: ingress-nginx +spec: + template: + metadata: + labels: + app.kubernetes.io/component: admission-webhook + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx-admission-patch + spec: + containers: + - args: + - patch + - --webhook-name=ingress-nginx-admission + - --namespace=$(POD_NAMESPACE) + - --patch-mutating=false + - --secret-name=ingress-nginx-admission + - --patch-failure-policy=Fail + env: + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 + imagePullPolicy: IfNotPresent + name: patch + securityContext: + allowPrivilegeEscalation: false + nodeSelector: + kubernetes.io/os: linux + restartPolicy: OnFailure + securityContext: + fsGroup: 2000 + runAsNonRoot: true + runAsUser: 2000 + serviceAccountName: ingress-nginx-admission +--- +apiVersion: networking.k8s.io/v1 +kind: IngressClass +metadata: + labels: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: nginx +spec: + controller: k8s.io/ingress-nginx +--- +apiVersion: admissionregistration.k8s.io/v1 +kind: ValidatingWebhookConfiguration +metadata: + labels: + app.kubernetes.io/component: admission-webhook + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + app.kubernetes.io/part-of: ingress-nginx + app.kubernetes.io/version: 1.3.0 + name: ingress-nginx-admission +webhooks: + - admissionReviewVersions: + - v1 + clientConfig: + service: + name: ingress-nginx-controller-admission + namespace: ingress-nginx + path: /networking/v1/ingresses + failurePolicy: Fail + matchPolicy: Equivalent + name: validate.nginx.ingress.kubernetes.io + rules: + - apiGroups: + - networking.k8s.io + apiVersions: + - v1 + operations: + - CREATE + - UPDATE + resources: + - ingresses + sideEffects: None diff --git a/kube-developper-workshop/k3s/08-ingress/ingress.yaml b/kube-developper-workshop/k3s/08-ingress/ingress.yaml new file mode 100644 index 00000000..fad00288 --- /dev/null +++ b/kube-developper-workshop/k3s/08-ingress/ingress.yaml @@ -0,0 +1,29 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress + +metadata: + name: my-app + labels: + name: my-app + +spec: + host: + ingressClassName: nginx + rules: + - http: + paths: + - pathType: Prefix + path: "/" + backend: + service: + name: frontend + port: + number: 80 + + - pathType: Prefix + path: "/api" + backend: + service: + name: data-api + port: + number: 80 diff --git a/kube-developper-workshop/k3s/08-ingress/kuberntes-ingress.png b/kube-developper-workshop/k3s/08-ingress/kuberntes-ingress.png new file mode 100644 index 00000000..df2043fc Binary files /dev/null and b/kube-developper-workshop/k3s/08-ingress/kuberntes-ingress.png differ diff --git a/kube-developper-workshop/k3s/08-ingress/readme.md b/kube-developper-workshop/k3s/08-ingress/readme.md 
new file mode 100644 index 00000000..ebef7f68 --- /dev/null +++ b/kube-developper-workshop/k3s/08-ingress/readme.md @@ -0,0 +1,168 @@ +# 🌎 Ingress + +For this section we'll touch on a slightly more advanced topic: introducing an ingress controller to our cluster. +The ingress will let us further refine & improve the networking aspects of the app we've deployed. + +## 🗃️ Namespaces + +So far we've worked in a single _Namespace_ called `default`, but Kubernetes allows you to create additional _Namespaces_ in order to logically group and separate your resources. + +> 📝 NOTE: Namespaces do not provide a network boundary or isolation of workloads, and the underlying resources (Nodes) remain shared. +> There are ways to achieve these outcomes, but they are well beyond the scope of this workshop. + +Namespaces are a simple idea but they can trip you up: you will have to add `--namespace` or `-n` to any `kubectl` commands you want to use against a particular namespace. +The following alias can be helpful to set a namespace as the default for all `kubectl` commands, meaning you don't need to add `-n`; think of it like a Kubernetes equivalent of the `cd` command. + +```bash +# Note the space at the end +alias kubens='kubectl config set-context --current --namespace ' +``` + +and to add it to your `.bashrc`: + +```bash +# Note the space at the end +echo "alias kubens='kubectl config set-context --current --namespace '" >> ~/.bashrc +``` + +## 🔀 Reconfiguring The App With Ingress + +Now we can modify the app we've deployed to route through the new ingress, but a few simple changes are required first. +As the ingress controller will be routing all requests, the services in front of the deployments should be switched back to internal i.e. `ClusterIP`. + +- Edit both the data API & frontend **service** YAML manifests, change the service type to `ClusterIP` + and remove the `nodePort` field, then reapply with `kubectl apply`. +- Edit the frontend **deployment** YAML manifest, change the `API_ENDPOINT` environmental variable to + the same-origin URI `/api`, with no need for a scheme or host. + +Apply these three changes with `kubectl`; the app will now be temporarily unavailable. Note that if you have changed namespace with `kubens`, you should switch back to the **default** namespace before running the apply. + +## 🚀 Deploying The Ingress Controller + +An [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) provides a reliable and secure way to route HTTP and HTTPS traffic into your cluster and expose your applications from a single point of ingress; hence the name. + +![Ingress controller diagram showing routing of traffic to backend services](./kuberntes-ingress.png) + +- The controller is simply an instance of an HTTP reverse proxy running in one or more _Pods_ with a _Service_ in front of it. +- It implements the [Kubernetes controller pattern](https://kubernetes.io/docs/concepts/architecture/controller/#controller-pattern) + scanning for _Ingress_ resources to be created in the cluster; when it finds one, it reconfigures itself based on the rules and configuration within that _Ingress_, in order to route traffic. +- There are [MANY ingress controllers available](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#additional-controllers) + but we will use a very simple one, the [bare-metal ingress controller](https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters), which uses `NodePort` services instead of `LoadBalancer`.
+- Often TLS is terminated by the ingress controller, and sometimes other tasks such as JWT validation for authentication can be done at this level. + For the sake of this workshop no TLS & HTTPS will be used due to the dependencies it requires (such as DNS, cert management, etc.). + +To greatly simplify this, we'll be getting the yaml from the url within the [bare-metal ingress controller](https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters) page with the below command: + +```sh +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/baremetal/deploy.yaml -o ingress-controller.yaml +``` + +And now we just need to modify the above yaml to use a specific `nodePort` instead of a randomly assigned one. +In the `ingress-controller.yaml` find the `NodePort` service and in the `appProtocol:http` add `nodePort` with port `30036`. + +Here's a snippet from `ingress-controller.yaml` with the new `nodePort`: + +```yaml +--- +spec: + ports: + - appProtocol: http + name: http + port: 80 + protocol: TCP + targetPort: http + nodePort: 30036 # This is the newly added line + - appProtocol: https + name: https + port: 443 + protocol: TCP + targetPort: https + selector: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: ingress-nginx + app.kubernetes.io/name: ingress-nginx + type: NodePort +``` + +Apply the `ingress-controller.yaml` as usual with: + +```sh +kubectl apply -f ingress-controller.yaml +``` + +From the output of the apply, you may notice that our controller has been created in a new namespace: `namespace/ingress-nginx created`. + +Check the status of both the pods and services with `kubectl get svc,pods --namespace ingress-nginx`, +ensure the pods are running and the `ingress-nginx-controller` service has port `80:30036/TCP` assigned to it in the output. + +## πŸ”€ Configuring Ingress + +The next thing is to configure the ingress by [creating an _Ingress_ resource](https://kubernetes.io/docs/concepts/services-networking/ingress/). +This can be a fairly complex resource to set-up, but it boils down to a set of HTTP path mappings (routes) and which backend service should serve them. +Here is the completed manifest file `ingress.yaml`: + +
+<summary>Click here for the Ingress YAML</summary>
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+
+metadata:
+  name: my-app
+  labels:
+    name: my-app
+
+spec:
+  # This is important and required since Kubernetes 1.22
+  ingressClassName: nginx
+
+  # Note we don't set a `host` on the rule below, as we don't have DNS configured.
+  # Without a host, the rule will match ALL HTTP requests hitting the controller IP
+  rules:
+    - http:
+        paths:
+          # Routing for the frontend
+          - pathType: Prefix
+            path: "/"
+            backend:
+              service:
+                name: frontend
+                port:
+                  number: 80
+
+          # Routing for the API
+          - pathType: Prefix
+            path: "/api"
+            backend:
+              service:
+                name: data-api
+                port:
+                  number: 80
+```
+
+</details>
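+Before applying it, you can optionally ask `kubectl` to validate the manifest client-side without creating anything (this assumes you've saved the YAML above as `ingress.yaml`):
+
+```bash
+kubectl apply -f ingress.yaml --dry-run=client
+```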
+ +Apply the same as before with `kubectl`, validate the status with: + +```bash +kubectl get ingress +``` + +Now both applications should be running on: `http://{VM_IP}:30036` + +Visit the above url in your browser, if you check the "About" screen and click the "More Details" link it should take you to the API, which should now be served from the same IP as the frontend. + +## πŸ–ΌοΈ Cluster & Architecture Diagram + +We've reached the final state of the application deployment. +The resources deployed into the cluster and in Azure at this stage can be visualized as follows: + +![architecture diagram](./diagram.png) + +This is a slightly simplified version from previously, and the _Deployment_ objects are not shown. + +## Navigation + +[Return to Main Index 🏠](../../readme.md) +[Previous Section βͺ](../07-improvements/readme.md) β€– [Next Section ⏩](../09-extra-advanced/readme.md) diff --git a/kube-developper-workshop/k3s/09-extra-advanced/mongo-statefulset.yaml b/kube-developper-workshop/k3s/09-extra-advanced/mongo-statefulset.yaml new file mode 100644 index 00000000..3f463156 --- /dev/null +++ b/kube-developper-workshop/k3s/09-extra-advanced/mongo-statefulset.yaml @@ -0,0 +1,63 @@ +kind: StatefulSet +apiVersion: apps/v1 + +metadata: + name: mongodb + +spec: + serviceName: mongodb + replicas: 1 + selector: + matchLabels: + app: mongodb + template: + metadata: + labels: + app: mongodb + spec: + containers: + - name: mongodb-container + + image: mongo:5.0 + imagePullPolicy: Always + + ports: + - containerPort: 27017 + + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + cpu: 500m + memory: 300Mi + + readinessProbe: + exec: + command: + - mongo + - --eval + - db.adminCommand('ping') + + env: + - name: MONGO_INITDB_ROOT_USERNAME + value: admin + - name: MONGO_INITDB_ROOT_PASSWORD + valueFrom: + secretKeyRef: + name: mongo-creds + key: admin-password + + volumeMounts: + - name: mongo-data + mountPath: /data/db + + volumeClaimTemplates: + - metadata: + name: mongo-data + spec: + accessModes: ["ReadWriteOnce"] + storageClassName: default + resources: + requests: + storage: 500M diff --git a/kube-developper-workshop/k3s/09-extra-advanced/readme.md b/kube-developper-workshop/k3s/09-extra-advanced/readme.md new file mode 100644 index 00000000..6ed829a9 --- /dev/null +++ b/kube-developper-workshop/k3s/09-extra-advanced/readme.md @@ -0,0 +1,304 @@ +# 🀯 Scaling, Stateful Workloads & Helm + +This final section touches on some slightly more advanced and optional concepts we've skipped over. +They aren't required to get a basic app up & running, but generally come up in practice and real +world use of Kubernetes. + +Feel free to do as much or as little of this section as you wish. + +## πŸ“ˆ Scaling + +Scaling is a very common topic and is always required in some form to meet business demand, handle +peak load and maintain application performance. There's fundamentally two approaches: manually scaling +and using dynamic auto-scaling. Along side that there are two dimensions to consider: + +- **Horizontal scaling**: This is scaling the number of application _Pods_, within the limits of the + resources available in the cluster. +- **Vertical or cluster scaling**: This is scaling the number of _Nodes_ in the cluster, and therefore + the total resources available. We won't be looking at this here, but you can [read the docs](https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler) + if you want to know more. 
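+Horizontal scaling is always bounded by the CPU & memory your nodes can actually offer, so it can be useful to check what the node(s) have available before experimenting. One quick way to do this:
+
+```bash
+# Show the allocatable CPU, memory & pod capacity of each node
+kubectl describe nodes | grep -A 8 "Allocatable"
+```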
+
+Since the k3s cluster is only running a single node, you'll only be able to take advantage of
+**horizontal scaling**.
+
+Scaling stateless applications manually can be as simple as running the command to update the number
+of replicas in a _Deployment_, for example:
+
+```bash
+kubectl scale deployment data-api --replicas 4
+```
+
+Naturally this can also be done by updating the `replicas` field in the _Deployment_ manifest and
+applying it.
+
+πŸ§ͺ **Experiment**: Try scaling the data API to a large number of pods, e.g. 50 or 60, and see what
+happens. If some of the _Pods_ remain in a "Pending" state, can you find out why? What
+effect does changing the resource requests (for example, increasing the memory to 600Mi) have on this?
+
+## 🚦 Autoscaling
+
+Horizontal autoscaling is performed with the _Horizontal Pod Autoscaler_, which you can [read about here](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
+In essence it watches metrics emitted from the pods and other resources, and based on thresholds you
+set, it will modify the number of replicas dynamically.
+
+To set up a _Horizontal Pod Autoscaler_ you can give it a deployment and some simple targets, as
+follows:
+
+```bash
+kubectl autoscale deployment data-api --cpu-percent=50 --min=2 --max=10
+```
+
+<details>
+This command is equivalent to deploying this HorizontalPodAutoscaler resource + +```yaml +kind: HorizontalPodAutoscaler +apiVersion: autoscaling/v1 +metadata: + name: data-api +spec: + maxReplicas: 10 + minReplicas: 2 + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: data-api + targetCPUUtilizationPercentage: 50 +``` + +
+</details>
+
+Run this in a separate terminal window to watch the status and number of pods:
+
+```bash
+watch -n 3 kubectl get pods
+```
+
+Now generate some fake load by hitting the `/api/info` endpoint with lots of requests. We'll use a tool
+called `hey` to do this easily, running 20 concurrent requests for 3 minutes:
+
+```bash
+wget https://hey-release.s3.us-east-2.amazonaws.com/hey_linux_amd64
+chmod +x hey_linux_amd64
+./hey_linux_amd64 -z 180s -c 20 http://{VM_IP}:30036/api/info
+```
+
+After about 1~2 mins you should see new data-api pods being created. Once the `hey` command completes
+and the load stops, it will probably be around ~5 mins before the pods scale back down to their original
+number.
+
+## πŸ›’οΈ Improving The MongoDB Backend
+
+There are two major problems with our backend database:
+
+- There's only a single instance, i.e. one Pod, introducing a serious single point of failure.
+- The data held by MongoDB is ephemeral and if the Pod was terminated for any reason, we'd lose all
+  application data. Not very good!
+
+πŸ›‘ **IMPORTANT NOTE**: As a rule it's a bad idea and an "anti-pattern" to run stateful services in
+Kubernetes. Managing them is complex and time consuming. It's **strongly recommended** to use PaaS data
+offerings which reside outside your cluster and can be managed independently and easily. We will
+continue to keep MongoDB running in the cluster purely as a learning exercise.
+
+We can't simply horizontally scale out the MongoDB _Deployment_ with multiple _Pod_ replicas as it
+is stateful, i.e. it holds data and state. We'd create a ["split brain" situation](https://www.45drives.com/community/articles/what-is-split-brain/)
+as requests are routed to different Pods.
+
+Kubernetes does provide a [feature](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
+called _StatefulSets_ which greatly helps with the complexities of running multiple stateful services
+across a cluster.
+
+⚠️ HOWEVER! _StatefulSets_ are not a magic wand: any stateful workload such as a database (e.g. MongoDB)
+**still needs to be made aware** that it is running in multiple places and handle the data
+synchronization/replication itself. This can be set up for MongoDB, but is deemed too complex for this workshop.
+
+However we can address the issue of data persistence.
+
+πŸ§ͺ **Optional Experiment**: Try using the app and adding an event using the "Admin" screens, then
+run `kubectl delete pod {mongo-pod-name}`. You will see that Kubernetes immediately restarts it.
+However, when the app recovers and reconnects to the DB, you will see the data you created is gone.
+
+To resolve the data persistence issues, we need to do three things:
+
+- Change the MongoDB _Deployment_ to a _StatefulSet_ with a single replica.
+- Add a `volumeMount` to the container mapped to the `/data/db` filesystem, which is where the mongodb process stores its data.
+- Add a `volumeClaimTemplate` to dynamically create a _PersistentVolume_ and a _PersistentVolumeClaim_
+  for this _StatefulSet_. Use the "local-path" `StorageClass` and request a dedicated 500M volume with
+  the "ReadWriteOnce" access mode.
+
+The relationships between these can be explained with this diagram:
+
+![persistent volume claims](statefulset-local-storage.png)
+
+_PersistentVolumes_, _PersistentVolumeClaims_, _StorageClasses_, etc. are deep and complex topics in Kubernetes. If you want to begin reading about them, there are masses of information in [the docs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
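+A quick way to see which storage classes the cluster offers (on k3s the bundled "local-path" provisioner is normally present and set as the default) is:
+
+```bash
+kubectl get storageclass
+```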
+For now, however, it is suggested you simply take the YAML below and save it as `mongo-statefulset.yaml`:
+
+<details>
+Completed MongoDB StatefulSet YAML manifest + +```yaml +kind: StatefulSet +apiVersion: apps/v1 + +metadata: + name: mongodb + +spec: + serviceName: mongodb + replicas: 1 # Important we leave this as 1 + selector: + matchLabels: + app: mongodb + template: + metadata: + labels: + app: mongodb + spec: + containers: + - name: mongodb-container + + image: mongo:5.0 + imagePullPolicy: Always + + ports: + - containerPort: 27017 + + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + cpu: 500m + memory: 300Mi + + readinessProbe: + exec: + command: + - mongo + - --eval + - db.adminCommand('ping') + + env: + - name: MONGO_INITDB_ROOT_USERNAME + value: admin + - name: MONGO_INITDB_ROOT_PASSWORD + valueFrom: + secretKeyRef: + name: mongo-creds + key: admin-password + + volumeMounts: + - name: mongo-data + mountPath: /data/db + + volumeClaimTemplates: + - metadata: + name: mongo-data + spec: + accessModes: ["ReadWriteOnce"] + storageClassName: local-path + resources: + requests: + storage: 500M +``` + +
+</details>
+
+Remove the old deployment with `kubectl delete deployment mongodb` and apply the new
+`mongo-statefulset.yaml` file. Some comments:
+
+- When you run `kubectl get pods` you will see the pod name ends in `-0` rather than a random hash.
+- Running `kubectl get pv,pvc` you will see the new _PersistentVolume_ and _PersistentVolumeClaim_
+  that have been created. The _Pod_ might take a little while to start while the volume is created
+  and "bound" to the _Pod_.
+
+If you repeat the experiment above, you should see that the data is maintained after you delete the
+`mongodb-0` pod and it restarts.
+
+## ⛑️ Introduction to Helm
+
+[Helm is a CNCF project](https://helm.sh/) which can be used to greatly simplify deploying applications
+to Kubernetes, either applications written and developed in house, or external 3rd party software and
+tools.
+
+- Helm simplifies deployment into Kubernetes using _charts_; when a chart is deployed it is referred
+  to as a _release_.
+- A _chart_ consists of one or more Kubernetes YAML templates + supporting files.
+- Helm charts support dynamic parameters called _values_. Charts expose a set of default _values_
+  through their `values.yaml` file, and these _values_ can be set and overridden at _release_ time.
+- The use of _values_ is critical for automated deployments and CI/CD.
+- Charts can be referenced from the local filesystem, or in a remote repository called a _chart repository_.
+  They can also be kept in a container registry, but that is an advanced and experimental topic.
+- To use Helm, the Helm CLI tool `helm` is required.
+
+Chart repositories are added with the `helm repo add` command; for example, the extremely popular NGINX
+ingress controller can be installed from a public chart repository this way. For this workshop, however,
+we'll be fetching the Smilr chart directly rather than from a repository (more on that below).
+
+## πŸ’₯ Installing The App with Helm
+
+The Smilr app we have been working with comes with a Helm chart, which you can take a look at here:
+[Smilr Helm Chart](https://github.com/benc-uk/smilr/tree/master/kubernetes/helm/smilr).
+
+> ⚠️ WARNING: This helm chart will deploy the solution with load-balancers, not nodePorts, so it will
+> never be exposed outside of the K3s cluster.
+
+With this we can deploy the entire app (all the deployments, pods, services, ingress, etc.) with a single
+command. Naturally, if we had done this from the beginning there wouldn't have been much scope for learning!
+
+However, as this is the final section, now might be a good time to try it. Due to some limitations
+(mainly the lack of public DNS), only one deployment of the app can function at any given time. So
+you will need to remove what you have currently deployed, by running:
+
+```bash
+kubectl delete deploy,sts,svc,ingress --all
+```
+
+Fetch the chart and download it locally; this is needed because the chart isn't published in a Helm repository:
+
+```bash
+curl -sL https://github.com/benc-uk/smilr/releases/download/2.9.8/smilr-chart.tar.gz | tar -zx
+```
+
+Create a values file for your release:
+
+```yaml
+registryPrefix: {ACR_NAME}.azurecr.io/
+
+ingress:
+  className: nginx
+
+dataApi:
+  imageTag: stable
+  replicas: 2
+
+frontend:
+  imageTag: stable
+  replicas: 1
+
+mongodb:
+  enabled: true
+```
+
+Save it as `my-values.yaml`, then run a command to tell Helm to fetch any dependencies. In this case
+the Smilr chart uses the [Bitnami MongoDB chart](https://github.com/bitnami/charts/tree/master/bitnami/mongodb).
+To fetch/update this simply run: + +```bash +helm dependency update ./smilr +``` + +Finally we are ready to deploy the Smilr app using Helm, the release name can be anything you wish, +and you should point to the local directory where the chart has been downloaded and extracted: + +```bash +helm install myapp ./smilr --values my-values.yaml +``` + +Validate the deployment as before with `helm` and `kubectl` and check you can access the app in the +browser. + +## Navigation + +[Return to Main Index 🏠](../../readme.md) +[Previous Section βͺ](../09-extra-advanced/readme.md) β€– [Next Section ⏩](../10-gitops-flux/readme.md) diff --git a/kube-developper-workshop/k3s/09-extra-advanced/statefulset-local-storage.png b/kube-developper-workshop/k3s/09-extra-advanced/statefulset-local-storage.png new file mode 100644 index 00000000..c46702fe Binary files /dev/null and b/kube-developper-workshop/k3s/09-extra-advanced/statefulset-local-storage.png differ diff --git a/kube-developper-workshop/k3s/10-gitops-flux/base/deployment.yaml b/kube-developper-workshop/k3s/10-gitops-flux/base/deployment.yaml new file mode 100644 index 00000000..a90bb7e5 --- /dev/null +++ b/kube-developper-workshop/k3s/10-gitops-flux/base/deployment.yaml @@ -0,0 +1,22 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: webserver +spec: + selector: + matchLabels: + app: webserver + template: + metadata: + labels: + app: webserver + spec: + containers: + - name: webserver + image: nginx + resources: + limits: + memory: "128Mi" + cpu: "500m" + ports: + - containerPort: 80 diff --git a/kube-developper-workshop/k3s/10-gitops-flux/base/kustomization.yaml b/kube-developper-workshop/k3s/10-gitops-flux/base/kustomization.yaml new file mode 100644 index 00000000..9c2d28b0 --- /dev/null +++ b/kube-developper-workshop/k3s/10-gitops-flux/base/kustomization.yaml @@ -0,0 +1,4 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: + - deployment.yaml diff --git a/kube-developper-workshop/k3s/10-gitops-flux/gitops.png b/kube-developper-workshop/k3s/10-gitops-flux/gitops.png new file mode 100644 index 00000000..d919ba84 Binary files /dev/null and b/kube-developper-workshop/k3s/10-gitops-flux/gitops.png differ diff --git a/kube-developper-workshop/k3s/10-gitops-flux/overlay/kustomization.yaml b/kube-developper-workshop/k3s/10-gitops-flux/overlay/kustomization.yaml new file mode 100644 index 00000000..24a27535 --- /dev/null +++ b/kube-developper-workshop/k3s/10-gitops-flux/overlay/kustomization.yaml @@ -0,0 +1,18 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +# Reference to a base kustomization directory +resources: + - ../base + +# You can add suffixes and prefixes +nameSuffix: -dev + +# Modify the image name or tags +images: + - name: nginx + newTag: 1.21-alpine + +# Apply patches to override and set other values +patches: + - ./override.yaml diff --git a/kube-developper-workshop/k3s/10-gitops-flux/overlay/override.yaml b/kube-developper-workshop/k3s/10-gitops-flux/overlay/override.yaml new file mode 100644 index 00000000..2ddb2376 --- /dev/null +++ b/kube-developper-workshop/k3s/10-gitops-flux/overlay/override.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: webserver + +spec: + template: + spec: + containers: + - name: webserver + resources: + limits: + cpu: 330m + env: + - name: SOME_ENV_VAR + value: Hello! 
diff --git a/kube-developper-workshop/k3s/10-gitops-flux/readme.md b/kube-developper-workshop/k3s/10-gitops-flux/readme.md
new file mode 100644
index 00000000..d5baf2fa
--- /dev/null
+++ b/kube-developper-workshop/k3s/10-gitops-flux/readme.md
@@ -0,0 +1,336 @@
+# 🧬 GitOps & Flux
+
+This is an advanced optional section going into two topics: Kustomize, and GitOps using FluxCD.
+
+## πŸͺ“ Kustomize
+
+Kustomize is a tool for customizing Kubernetes configurations.
+
+Kustomize traverses Kubernetes manifests to add, remove or update configuration options. It is
+available both as a [standalone binary](https://kubectl.docs.kubernetes.io/installation/kustomize/)
+and as a native feature of kubectl. It can be thought of as similar to Helm, in that it provides a means
+to template and parameterize Kubernetes manifests.
+
+Kustomize works by looking for `kustomization.yaml` files and operating on their contents.
+
+[These slides](https://speakerdeck.com/spesnova/introduction-to-kustomize) provide a fairly good
+introduction.
+
+To demonstrate Kustomize in practice, we can carry out a simple exercise. Create a new directory
+called `base`.
+
+Place the following two files into it:
+
+<details>
+Contents of base/deployment.yaml + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: webserver +spec: + selector: + matchLabels: + app: webserver + template: + metadata: + labels: + app: webserver + spec: + containers: + - name: webserver + image: nginx + resources: + limits: + memory: "128Mi" + cpu: "500m" + ports: + - containerPort: 80 +``` + +
+ +
+Contents of base/kustomization.yaml + +```yaml +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: + - deployment.yaml +``` + +
+</details>
+
+Now run Kustomize via kubectl, giving it the path to the base directory as follows:
+
+```bash
+kubectl kustomize ./base
+```
+
+You will see the YAML printed to stdout. As we've not specified any changes in the `kustomization.yaml`,
+all we get is a 1:1 copy of the `deployment.yaml` file. This isn't very useful! 😬
+
+To better understand what Kustomize can do, create a second directory at the same level as `base`
+called `overlay`.
+
+<details>
+Contents of overlay/override.yaml + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: webserver + +spec: + template: + spec: + containers: + - name: webserver + resources: + limits: + cpu: 330m + env: + - name: SOME_ENV_VAR + value: Hello! +``` + +
+ +
+Contents of overlay/kustomization.yaml + +```yaml +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +# Reference to a base kustomization directory +resources: + - ../base + +# You can add suffixes and prefixes +nameSuffix: -dev + +# Modify the image name or tags +images: + - name: nginx + newTag: 1.21-alpine + +# Apply patches to override and set other values +patches: + - ./override.yaml +``` + +
+ +Some points to highlight: + +- The _Kustomization_ adds a suffix to the names of resources. +- Also the _Kustomization_ changes the image tag to reference a specific tag. +- The patch `override.yaml` file looks a little like a regular Kubernetes _Deployment_ but it only + contains the part that will be patched/overlayed onto the base resource. On its own it's not a + valid manifest. + - The patch file sets fields in the base _Deployment_ such as changing the resource limits and + adding an extra environmental variable. + +See the [reference docs](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/) for +all the options available in the `kustomization.yaml` file. + +The file & directory structure should look as follows: + +```text +β”œβ”€β”€ base +β”‚ β”œβ”€β”€ deployment.yaml +β”‚ └── kustomization.yaml +└── overlay + β”œβ”€β”€ kustomization.yaml + └── override.yaml +``` + +> πŸ“ NOTE: The names "base" and "overlay" are not special, often "environments" is used instead of +> "overlay", with sub-directories for each environment + +Now running: + +```bash +kubectl kustomize ./overlay +``` + +You will now see the overrides and modifications from the overlay applied to the base resources. With +the modified nginx image tag, different resource limits and additional env var. + +This could be applied to the cluster with the following command `kubectl -k ./overlay apply`, however +there is no need to do this. + +An interesting feature of kustomize you may want to check out is [variable substitution](https://fluxcd.io/flux/components/kustomize/kustomization/#variable-substitution). + +## GitOps & Flux + +GitOps is a methodology where you declaratively describe the entire desired state of your system using +git. This includes the apps, config, dashboards, monitoring and everything else. This means you can +use git branches and PR processes to enforce control of releases and provide traceability and +transparency. + +![gitops](./gitops.png) + +Kubernetes doesn't support this concept out of the box, it requires special controllers to be deployed +and manage this process. These controllers run inside the cluster, monitor git repositories for changes +and then make the required updates to the state of the cluster, through a process called reconciliation. + +We will use the popular project [FluxCD](https://fluxcd.io/) (also just called Flux or Flux v2), however +other projects are available such as ArgoCD and support from GitLab. + +As GitOps is a "pull" vs "push" approach, it also allows you to lock down your Kubernetes cluster, +and prevent developers and admins making direct changes with kubectl. + +> πŸ“ NOTE: GitOps is a methodology and an approach, it is not the name of a product. + +### πŸ’½ Install Flux into K3s VM + +You can install the [Flux](https://fluxcd.io/flux/installation/) CLI with: + +```sh + curl -s https://fluxcd.io/install.sh | sudo bash + # Flux auto complete to .bashrc + echo "command -v flux >/dev/null && . <(flux completion bash)" >> ~/.bashrc +. ~/.bashrc + +``` + +Before we configure anything GitOps needs a git repo to work against. We'll use a fork of this repo, +to set this up: + +- Got to the repo for this workshop +- Fork the repo to your own personal GitHub account, by clicking the 'Fork' button near the top right. + +Now to install and set up Flux in your cluster, run the following command, replacing the `{YOUR_GITHUB_USER}` +part with your GitHub username you used for the fork: + +```bash +# Install flux in the cluster, create flux pods, ect. 
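+# ('flux install' deploys the Flux controllers into the flux-system namespace.
+# The two 'flux create' commands below then register your fork as a Git source
+# and point a Flux Kustomization at the gitops/apps path so it gets applied.)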
+flux install + +flux create source git kubeworkshop \ + --url="https://github.com/{YOUR_GITHUB_USER}/kube-workshop" \ + --branch=main \ + --interval=1m + +flux create kustomization apps \ + --path="gitops/apps" \ + --source=kubeworkshop \ + --prune=true \ + --interval=1m +``` + +Check the status of Flux with the following commands: + +```bash +kubectl get kustomizations -A + +flux get kustomization + +kubectl get gitrepo -A + +kubectl get pod -n flux-system +``` + +Good for troubleshooting: + +```sh +flux logs +kubectl get events -n flux-system +``` + +> More tips and tricks: [Flux Troubleshooting cheatsheet](https://fluxcd.io/docs/cheatsheets/troubleshooting/#getting-basic-information). + +You should also see a new namespace called "hello-world", check with `kubectl get ns` this has been +created by the `gitops/apps/hello-world.yaml` file in the repo and automatically applied by Flux. + +In addition, your cluster now has flux components installed, such as pods, which you can view with +`kubectl get pods -n flux-system`. + +### πŸš€ Deploying Resources + +Clone the kube-workshop repo you forked earlier and open the directory in VS Code or other editor. + +If you recall from the bootstrap command earlier we gave Flux a path within the repo to use and look +for configurations, which was `gitops/apps` directory. The contents of the whole of the `gitops` +directory is shown here. + +```text +gitops + β”œβ”€β”€ apps + β”‚ └── hello-world.yaml + β”œβ”€β”€ base + β”‚ β”œβ”€β”€ data-api + β”‚ β”‚ β”œβ”€β”€ deployment.yaml + β”‚ β”‚ β”œβ”€β”€ kustomization.yaml + β”‚ β”‚ └── service.yaml + β”‚ β”œβ”€β”€ frontend + β”‚ β”‚ β”œβ”€β”€ deployment.yaml + β”‚ β”‚ β”œβ”€β”€ ingress.yaml + β”‚ β”‚ β”œβ”€β”€ kustomization.yaml + β”‚ β”‚ └── service.yaml + β”‚ └── mongodb + β”‚ β”œβ”€β”€ kustomization.yaml + β”‚ └── mongo-statefulset.yaml + └── disabled-k3s + β”œβ”€β”€ mongodb + β”‚ β”œβ”€β”€ kustomization.yaml + β”‚ └── overrides.yaml + └── smilr + └── kustomization.yaml +``` + +The base directory provides us a library of Kustomization based resources we can use, but as it's +outside of the `gitops/apps` path they will not be picked up by Flux. + +⚠️ **STOP!** Before we proceed, ensure the `mongo-creds` _Secret_ from the previous sections is still +in the default namespace. If you have deleted it, hop back to [section 7](../07-improvements/readme.md) +and quickly create it again. It's just a single command. Creating _Secrets_ using the GitOps approach +is problematic, as they need to be committed into a code repo. Flux supports solutions to this, such +as using [SOPS](https://fluxcd.io/docs/guides/mozilla-sops/) and +[Sealed Secrets](https://fluxcd.io/docs/guides/sealed-secrets/). For an intro such as this workshop, +they require too much extra setup, so we will skip over them. + +First let's deploy MongoDB using Flux: + +- Copy the `monogodb/` directory from "disabled-k3s" to "apps". + - Note the `kustomization.yaml` in here is pointing at the base directory `../../base/mongodb` and + overlaying it. +- `git commit` these changes to the main branch and push up to GitHub. +- Wait for ~1 minute for Flux to rescan the git repo. +- Check for any errors with `kubectl get kustomizations -A`. +- Check the default namespace for the new MongoDB StatefulSet and Pod using + `kubectl get sts,pods -n default`. + +Next deploy the Smilr app: + +- Copy the `smilr/` directory from `disabled-k3s` to `apps`. 
+  - Note the `kustomization.yaml` in here is pointing at **several** base directories, for the app's
+    data-api and frontend.
+- Edit the ACR name in the `gitops/apps/smilr/kustomization.yaml` file.
+- `git commit` these changes to the main branch and push up to GitHub.
+- Wait for ~1 minute for Flux to rescan the git repo.
+- Check for any errors with `kubectl get kustomizations -A`.
+- Check the default namespace for the new resources using `kubectl get deploy,pods,ingress -n default`.
+
+In the `smilr` folder we're using [kustomize patching](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patches/#patch-using-path-json6902)
+to modify the deployments to work on our k3s clusters.
+
+If you encounter problems or want to force the reconciliation you can use the `flux` CLI, e.g.
+`flux reconcile source git kubeworkshop`.
+
+If we wanted to deploy this app across multiple environments or multiple times, we could create
+sub-directories under `apps/`, each containing different Kustomizations and modifying the deployment
+to suit that environment.
+
+πŸ§ͺ **Experiment**: Try deleting one of the deployments and watch it be brought back to life by the
+Flux reconciliation. You can speed up the recreation with `flux reconcile kustomization apps`.
+
+## Navigation
+
+[Return to Main Index 🏠](../../readme.md)
+[Previous Section βͺ](../09-extra-advanced/readme.md) β€– [Next Section ⏩](../11-cicd-actions/readme.md)
diff --git a/kube-developper-workshop/k3s/11-cicd-actions/readme.md b/kube-developper-workshop/k3s/11-cicd-actions/readme.md
new file mode 100644
index 00000000..cbf6d4ac
--- /dev/null
+++ b/kube-developper-workshop/k3s/11-cicd-actions/readme.md
@@ -0,0 +1,138 @@
+# πŸ‘· CI/CD with Kubernetes
+
+This is an optional section detailing how to set up a continuous integration (CI) and continuous
+deployment (CD) pipeline, which will deploy to Kubernetes using Helm.
+
+There are many CI/CD solutions available; we will use GitHub Actions, as it's easy to set up and most
+developers will already have GitHub accounts. It assumes familiarity with git and basic GitHub usage
+such as forking & cloning.
+
+> πŸ“ NOTE: This is not intended to be a full guide or tutorial on GitHub Actions, you would be better
+> off starting [here](https://docs.github.com/en/actions/learn-github-actions)
+> or [here](https://docs.microsoft.com/en-us/learn/paths/automate-workflow-github-actions/?source=learn)
+
+## πŸ”° Get Started with GitHub Actions
+
+We'll use a fork of this repo in order to set things up, but in principle you could also start with
+a new/empty repo on GitHub.
+
+- Go to the repo for this workshop: <https://github.com/benc-uk/kube-workshop>.
+- Fork the repo to your own personal GitHub account, by clicking the 'Fork' button near the top right.
+- Clone the forked repo from GitHub using git to your local machine.
+
+Inside the `.github/workflows` directory, create a new file called `build-release.yaml` and paste in
+the contents below.
+
+> πŸ“ NOTE: This is a special directory path used by GitHub Actions!
+ +```yaml +# Name of the workflow +name: CI Build & Release + +# Triggers for running +on: + workflow_dispatch: # This allows manually running from GitHub web UI + push: + branches: ["main"] # Standard CI trigger when main branch is pushed + +# One job for building the app +jobs: + buildJob: + name: "Build & push images" + runs-on: ubuntu-latest + steps: + # Checkout code from another repo on GitHub + - name: "Checkout app code repo" + uses: actions/checkout@v2 + with: + repository: benc-uk/smilr +``` + +The comments in the YAML should hopefully explain what is happening. But in summary this will run a +short single step job that just checks out the code of the Smilr app repo. The name and filename do +not reflect the current function, but the intent of what we are building towards. + +Now commit the changes and push to the main branch, yes this is not a typical way of working, but +adding a code review or PR process would merely distract from what we are doing. + +The best place to check the status is from the GitHub web site and in the 'Actions' within your forked +repo, e.g. `https://github.com/{your-github-user}/kube-workshop/actions` you should be able to look at the workflow run, the status plus output & other details. + +## ⌨️ Set Up GitHub CLI + +Install the GitHub CLI, this will make setting up the secrets required in the next part much more simple. All commands below assume you are running them from within the path of the cloned repo on your local machine. + +- On MacOS: [https://github.com/cli/cli#macos](https://github.com/cli/cli#macos) +- On Ubuntu/WSL: `curl -s https://raw.githubusercontent.com/benc-uk/tools-install/master/kubectl.sh | bash` + +Now login using the GitHub CLI, follow the authentication steps when prompted: + +```bash +gh auth login +``` + +Once the CLI is set up it, we can use it to create a [secret](https://docs.github.com/en/actions/security-guides/encrypted-secrets) +within your repo, called `ACR_PASSWORD`. We'll reference this secret in the next section. This combines +the Azure CLI and GitHub CLI into one neat way to get the credentials: + +```bash +gh secret set ACR_PASSWORD --body "$(az acr credential show --name $ACR_NAME --query "passwords[0].value" -o tsv)" +``` + +## πŸ“¦ Add CI Steps For Image Building + +The workflow, doesn't really do much, so let's update the workflow YAML to carry out a build and push +of the application container images. We can do this using the code we've checked out in the previous +workflow step. + +Add this as the YAML top level, e.g just under the `on:` section, change the `__YOUR_ACR_NAME__` string +to the name of the ACR you deployed previously (do not include the azurecr.io part). + +```yaml +env: + ACR_NAME: __YOUR_ACR_NAME__ + IMAGE_TAG: ${{ github.run_id }} +``` + +Add this section under the "Checkout app code repo" step in the job, it will require indenting to the +correct level: + +```yaml +- name: "Authenticate to access ACR" + uses: docker/login-action@master + with: + registry: ${{ env.ACR_NAME }}.azurecr.io + username: ${{ env.ACR_NAME }} + password: ${{ secrets.ACR_PASSWORD }} + +- name: "Build & Push: data API" + run: | + docker buildx build . -f node/data-api/Dockerfile \ + -t $ACR_NAME.azurecr.io/smilr/data-api:$IMAGE_TAG \ + -t $ACR_NAME.azurecr.io/smilr/data-api:latest + docker push $ACR_NAME.azurecr.io/smilr/data-api:$IMAGE_TAG + +- name: "Build & Push: frontend" + run: | + docker buildx build . 
-f node/frontend/Dockerfile \ + -t $ACR_NAME.azurecr.io/smilr/frontend:$IMAGE_TAG \ + -t $ACR_NAME.azurecr.io/smilr/frontend:latest + docker push $ACR_NAME.azurecr.io/smilr/frontend:$IMAGE_TAG +``` + +Save the file, commit and push to main just as before. Then check the status from the GitHub UI and +'Actions' page of your forked repo. + +The workflow now does three important things: + +- Authenticate to "login" to the ACR. +- Build the **smilr/data-api** image and tag as `latest` and also the GitHub run ID, which is unique + to every run of the workflow. Then push these images to the ACR. +- Do exactly the same for the **smilr/frontend** image. + +The "Build & push images" job and the workflow should take around 2~3 minutes to complete. + +## Navigation + +[Return to Main Index 🏠](../../readme.md) +[Previous Section βͺ](../10-gitops-flux/readme.md) diff --git a/kube-developper-workshop/readme.md b/kube-developper-workshop/readme.md new file mode 100644 index 00000000..06060188 --- /dev/null +++ b/kube-developper-workshop/readme.md @@ -0,0 +1,119 @@ +# Kubernetes Developer Workshop + +This is a hands-on, technical workshop intended / hack to get comfortable working with Kubernetes, and +deploying and configuring applications. It should take roughly 6~8 hours to complete the main set of +sections, but this is very approximate. This workshop is intended partially as a companion to this +[Kubernetes Technical Primer](https://github.com/benc-uk/kube-primer) which can be read through, +referenced or used to get an initial grounding on the concepts. + +This workshop is very much designed for software engineers & developers with little or zero Kubernetes +experience, but wish to get hands on and learn how to deploy and manage applications. It is not +focused on the administration, network configuration & day-2 operations of Kubernetes itself, so some +aspects may not be relevant to dedicated platform/infrastructure engineers. + +> πŸ“ NOTE: if you've never used Kubernetes before, it is recommended to read the 'Introduction To Kubernetes' section in [Kubernetes Technical Primer PDF]() + +The application used will be one that has already been written and built, so no application code will +need to be written. + +If you get stuck, the [GitHub source repo for this workshop](https://github.com/benc-uk/kube-workshop) +contains example code, and working files for most of the sections. + +To start with the workshop, first you need to choose which path you'd like to follow, either using AKS or hosting Kubernetes yourself in a VM. If you are unsure you should pick AKS. + +## Path 1: Azure Kubernetes Service (AKS) + +In this path you'll be using AKS to learn how to work with Kubernetes running as a managed service in Azure. + +> πŸ“ NOTE: This section assumes a relative degree of comfort in using Azure for sections 2 and 3. + +Sections / modules: + +- [βš’οΈ Workshop Pre Requisites](00-pre-reqs/readme.md) - Covering the pre set up and tools that will be + needed. +- [🚦 Deploying Kubernetes](01-cluster/readme.md) - Deploying AKS, setting up kubectl and accessing + the cluster. +- [πŸ“¦ Container Registry & Images](02-container-registry/readme.md) - Deploying the registry and importing + images. +- [❇️ Overview Of The Application](03-the-application/readme.md) - Details of the application to be + deployed. +- [πŸš€ Deploying The Backend](04-deployment/readme.md) - Laying down the first two components and + introduction to Deployments and Pods. 
+- [🌐 Basic Networking](05-network-basics/readme.md) - Introducing Services to provide network access. +- [πŸ’» Adding The Frontend](06-frontend/readme.md) - Deploying the frontend to the app and wiring it + up. +- [✨ Improving The Deployment](07-improvements/readme.md) - Recommended practices; resource limits, + probes and secrets. +- [🌎 Helm & Ingress](08-helm-ingress/readme.md) - Finalizing the application architecture using ingress. + +### 🍡 AKS Optional Sections + +These can be considered bonus sections, and are entirely optional. It is not expected that all these sections would be attempted, and they do not run in order. + +- [🀯 Scaling, Stateful Workloads & Helm](09-extra-advanced/readme.md) - Scaling (manual & auto), + stateful workloads and persitent volumes, plus more Helm. +- [🧩 Kustomize & GitOps](10-gitops-flux/readme.md) - Introduction to Kustomize and deploying apps + through GitOps with Flux. +- [πŸ‘· CI/CD with Kubernetes](11-cicd-actions/readme.md) - How to manage CI/CD pipelines using Github + Actions. + +## Path 2: Single node K3S cluster on a VM + +In this path you'll learn to use Kubernetes as if you were running it on a on-premises machine, including configuring the computer with the required set up manually. + +Sections / modules: + +- [βš’οΈ Workshop Pre Requisites](k3s/00-pre-reqs/readme.md) - Covering the pre set up and tools that + will be needed. +- [🚦 Deploying Kubernetes](k3s/01-cluster/readme.md) - Deploying the VM, setting up kubectl and accessing + the cluster. +- [πŸ“¦ Container Registry & Images](k3s/02-container-registry/readme.md) - Deploying the registry and + importing images. +- [❇️ Overview Of The Application](03-the-application/readme.md) - Details of the application to be + deployed. +- [πŸš€ Deploying The Backend](04-deployment/readme.md) - Laying down the first two components and + introduction to Deployments and Pods. +- [🌐 Basic Networking](k3s/05-network-basics/readme.md) - Introducing Services to provide network + access. +- [πŸ’» Adding The Frontend](k3s/06-frontend/readme.md) - Deploying the frontend to the app and wiring + it up. +- [✨ Improving The Deployment](k3s/07-improvements/readme.md) - Recommended practices; resource + limits, probes and secrets. +- [🌎 Ingress](k3s/08-ingress/readme.md) - Finalizing the application architecture using ingress. + +All of the Kubernetes concepts & APIs explored and used are not specific to AKS, K3S or Azure. + +## 🍡 K3s Optional Sections + +These can be considered bonus sections, and are entirely optional. It is not expected that all these sections would be attempted, and they do not run in order. + +- [🀯 Scaling, Stateful Workloads & Helm](k3s/09-extra-advanced/readme.md) - Scaling (manual & auto), + stateful workloads and persitent volumes, plus more Helm. +- [🧩 Kustomize & GitOps](k3s/10-gitops-flux/readme.md) - Introduction to Kustomize and deploying apps + through GitOps with Flux. +- [πŸ‘· CI/CD with Kubernetes](/k3s/11-cicd-actions/readme.md) - How to manage CI/CD pipelines using Github + Actions. 
+ +### πŸ“– Extra Reading & Teach Yourself Exercises + +A very brief list of potential topics and Kubernetes features you may want to look at after finishing: + +### Kubernetes Features + +- [Init containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) +- [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/job/) +- [ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/) +- [Debugging Pods with shell access and exec](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/) +- Assigning Pods to Nodes with [selectors](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) and [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) +- [Cluster Autoscaler in AKS](https://docs.microsoft.com/azure/aks/cluster-autoscaler) + +### Other Projects + +- Enable the [Kubernetes dashboard](https://github.com/kubernetes/dashboard) +- Enabling TLS with certificates from Let's Encrypt using [Cert Manager](https://cert-manager.io/docs/) +- Observability + - With [Prometheus](https://artifacthub.io/packages/helm/prometheus-community/prometheus) & [Grafana](https://artifacthub.io/packages/helm/grafana/grafana) + - Using [AKS monitoring add-on](https://docs.microsoft.com/azure/azure-monitor/containers/container-insights-overview) +- Using [Dapr](https://dapr.io/) for building portable and reliable microservices +- Adding a service mesh such as [Linkerd](https://linkerd.io/) or [Open Service Mesh](https://docs.microsoft.com/azure/aks/open-service-mesh-about) +- Setting up the [Application Gateway Ingress Controller (AGIC)](https://docs.microsoft.com/azure/application-gateway/ingress-controller-overview)