Initial stab at Episode 5 rook stuff by hand.
geerlingguy committed Dec 15, 2020
1 parent e225b03 commit a376092
Showing 10 changed files with 421 additions and 4 deletions.
4 changes: 2 additions & 2 deletions episode-04/k8s-manifests/drupal.yml
@@ -79,10 +79,10 @@ spec:
               name: drupal-files
           resources:
             limits:
-              cpu: '1'
+              cpu: '500m'
               memory: '512Mi'
             requests:
-              cpu: '500m'
+              cpu: '250m'
               memory: '256Mi'
       volumes:
         - name: drupal-settings
2 changes: 1 addition & 1 deletion episode-05/README.md
@@ -6,7 +6,7 @@ Video URL: https://www.youtube.com/watch?v=euZdS5b2siA

 Outline:
 
-- Setting up persistent files to scale Drupal's deployment
+- Setting up a shared filesystem to scale Drupal's deployment
 - Setting up Horizontal Pod Autoscaling
 - Options for High-Availability Databases
 - Pre-Christmas Q&A and Book Giveaway!
111 changes: 111 additions & 0 deletions episode-05/k8s-manifests/README.md
@@ -0,0 +1,111 @@
# Drupal Kubernetes Deployment Manifests

This directory contains the same deployments from Episode 4, but with some modifications to help scale Drupal.

You can apply these manifests to any Kubernetes cluster (e.g. `minikube start` for a local cluster, or a cloud environment like Linode Kubernetes Engine).

## SEE ALSO

- NFS: https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner
- Maybe just use EFS ;)

## Configuring a shared filesystem for scalability

We'll use Rook to run a Ceph cluster inside Kubernetes and expose a shared CephFS filesystem that Drupal's pods can mount.

First we'll deploy the Rook operator into our Kubernetes cluster; it will manage the Ceph cluster:

```
# Download the Rook codebase.
git clone --single-branch --branch release-1.5 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
# Deploy all the common Rook configuration and operator.
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# NOTE: On DigitalOcean, you might have to edit operator.yaml and change CSI_RBD_GRPC_METRICS_PORT to '9093'.
# Watch for the operator container to be running:
kubectl get pod -n rook-ceph -w
```

Next we'll deploy a Ceph cluster into Kubernetes:

```
# See https://rook.github.io/docs/rook/v1.5/ceph-quickstart.html#cluster-environments
# Create the Ceph cluster (for Linode/cloud environments).
kubectl create -f cluster-on-pvc.yaml
# NOTE: Need to change storageclass from 'gp2' to 'linode-block-storage' in TWO places — see cluster-on-pvc.patch
# Create the Ceph cluster (for test environments like Minikube).
# kubectl create -f cluster-test.yaml
# Watch for all the other pods to be running:
kubectl get pod -n rook-ceph -w
# NOTE: This process can take 5-10 minutes (has to provision block storage, format it, configure it for the CephFS cluster, etc.)
```
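As an extra sanity check (a suggestion, not part of the original steps), you can also inspect the `CephCluster` custom resource Rook manages; once provisioning finishes, its health should settle to `HEALTH_OK`:

```
# (Optional) Inspect the CephCluster resource and its reported health/phase:
kubectl -n rook-ceph get cephcluster
```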

Use Rook's toolbox to check on the cluster's health:

```
# Deploy the Interactive toolbox (in this repo):
kubectl create -f rook/toolbox.yml
# Wait for the toolbox to be deployed:
kubectl -n rook-ceph rollout status deploy/rook-ceph-tools
# Once deployed, log into the toolbox with:
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
# Check the status inside the toolbox:
ceph status
# You should see 'HEALTH_OK' in the output.
```
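Beyond `ceph status`, a couple of other standard Ceph commands are useful inside the toolbox for a closer look (a suggestion, not from the original outline):

```
# Show per-OSD state and usage:
ceph osd status
# Show overall and per-pool capacity:
ceph df
```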

Create a Ceph filesystem:

```
# Deploy a Ceph filesystem (in this repo):
kubectl create -f rook/filesystem.yml
# Wait for the pods to be Running:
kubectl -n rook-ceph get pod -l app=rook-ceph-mds
```

Create a StorageClass for Ceph:

```
# Create a CephFS StorageClass (in this repo):
kubectl create -f rook/storageclass.yml
```

## Configuring Drupal to use a CephFS PVC

TODO.

```
# Create a namespace for the Drupal site.
kubectl create namespace drupal
# Create the MySQL (MariaDB) and Drupal (Apache + PHP) Deployments.
kubectl apply -f mariadb.yml -f drupal.yml
# Watch the status of the deployment.
kubectl get deployments -n drupal -w
```
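Once the Deployments are up, it's worth confirming the shared-files PVC actually bound, and finding the NodePort where Drupal is exposed (a sketch using the resources defined in this repo's manifests):

```
# The PVC should show STATUS 'Bound' with the rook-cephfs StorageClass:
kubectl get pvc -n drupal drupal-files-pvc
# Find the NodePort mapped to port 80 on the drupal Service:
kubectl get svc -n drupal drupal
```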

TODO.

CURRENTLY:

- `kubectl describe pvc -n drupal drupal-files-pvc` never shows the claim getting provisioned
- `kubectl logs -n rook-ceph -f csi-cephfsplugin-provisioner-7dc78747bf-8gxhh --all-containers --max-log-requests 6` shows that after a timeout period, the provisioner just gets stuck in a retry loop :(
- See: https://github.com/rook/rook/issues/6183#issuecomment-745060072

TODO: More info here — https://www.digitalocean.com/community/tutorials/how-to-set-up-a-ceph-cluster-within-kubernetes-using-rook

## Configuring Horizontal Pod Autoscaling

TODO.
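Since this section is still TODO, here is one possible sketch: a HorizontalPodAutoscaler targeting the Drupal Deployment above. The `autoscaling/v2beta2` API version matches Kubernetes releases current at the time of this commit, and the replica bounds and CPU threshold are assumptions, not settings from this repo. It also requires metrics-server in the cluster, and only works because the Drupal Deployment sets CPU requests.

```
---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta2
metadata:
  name: drupal
  namespace: drupal
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: drupal
  minReplicas: 1
  maxReplicas: 4
  metrics:
    # Scale up when average CPU utilization (vs. requests) exceeds 50%.
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```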
108 changes: 108 additions & 0 deletions episode-05/k8s-manifests/drupal.yml
@@ -0,0 +1,108 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: drupal-config
  namespace: drupal
data:
  # Note: This is NOT secure. Don't use this in production!
  settings.php: |-
    <?php
    $databases['default']['default'] = [
      'database' => 'drupal',
      'username' => 'drupal',
      'password' => 'drupal',
      'prefix' => '',
      'host' => 'mariadb',
      'port' => '3306',
      'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
      'driver' => 'mysql',
    ];
    $settings['hash_salt'] = 'OTk4MTYzYWI4N2E2MGIxNjlmYmQ2MTA4';
    $settings['trusted_host_patterns'] = ['^.+$'];
    $settings['config_sync_directory'] = 'sites/default/files/config_OTk4MTYzY';

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: drupal-files-pvc
  namespace: drupal
spec:
  accessModes:
    - ReadWriteMany  # Was ReadWriteOnce before!
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs  # This is new!

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: drupal
  namespace: drupal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drupal
  template:
    metadata:
      labels:
        app: drupal
    spec:
      initContainers:
        - name: init-files
          image: 'drupal:9-apache'
          command: ['/bin/bash', '-c']
          args: ['cp -r /var/www/html/sites/default/files /data; chown www-data:www-data /data/ -R']
          volumeMounts:
            - mountPath: /data
              name: drupal-files
      containers:
        - name: drupal
          image: 'drupal:9-apache'
          ports:
            - containerPort: 80
          livenessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 30
          readinessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 10
          volumeMounts:
            - mountPath: /var/www/html/sites/default/
              name: drupal-settings
            - mountPath: /var/www/html/sites/default/files/
              name: drupal-files
          resources:
            limits:
              cpu: '500m'
              memory: '512Mi'
            requests:
              cpu: '250m'
              memory: '256Mi'
      volumes:
        - name: drupal-settings
          configMap:
            name: drupal-config
        - name: drupal-files
          persistentVolumeClaim:
            claimName: drupal-files-pvc

---
kind: Service
apiVersion: v1
metadata:
  name: drupal
  namespace: drupal
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: drupal
70 changes: 70 additions & 0 deletions episode-05/k8s-manifests/mariadb.yml
@@ -0,0 +1,70 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mariadb-pvc
  namespace: drupal
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mariadb
  namespace: drupal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - name: mariadb
          image: mariadb:10.5
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_DATABASE
              value: drupal
            - name: MYSQL_USER
              value: drupal
            - name: MYSQL_PASSWORD
              value: drupal
            - name: MYSQL_RANDOM_ROOT_PASSWORD
              value: 'yes'
          volumeMounts:
            - mountPath: /var/lib/mysql/
              name: database
          resources:
            limits:
              cpu: '2'
              memory: '512Mi'
            requests:
              cpu: '500m'
              memory: '256Mi'
      volumes:
        - name: database
          persistentVolumeClaim:
            claimName: mariadb-pvc

---
kind: Service
apiVersion: v1
metadata:
  name: mariadb
  namespace: drupal
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mariadb
22 changes: 22 additions & 0 deletions episode-05/k8s-manifests/rook/cluster-on-pvc.patch
@@ -0,0 +1,22 @@
diff --git a/cluster/examples/kubernetes/ceph/cluster-on-pvc.yaml b/cluster/examples/kubernetes/ceph/cluster-on-pvc.yaml
index 599d580e..36bc5cea 100644
--- a/cluster/examples/kubernetes/ceph/cluster-on-pvc.yaml
+++ b/cluster/examples/kubernetes/ceph/cluster-on-pvc.yaml
@@ -28,7 +28,7 @@ spec:
     # size appropriate for monitor data will be used.
     volumeClaimTemplate:
       spec:
-        storageClassName: gp2
+        storageClassName: linode-block-storage
         resources:
           requests:
             storage: 10Gi
@@ -125,7 +125,7 @@ spec:
           requests:
             storage: 10Gi
         # IMPORTANT: Change the storage class depending on your environment (e.g. local-storage, gp2)
-        storageClassName: gp2
+        storageClassName: linode-block-storage
         volumeMode: Block
         accessModes:
           - ReadWriteOnce
18 changes: 18 additions & 0 deletions episode-05/k8s-manifests/rook/filesystem.yml
@@ -0,0 +1,18 @@
---
# See: https://rook.github.io/docs/rook/v1.5/ceph-filesystem.html#create-the-filesystem
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
32 changes: 32 additions & 0 deletions episode-05/k8s-manifests/rook/storageclass.yml
@@ -0,0 +1,32 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  # clusterID is the namespace where the operator is deployed.
  clusterID: rook-ceph

  # CephFS filesystem name into which the volume shall be created.
  fsName: myfs

  # Ceph pool into which the volume shall be created.
  # Required for provisionVolume: "true"
  pool: myfs-data0

  # Root path of an existing CephFS volume.
  # Required for provisionVolume: "false"
  # rootPath: /absolute/path

  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph

reclaimPolicy: Delete