Some assumptions I'm making...
- This is intended to be run on a single-node cluster (i.e., master and worker on the same node)
- These scripts are intended to be executed on the single node itself (you'll need sudo access)
- This will only run on amd64 hardware
- This will not "just work" out of the box
- The database server for some applications (e.g., Gitea, Miniflux) is external to all of this and needs to be created
- DNS needs to be set up
- You've created a GitHub personal access token with the following permissions:
  - all under `repo`
  - all under `admin:public_key`

Start by cloning the repo, editing the .env file, and bootstrapping the cluster (installing K3s, Helm, etc...).
```shell
git clone https://github.com/loganmarchione/k8s_homelab.git
cd k8s_homelab/scripts
cp -p .env_sample .env
vim .env
# MAKE YOUR CHANGES IN THE .env FILE
./01-setupMasterNode.sh
```
At this point, you should be able to run the commands below. If so, K3s is up and running!
```shell
export KUBECONFIG=$HOME/.kube/config
kubectl get nodes -o wide
```
To access the cluster remotely, view your kubeconfig file below and copy/paste it to your local workstation.

```shell
cat $HOME/.kube/config
```
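As a sketch of one way to do this (the node address `192.168.1.10` and the destination filename are placeholders, not values from this repo):

```shell
# Hypothetical node address; replace with your node's IP or hostname
NODE=192.168.1.10

# Copy the kubeconfig from the node to the local workstation
scp user@"$NODE":.kube/config "$HOME/.kube/config-homelab"

# K3s writes the API server address as 127.0.0.1, so point it at the node instead
sed -i "s/127\.0\.0\.1/$NODE/" "$HOME/.kube/config-homelab"

export KUBECONFIG=$HOME/.kube/config-homelab
kubectl get nodes -o wide
```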
Now, create a series of secrets.
Keep in mind that these secrets will end up in your shell history and clipboard (you should clear both). Obviously, replace the example values (don't copy/paste them directly).
```shell
kubectl create secret generic cluster-secret-vars \
  --namespace=flux-system \
  --from-literal=SECRET_INTERNAL_DOMAIN_NAME=your.domain.com \
  [email protected] \
  --from-literal=SECRET_AWS_REGION=region-here-1 \
  --from-literal=SECRET_AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE \
  --from-literal=FOCALBOARD_DB_HOST=hostname \
  --from-literal=FOCALBOARD_DB_USER=admin \
  --from-literal=FOCALBOARD_DB_PASS=super_secret_password_goes_here \
  --from-literal=FOCALBOARD_DB_NAME=DBName
```

```shell
kubectl create secret generic cluster-user-auth \
  --namespace=flux-system \
  --from-literal=username=admin \
  --from-literal=password='bcrypt_password_hash_goes_here'
```
```shell
kubectl create secret generic traefik-secret-vars \
  --namespace=kube-system \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=admin \
  --from-literal=password=super_secret_password_goes_here
```

```shell
kubectl create secret generic letsencrypt-secret-vars \
  --namespace=cert-manager \
  --from-literal=SECRET_AWS_ACCESS_KEY=wJalrXUtnFEMIKK7MDENGKbPxRfiCYEXAMPLEKEY
```

```shell
kubectl create secret generic pgadmin-secret-vars \
  --namespace=pgadmin4 \
  --from-literal=PGADMIN_DEFAULT_EMAIL=email@example.com \
  --from-literal=PGADMIN_DEFAULT_PASSWORD=super_secret_password_goes_here
```

```shell
kubectl create secret generic miniflux-secret-vars \
  --namespace=miniflux \
  --from-literal=DATABASE_URL='postgres://db_user:db_pass@db_host:5432/db_name?sslmode=verify-full' \
  --from-literal=ADMIN_USERNAME=admin \
  --from-literal=ADMIN_PASSWORD=super_secret_password_goes_here
```

```shell
kubectl create secret generic webdav-secret-vars \
  --namespace=webdav \
  --from-literal=WEBDAV_USER=admin \
  --from-literal=WEBDAV_PASS=super_secret_password_goes_here
```

```shell
kubectl create secret generic joplin-secret-vars \
  --namespace=joplin \
  --from-literal=POSTGRES_CONNECTION_STRING='postgresql://db_user:db_pass@db_host:5432/db_name?sslmode=verify-full'
```

```shell
kubectl create secret generic jqplay-secret-vars \
  --namespace=tools \
  --from-literal=DATABASE_URL='postgres://db_user:db_pass@db_host:5432/db_name?sslmode=require'
```
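If you'd rather keep the passwords out of your shell history entirely, one sketch of an alternative (using bash's `read` to prompt without echoing; this is my suggestion, not part of the scripts, shown here against the webdav secret as an example):

```shell
# Prompt for the password; -s suppresses echo, so nothing lands in history
read -rs -p 'WebDAV password: ' WEBDAV_PASS

kubectl create secret generic webdav-secret-vars \
  --namespace=webdav \
  --from-literal=WEBDAV_USER=admin \
  --from-literal=WEBDAV_PASS="$WEBDAV_PASS"

# Clear the variable once the secret exists
unset WEBDAV_PASS
```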
Verify the secrets were created.
```shell
kubectl get secret --all-namespaces
```
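To spot-check a single value, decode one key (the secret and key names here come from the commands above; the base64 decode works for any Kubernetes secret):

```shell
# Secret values are stored base64-encoded; decode one to verify it
kubectl get secret cluster-secret-vars -n flux-system \
  -o jsonpath='{.data.SECRET_AWS_REGION}' | base64 -d
```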
Bootstrap Flux (this will install Flux and everything else).

```shell
./02-flux.sh
```
Flux bootstraps in the order below (based on the dependencies I've set up).

```mermaid
flowchart TD;
    A["flux-system (core of flux)"]-->B["namespaces"];
    B-->C["charts (3rd party charts)"];
    C-->D["crds (custom resource definitions)"];
    D-->E["infrastructure"];
    E-->F["apps"];
```
Wait a few seconds, then run the command below (it will take a few minutes for everything to show `True`).

```shell
kubectl get kustomization -n flux-system
```
If you need to give it a kick in the ass, use this.
```shell
flux reconcile source git flux-system
```
There is a custom storage class called `local-path-customized`, based on Rancher's local-path-provisioner, with the following additions:

```yaml
reclaimPolicy: Retain
allowVolumeExpansion: true
```
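Put together, the storage class looks roughly like this (a sketch based on the stock local-path class; fields other than the two additions above are assumptions, not copied from this repo):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-customized
provisioner: rancher.io/local-path    # same provisioner as the stock local-path class
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain                 # addition: keep volumes when the PVC is deleted
allowVolumeExpansion: true            # addition: allow PVCs to be resized
```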
You can view the storage class below.
```shell
kubectl get storageclass
```
The files are located in `/var/lib/rancher/k3s/storage`.

```shell
ls -la /var/lib/rancher/k3s/storage
```
After a few minutes, make sure that Let's Encrypt registered a `ClusterIssuer` and secret for both `production` and `staging`.

```shell
kubectl get clusterissuer -n cert-manager
kubectl get secret -n cert-manager
```
After a few minutes, you should see certificates appear (it will take up to five minutes for everything to show `True`).

```shell
kubectl get certificate --all-namespaces
```
If the certificates are not issuing, use the commands below to troubleshoot.

```shell
kubectl get certificaterequest --all-namespaces
kubectl get order --all-namespaces
```