This repository contains all Helm charts for the cloud.
- git-crypt: Local and transparent encryption of secrets
- Helm: Package manager for Kubernetes and management tool for Helm charts
- Helmfile: Declarative management tool for Helm releases
- kubectl: The Swiss Army knife for cluster management
- helm-diff: Helpful diff plugin for Helm, installed via
  helm plugin install https://github.com/databus23/helm-diff
- k9s: Kubernetes CLI that helps manage the cluster
- npm: Package manager for JavaScript, used for the Git commit linter
If you have the rights to read secrets, please send your public GPG key to one of your admins:
# Optional: Create a new key
gpg --gen-key
# Find your key id
gpg --list-keys
# Export it
gpg --armor --export $YOUR_KEY_ID > public.asc
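Once an admin has added your key, you can decrypt the secrets locally with git-crypt; a minimal sketch, assuming the repository is already set up for git-crypt:
# Decrypt the secret files in this repository with your private GPG key
git-crypt unlock
# Show which files are encrypted and their current state
git-crypt status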
With kubectl you can interact with the cloud. When using the OTC, follow these steps:
- Log in at https://console.otc.t-systems.com/
- Select the project you want to access
- Select Cloud Container Engine and then the Cluster
- Click on Command Line Tool and follow the instructions
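To check that the downloaded config works, a quick smoke test could look like this:
# Should print the API server address and the cluster's nodes
kubectl cluster-info
kubectl get nodes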
Optional:
In order to switch between projects with kubectl, it is necessary to follow the steps above for each project and merge the received config files by hand. Since this is a bit tedious, a template of how the merged file should look is given below (the omitted data can be found in the downloaded files). Be aware that it is necessary to change the user name for each cluster; by default it is just user.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <Certificate data of first external cluster>
    server: https://<Some_External_IP>
  name: devExternalCluster
- cluster:
    certificate-authority-data: <Certificate data of first internal cluster>
    server: https://<Some_Internal_IP>
  name: devInternalCluster
- cluster: <Repeat for each cluster>
contexts:
- context:
    cluster: devExternalCluster
    user: <Username_First_Cluster>
  name: dev-external
- context:
    cluster: devInternalCluster
    user: <Username_First_Cluster>
  name: dev-internal
- context: <Repeat for each cluster>
current-context: dev-external
kind: Config
preferences: {}
users:
- name: <Username_First_Cluster>
  user:
    client-certificate-data: <Client Certificate Data for first user>
    client-key-data: <Client Key Data for first user>
- name: <Repeat for each user>
Alternate quick version, requires terraform privileges:
- in the terraform repository, in the directory of the desired project (e.g. stage), execute
terraform output cce_config
- if this does not produce any output:
- add the following lines to [projectname]/main.tf:
output "cce_config" {
  value     = module.cce.kubectl_config
  sensitive = true
}
- save main.tf
- in the project directory, execute
terraform refresh
terraform output cce_config
- copy the lines between the "EOT" markers from the CLI output into a new file ~/.kube/config if it is your first cluster, or into config_cluster_2 etc. for subsequent clusters (see the -raw shortcut after this list)
- to merge multiple cluster configuration files, follow this pattern:
cp ~/.kube/config ~/.kube/config.bak
KUBECONFIG=~/.kube/config:~/.kube/config_cluster_2 kubectl config view --flatten > /tmp/config
mv /tmp/config ~/.kube/config
- verify by:
kubectl config view
kubectl config use-context <some context from output above, e.g. zbw-cloud-infrastructure-dev>
k9s
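As an alternative to copying between the "EOT" markers by hand, newer Terraform versions (0.15+) can write the config directly; a sketch, with the target file name to be adjusted per cluster:
# Print the output without the heredoc markers and write it straight to a file
terraform output -raw cce_config > ~/.kube/config_cluster_2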
Please install the npm package git-commit-msg-linter via
npm install
Format your commit messages according to the guidelines the linter defines.
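For example, git-commit-msg-linter enforces conventional-commit style messages by default; the scope and wording here are illustrative:
git commit -m "feat: add resource limits to ingress chart"
git commit -m "fix(dev-nl): correct vault path for otc credentials"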
You should only apply changes to staging or production from the command line if you know what you are doing. The dev cluster can be used for quick 'n' dirty tests; everything else should be handled by the CI/CD pipeline.
Changes in charts should always come with a version upgrade in Chart.yaml.
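For example, a patch-level bump can be done by hand or scripted; chart name and version numbers here are illustrative:
# Check the current chart version
grep '^version:' mychart/Chart.yaml
# Bump it, e.g. from 1.2.3 to 1.2.4
sed -i 's/^version:.*/version: 1.2.4/' mychart/Chart.yaml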
Establish access to the Vault service (see the terraform repository for more information). Then:
source scripts/setSecrets.sh
getSecrets <STAGE_NAME>
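For example, for the dev-nl stage used below:
source scripts/setSecrets.sh
getSecrets dev-nl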
To see what changes will be applied, install Helm's diff plugin if you have not done so yet:
helm plugin install https://github.com/databus23/helm-diff
Then:
helmfile -e dev-nl -f helmfile-dev-nl.yaml diff --strip-trailing-cr
Before syncing, you may bump the version of the Helm chart to make rollbacks easier.
Then:
helmfile -e <STAGE> -f helmfile-<STAGE>.yaml sync
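For example, for dev-nl:
helmfile -e dev-nl -f helmfile-dev-nl.yaml sync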
For each deployment stage, a different JWT role should be created in Vault. This role has a policy declaring which paths in Vault can be read. When the GitLab pipeline runs, it has a CI JWT for the current project that can then be used to authenticate with Vault.
The GitLab pipeline command that requests the token:
export VAULT_TOKEN="$(vault write -field=token auth/jwt-cloud2/login role=$VAULT_ROLE_ID jwt=$VAULT_ID_TOKEN)"
Therefore, three things have to be established beforehand in Vault: the policy, the role, and the JWT auth configuration referencing GitLab.
The policy can be defined in the UI or command line:
vault policy write ci-dev-nl ci-dev-nl.hcl
where ci-dev-nl.hcl contains:
path "secret/data/otc-credentials/*" { capabilities = ["read"]}
path "secret/data/stages/dev-nl/*" { capabilities = ["read"]}
path "secret/data/helmcharts/*" { capabilities = ["read"]}
path "secret/data/gitlab/*" { capabilities = ["read"]}
The role can only be defined via the command line:
vault write auth/jwt-cloud2/role/ci-helm-dev-nl - <<EOF
{
  "role_type": "jwt",
  "policies": ["ci-dev-nl"],
  "token_explicit_max_ttl": 60,
  "user_claim": "user_email",
  "bound_claims": {
    "project_id": "33"
  }
}
EOF
project_id references the project ID in GitLab.
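To verify that the role was written as intended:
vault read auth/jwt-cloud2/role/ci-helm-dev-nl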
Configure the JWT auth method according to https://docs.gitlab.com/ee/ci/examples/authenticating-with-hashicorp-vault/
vault write auth/jwt-cloud2/config \
jwks_url="https://gitlab.lori-cloud.zbw.eu/oauth/discovery/keys" \
bound_issuer="https://gitlab.lori-cloud.zbw.eu"
or enter these two values without the surrounding quotation marks in the UI under Access / Auth Methods / jwt / configure
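Either way, you can check the resulting configuration with:
vault read auth/jwt-cloud2/config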