Update readme (#57)
Signed-off-by: Waleed Malik <[email protected]>
ahmedwaleedmalik authored Aug 21, 2024
1 parent cc6d751 commit e7b71c8
Showing 3 changed files with 23 additions and 10 deletions.
29 changes: 19 additions & 10 deletions README.md
@@ -7,19 +7,29 @@

## Overview

KubeLB is a project by Kubermatic, it is a Kubernetes native tool, responsible for centrally managing load balancers for Kubernetes clusters across multi-cloud and on-premise environments.
KubeLB is a Kubernetes-native tool by Kubermatic that centrally manages Layer 4 and Layer 7 load balancing configurations for Kubernetes clusters across multi-cloud and on-premise environments.

### Motivation and Background

Kubernetes does not offer an implementation for load balancers itself; it relies on in-tree or out-of-tree cloud provider implementations to provision and manage them. This means that if you are not running on a supported cloud provider, your services of type `LoadBalancer` will never be allotted a load balancer IP address, which is an obstacle for bare-metal Kubernetes environments.
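
As a minimal illustration (the deployment name and the addresses shown below are hypothetical), a `LoadBalancer` Service created on such a cluster never receives an external IP:

```bash
# On a bare-metal cluster with no cloud provider and no load balancer implementation:
kubectl create deployment echo --image=nginx
kubectl expose deployment echo --port=80 --type=LoadBalancer

# The external IP stays pending indefinitely, because nothing is there to allocate it.
kubectl get service echo
# NAME   TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
# echo   LoadBalancer   10.96.45.10   <pending>     80:31522/TCP   2m
```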

There are solutions available like [MetalLB][8], [Cilium][9], etc. that solve this issue. However, these solutions are focused on a single cluster where you have to deploy the application in the same cluster where you want the load balancers. This is not ideal for multi-cluster environments since you have to configure load balancing for each cluster separately, which makes IP address management not trivial.
Solutions such as [MetalLB][2] and [Cilium][3] address this issue. However, they are focused on a single cluster: the load balancing solution has to be deployed in the same cluster where the load balancers are needed. This is not ideal for multi-cluster environments, since load balancing has to be configured for each cluster separately, which makes IP address management non-trivial.

KubeLB solves this problem by providing a centralized load balancer management solution for Kubernetes clusters across multi-cloud and on-premise environments.
For application load balancing, the situation is the same: an external component such as [nginx-ingress][4] or [Envoy Gateway][5] needs to be deployed in the cluster. To further secure traffic, additional tools are required for managing DNS, TLS certificates, Web Application Firewalls, etc.

KubeLB solves this problem by providing a centralized solution that manages the data plane for multiple Kubernetes clusters across multi-cloud and on-premise environments. This enables you to manage a fleet of Kubernetes clusters in a centralized way, ensuring security compliance, enforcing policies, and providing a consistent experience for developers.

## Architecture

Please see [docs/architecture.md](./docs/architecture.md) for an overview of the KubeLB architecture.
KubeLB follows the **hub and spoke** model, in which the "Management Cluster" acts as the hub and the "Tenant Clusters" act as the spokes. Information flows from the tenant clusters to the management cluster: the agent running in each tenant cluster watches for nodes, services, ingresses, Gateway API resources, etc. and propagates their configuration to the management cluster. The management cluster then deploys the load balancer and configures it according to the desired specification, using Envoy Proxy to route traffic to the appropriate endpoints, i.e. the node ports open on the nodes of the tenant cluster.
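
As a rough sketch of this flow (cluster contexts, namespace, and resource names below are illustrative, and it assumes the KubeLB agent is already installed in the tenant cluster and registered with the management cluster):

```bash
# Tenant cluster: expose a workload as usual; the KubeLB agent watches Services
# (as well as nodes, ingresses, and Gateway API objects) and reports them upstream.
kubectl --context tenant-1 expose deployment my-app --port=80 --type=LoadBalancer

# Management cluster: the desired state arrives as KubeLB custom resources in the
# tenant's namespace, from which the manager configures Envoy to forward traffic
# to the node ports of the tenant cluster.
kubectl --context management get loadbalancers.kubelb.k8c.io -n tenant-1
```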

For security and isolation, tenants have no access to any native Kubernetes resources in the management cluster. Tenants can only interact with the management cluster via the KubeLB CRDs, which ensures that they cannot exceed their access level and can only perform controlled operations in the management cluster.
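
The intent can be pictured with `kubectl auth can-i` using a tenant-scoped kubeconfig for the management cluster; this is a sketch of the isolation model rather than the exact RBAC that KubeLB ships, and the resource and namespace names are assumptions:

```bash
# Expected to be allowed: managing KubeLB resources inside the tenant's own namespace.
kubectl auth can-i create loadbalancers.kubelb.k8c.io --namespace tenant-1

# Expected to be denied: native resources and other namespaces in the management cluster.
kubectl auth can-i list pods --all-namespaces
kubectl auth can-i get services --namespace kube-system
```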

![KubeLB Architecture](docs/kubelb-high-level-architecture.png)

## Documentation

For detailed documentation, see the [KubeLB Docs][8].

## Installation

@@ -55,11 +65,10 @@ Feedback and discussion are available on [the mailing list][5].
See [the list of releases][3] to find out about feature changes.

[1]: https://github.com/kubermatic/kubelb/issues
[2]: https://github.com/kubermatic/kubelb/blob/main/CONTRIBUTING.md
[3]: https://github.com/kubermatic/kubelb/releases
[4]: https://github.com/kubermatic/kubelb/blob/main/CODE_OF_CONDUCT.md
[5]: https://groups.google.com/forum/#!forum/kubermatic-dev
[2]: https://metallb.universe.tf
[3]: https://cilium.io/use-cases/load-balancer/
[4]: https://kubernetes.github.io/ingress-nginx/
[5]: https://gateway.envoyproxy.io/
[6]: https://kubermatic.slack.com/messages/kubermatic
[7]: http://slack.kubermatic.io/
[8]: https://metallb.universe.tf
[9]: https://cilium.io/use-cases/load-balancer
[8]: https://docs.kubermatic.com/kubelb
Binary file added docs/kubelb-high-level-architecture.png
4 changes: 4 additions & 0 deletions hack/release-helm-charts.sh
@@ -19,6 +19,10 @@ set -euo pipefail
cd $(dirname $0)/..
source hack/lib.sh

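# Default these to empty so that `set -u` (from `set -euo pipefail` above) does not
# abort when the variables are unset, e.g. when the script runs outside of Prow CI.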
JOB_NAME=${JOB_NAME:-}
PROW_JOB_ID=${PROW_JOB_ID:-}
CHART_VERSION=${CHART_VERSION:-}

## When running outside of CI, it's expected that the user has already configured Vault
if [ -n "$JOB_NAME" ] || [ -n "$PROW_JOB_ID" ]; then
echodate "Getting secrets from Vault"
