Merge pull request #180 from oakestra/179-outdated-readme-including-buildsetup-instructions

179 outdated readme including buildsetup instructions
giobart authored Jul 11, 2024
2 parents bf62b02 + e8bd748 commit 7eae759
Showing 10 changed files with 336 additions and 271 deletions.
85 changes: 45 additions & 40 deletions README.md
@@ -1,59 +1,64 @@
# Oakestra Networking Component
![net manager tests](https://github.com/oakestra/oakestra-net/actions/workflows/node_net_manager_tests.yml/badge.svg)
![net manager artifacts](https://github.com/oakestra/oakestra-net/actions/workflows/node-net-manager-artifacts.yml/badge.svg)
![root artifacts](https://github.com/oakestra/oakestra-net/actions/workflows/root_service_manager_image.yml/badge.svg)
![cluster artifacts](https://github.com/oakestra/oakestra-net/actions/workflows/cluster_service_manager_image.yml/badge.svg)

This is the networking component that enables interactions between the microservices deployed in Oakestra.
The networking component resembles the multi-layer architecture of Oakestra with the following components:
[![Stable](https://img.shields.io/badge/Latest%20Stable-%F0%9F%AA%97%20Accordion%20v0.4.301-green.svg)](https://github.com/oakestra/oakestra-net/tree/v0.4.301)
[![Github All Releases](https://img.shields.io/github/downloads/oakestra/oakestra-net/total.svg)]()

- Root service manager: registers the cluster service managers and generates the subnetworks for each worker and cluster belonging to the infrastructure.
- Cluster service manager: the direct interface towards the nodes; it resolves the addresses required by each node.
- NetManager: deployed on each node; it maintains the dynamic overlay network connecting the nodes.
# Oakestra Net 🕸️🌳🕸️
This component enables the communication between services distributed across multiple [Oakestra](https://oakestra.io) nodes and clusters.

This networking component creates a semantic addressing space where the IP addresses not only represent the final destination for a packet
This repository includes:

- **Net Manager**: The network daemon that needs to be installed on each Worker Node. It captures the services' traffic and creates the semantic overlay abstraction. See [Semantic Addressing](https://www.oakestra.io/docs/networking/semantic-addressing) for details.

- **Root/Cluster Service Managers**: Control plane components installed alongside Oakestra root and cluster orchestrators. They propagate and install the routes to the Net Manager components.

>This networking component creates a semantic addressing space where the IP addresses not only represent the final destination for a packet
but also enforce a balancing policy.

## Prerequisites
## How to install the Net Manager daemon

### From official build

- Linux OS with
- iptables
- ip util
- port 10010
Follow the official Oakestra [Get Started](https://github.com/oakestra/oakestra?tab=readme-ov-file#your-first-worker-node-🍃) guide to install the stable NetManager alongside an Oakestra worker node.

## Installation
### Build it on your own
Go inside the folder `node-net-manager/build` and run:
```
./build.sh
```

Download the NetManager package, install it using `./install.sh <architecture>` and then execute it using `sudo NetManager`
Then copy the binary corresponding to your architecture into the current folder:
```
cp bin/<architecture>-NetManager .
```
> <architecture> is either arm-7 or amd64
## Semantic addressing (ServiceIPs)
Finally, install it using
`./install.sh`

A semantic address enforces a balancing policy towards all the instances of a service.
## Run the NetManager daemon

- RR_IP (Currently implemented): IP address pointing every time to a random instance of a service.
- Closest_IP (Under implementation): IP address pointing to the closest instance of a service.
Configure the Network Manager by editing `/etc/netmanager/netcfg.json` as follows:

Example: Given a service A with 2 instances A.a and A.b
- A has 2 ServiceIPs, a RR_IP and a Closest_IP.
- A.a has an instance IP representing uniquely this instance.
- A.b has another instance IP representing uniquely this instance.
- If an instance of a service B uses the RR_IP of A, the traffic is balanced request after request toward A.a or A.b
```json
{
"NodePublicAddress": "<IP ADDRESS OF THIS DEVICE>",
"NodePublicPort": "<PORT REACHABLE FROM OUTSIDE, use 50103 as default>",
"ClusterUrl": "<IP Address of cluster orchestrator or 0.0.0.0 if deployed on the same machine>",
"ClusterMqttPort": "10003"
}
```
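
For reference, these fields map one-to-one onto the `netConfiguration` struct that the daemon loads at startup (see `NetManager.go` further down in this diff, which uses `gonfig.GetConf` with the same default path). A minimal sketch of that loading step, assuming the default config location:

```go
package main

import (
	"log"

	"github.com/tkanos/gonfig"
)

// netConfiguration mirrors the keys expected in /etc/netmanager/netcfg.json
// (same fields as the struct in NetManager.go).
type netConfiguration struct {
	NodePublicAddress string
	NodePublicPort    string
	ClusterUrl        string
	ClusterMqttPort   string
}

func main() {
	var cfg netConfiguration
	// gonfig matches the JSON keys to the struct field names, so the file
	// shown above populates the four fields directly.
	if err := gonfig.GetConf("/etc/netmanager/netcfg.json", &cfg); err != nil {
		log.Fatal(err)
	}
	log.Printf("cluster broker: %s:%s", cfg.ClusterUrl, cfg.ClusterMqttPort)
}
```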

The implementation operates at layer 4; therefore, as of now, all protocols based on top of TCP and UDP are supported.

## Subnetworks
Then start the NetManager
```
sudo NetManager
```
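
Once started, the daemon waits to be registered over its local REST interface (`POST /register`, default local port 6000, as documented in `NetManager.go` below). Registration is normally performed by the NodeEngine; the following Go sketch only illustrates the handshake, and `worker-1` is a placeholder client ID:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Port 6000 is the daemon's default local port (flag -p in NetManager.go);
	// "worker-1" is a made-up worker ID used purely for illustration.
	body := bytes.NewBufferString(`{"client_id": "worker-1"}`)
	resp, err := http.Post("http://localhost:6000/register", "application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("registration status:", resp.Status)
}
```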

An overlay that spans seamlessly across the platform is only possible if each node has an internal sub-network that can be used to allocate an address for each newly deployed service. When a new node is attached to Oakestra, a new subnetwork from the original addressing space is generated. All the services belonging to that node will have private namespace addresses belonging to that subnetwork.
As of now the network 10.16.0.0/12 represents the entire Oakestra platform. From this base address each cluster contains subnetworks with a netmask of 26 bits that are assigned to the nodes. Each worker can then assign namespace IP addresses using the last 6 bits of the address. A namespace IP is yet another address assigned to each instance, valid only within the node boundaries. The address range 10.30.0.0/16 is reserved for the ServiceIPs.
This network cut enables up to ≈ 15,360 worker nodes. Each worker can instantiate ≈ 62 containers, considering the addresses reserved internally for the networking components.
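
These figures follow directly from the prefix lengths quoted above. A small worked sketch of the arithmetic, assuming two internally reserved addresses per worker subnet:

```go
package main

import "fmt"

func main() {
	nodeSubnets := 1 << (26 - 12)      // /26 subnets inside 10.16.0.0/12 = 16384
	serviceIPSubnets := 1 << (26 - 16) // /26 blocks consumed by 10.30.0.0/16 = 1024
	workers := nodeSubnets - serviceIPSubnets

	addrsPerNode := 1 << (32 - 26) // 64 addresses per worker subnet
	containers := addrsPerNode - 2 // minus addresses reserved for the networking components (assumed: 2)

	fmt.Println("worker nodes:", workers)              // 15360
	fmt.Println("containers per worker:", containers)  // 62
}
```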

## Packet proxying

The component that decides which is the recipient worker node for each packet is the ProxyTUN. This component is implemented as an L4 proxy which analyzes the incoming traffic, changes the source and destination address, and forwards it to the overlay network.
A packet approaching the proxy has a namespace IP as the source address and an IP belonging to the subnetwork of the Service and Instance IPs as a destination.
The L4 packet also has a couple of source and destination ports used to maintain a connection and contact the correct application on both sides. The proxy’s job is to substitute the source and destination addresses according to the routing policy expressed by the destination address.
The proxy replaces the namespace address of the packet, which belongs to the local network of the node, with the InstanceIP of that service's instance.
This conversion enables the receiver to route the response back to the service instance deployed inside the sender’s node.
If the original destination address is an InstanceIP, the conversion is straightforward using the information available in the proxy’s cache. When the original destination address is a ServiceIP, the following four steps are executed:

- Fetch the routing policy
- Fetch the service instances
- Choose one instance using the logic associated with the routing policy
- Replace the ServiceIP with the namespace address of the resulting instance.

After the correct translation of source and destination addresses, the packet is encapsulated and sent to the tunnel only if the destination belongs to another node, or it is just sent back down to the bridge if the destination is in the same node.
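
To make the four steps concrete, here is a minimal Go sketch of the ServiceIP translation logic. It is not the actual ProxyTUN implementation: the types, routing table, and addresses are illustrative, and only the RR_IP policy (random instance selection, as described under Semantic addressing) is shown.

```go
package main

import (
	"fmt"
	"math/rand"
)

// Illustrative types only; they do not mirror the real proxy package.
type instance struct {
	Name        string
	NamespaceIP string
}

type serviceEntry struct {
	Policy    string // e.g. "RR_IP"
	Instances []instance
}

// resolveServiceIP sketches the four steps above for a ServiceIP destination:
// fetch the routing policy and instances, choose one instance according to
// the policy, and return its namespace address as the new destination.
func resolveServiceIP(table map[string]serviceEntry, serviceIP string) (string, error) {
	entry, ok := table[serviceIP] // steps 1-2: fetch policy and instances
	if !ok || len(entry.Instances) == 0 {
		return "", fmt.Errorf("no route for %s", serviceIP)
	}
	switch entry.Policy {
	case "RR_IP": // step 3: RR_IP picks a random instance of the service
		chosen := entry.Instances[rand.Intn(len(entry.Instances))]
		return chosen.NamespaceIP, nil // step 4: substitute the destination
	default:
		return "", fmt.Errorf("policy %s not implemented in this sketch", entry.Policy)
	}
}

func main() {
	// Hypothetical addresses: 10.30.0.5 lies in the ServiceIP range,
	// the namespace IPs lie inside 10.16.0.0/12.
	table := map[string]serviceEntry{
		"10.30.0.5": {Policy: "RR_IP", Instances: []instance{
			{"A.a", "10.19.1.2"}, {"A.b", "10.21.3.4"},
		}},
	}
	dst, _ := resolveServiceIP(table, "10.30.0.5")
	fmt.Println("rewritten destination:", dst)
}
```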
153 changes: 2 additions & 151 deletions node-net-manager/NetManager.go
@@ -1,158 +1,9 @@
package main

import (
"NetManager/env"
"NetManager/handlers"
"NetManager/logger"
"NetManager/mqtt"
"NetManager/network"
"NetManager/proxy"
"encoding/json"
"flag"
"fmt"
"io"
"log"
"net/http"

"github.com/gorilla/mux"
"github.com/tkanos/gonfig"
"NetManager/cmd"
)

type undeployRequest struct {
Servicename string `json:"serviceName"`
Instancenumber int `json:"instanceNumber"`
}

type registerRequest struct {
ClientID string `json:"client_id"`
}

type DeployResponse struct {
ServiceName string `json:"serviceName"`
NsAddress string `json:"nsAddress"`
}

type netConfiguration struct {
NodePublicAddress string
NodePublicPort string
ClusterUrl string
ClusterMqttPort string
}

func handleRequests(port int) {
netRouter := mux.NewRouter().StrictSlash(true)
netRouter.HandleFunc("/register", register).Methods("POST")
netRouter.HandleFunc("/docker/deploy", dockerDeploy).Methods("POST")

handlers.RegisterAllManagers(&Env, &WorkerID, Configuration.NodePublicAddress, Configuration.NodePublicPort, netRouter)
log.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", port), netRouter))
}

var (
Env env.Environment
Proxy proxy.GoProxyTunnel
WorkerID string
Configuration netConfiguration
)

/*
DEPRECATED
Endpoint: /docker/deploy
Usage: used to assign a network to a docker container. This method can be used only after the registration
Method: POST
Request Json:
{
containerId:string #name of the container or containerid
appName:string
instanceNumber:int
}
Response Json:
{
serviceName: string
nsAddress: string # address assigned to this container
}
*/
func dockerDeploy(writer http.ResponseWriter, request *http.Request) {
log.Println("Received HTTP request - /docker/deploy ")
writer.WriteHeader(299)
_, _ = writer.Write([]byte("DEPRECATED API"))
}

/*
Endpoint: /register
Usage: used to initialize the Network manager. The network manager must know his local subnetwork.
Method: POST
Request Json:
{
client_id:string # id of the worker node
}
Response: 200 or Failure code
*/
func register(writer http.ResponseWriter, request *http.Request) {
log.Println("Received HTTP request - /register ")

reqBody, _ := io.ReadAll(request.Body)
var requestStruct registerRequest
err := json.Unmarshal(reqBody, &requestStruct)
if err != nil {
writer.WriteHeader(http.StatusBadRequest)
}
log.Println(requestStruct)

// drop the request if the node is already initialized
if WorkerID != "" {
if WorkerID == requestStruct.ClientID {
log.Printf("Node already initialized")
writer.WriteHeader(http.StatusOK)
} else {
log.Printf("Attempting to re-initialize a node with a different worker ID")
writer.WriteHeader(http.StatusBadRequest)
}
return
}

WorkerID = requestStruct.ClientID

// initialize mqtt connection to the broker
mqtt.InitNetMqttClient(requestStruct.ClientID, Configuration.ClusterUrl, Configuration.ClusterMqttPort)

// initialize the proxy tunnel
Proxy = proxy.New()
Proxy.Listen()

// initialize the Env Manager
Env = *env.NewEnvironmentClusterConfigured(Proxy.HostTUNDeviceName)

Proxy.SetEnvironment(&Env)

writer.WriteHeader(http.StatusOK)
}

func main() {
cfgFile := flag.String("cfg", "/etc/netmanager/netcfg.json", "Set a cluster IP")
localPort := flag.Int("p", 6000, "Default local port of the NetManager")
debugMode := flag.Bool("D", false, "Debug mode, it enables debug-level logs")
flag.Parse()

err := gonfig.GetConf(*cfgFile, &Configuration)
if err != nil {
log.Fatal(err)
}

if *debugMode {
logger.SetDebugMode()
}

log.Print(Configuration)

network.IptableFlushAll()

log.Println("NetManager started. Waiting for registration.")
handleRequests(*localPort)
cmd.Execute()
}
48 changes: 20 additions & 28 deletions node-net-manager/README.md
@@ -15,26 +15,34 @@ The Network manager is divided in 4 main components:
.
├── build/
│ Description:
Binary executable compiled files and build scripts
Build and installation scripts
├── config/
│ Description:
│ Configuration files used by the environment manager and the proxyTunnel. These configuration files are used only
│ for testing purposes to create a local environment without the need of plugging the component into the local orchestrator.
│ Configuration files used by the environment manager and the proxyTunnel.
├── env/
│ Description:
│ The environment manager implementation resides here.
├── proxy/
│ Description:
│ This is where the ProxyTunnel implementation belongs
├── testEnvironment/
│ Description:
│ Executable files that can be used to test the Network Manager locally.
├── mqtt/
│ Description:
│ MQTT interface with the cluster service manager
├── install.sh
│ MQTT client implementation for cluster service manager route resolution and subnetwork management.
├── cmd/
│ Description:
│ CLI commands
├── handlers/
│ Description:
│ dispatching methods for container or unikernel network management
├── server/
│ Description:
│ installation script
│ HTTP REST server for incoming requests from the NodeEngine
├── logger/
│ Description:
│ implementation of the NetManager logging framework
├── utils/
│ Description:
│ Just utility code
└── NetManager.go
Description:
Entry point to start up the NetManager
@@ -43,8 +51,9 @@ The Network manager is divided in 4 main components:

# Installation

- download the latest release from [here](https://github.com/edgeIO/edgeionet/releases)
- run `./install.sh <architecture>` specifying amd64 or arm-7
- Navigate to the `build` directory and use `./build.sh`
- Move the binary for the required architecture to the current folder. E.g., `mv bin/amd64-NetManager NetManager` for amd64
- Then install it using `./install.sh`

# Run NetManager

@@ -107,20 +116,3 @@ Address where all the containers of this node belong. Each new container will ha

### Prohibited port numbers
Right now a deployed service can't use the same port as the proxy tunnel.


## Deployment
Note: most of the following must still be implemented.

### With binary files

Execute the binary files directly specifying the Cluster address. This will trigger the registration procedure.
`sudo ./bin/Networkmanager -cluster <ip-address>`

### With go commandline

* go 1.12+ required
* run the setup.sh to install the dependencies on each machine

Execute the Network manager with
`sudo go run NetManager.go -cluster <ip-address>`
3 changes: 0 additions & 3 deletions node-net-manager/build/install.sh
@@ -1,6 +1,3 @@
echo "Usage ./install.sh <architecture>"
echo "supported architectures: amd64, arm-7"

sudo cp ./NetManager /bin/NetManager
sudo mkdir /etc/netmanager
sudo cp tuncfg.json /etc/netmanager/tuncfg.json
