This is my evolving brain-dump on how to support Bridge to Kubernetes with Open Service Mesh (OSM). I assume it'll work similarly with other service meshes.
Sidecar Injection
OSM automatically injects an envoy sidecar into all pods if the namespace has OSM support enabled. The first issue with this is that both BridgeToKubernetes and OSM use the same name "envoy" for their containers. This leads to conflicts, with the result that the pods don't get created. The fix for this is fairly easy: just rename the container in BridgeToKubernetes from "envoy" to e.g. "bridge-envoy" in RoutingStateEstablisher.cs.
In addition, multiple envoys seem to conflict with each other. A solution is to add the --base-id parameter to the envoy invocation and pass a value different from "0". I assume using --use-dynamic-base-id instead of a hardcoded base ID is a better idea; I'll need to test that. Anyway, the code change is simply adding e.g. --base-id 3 to the args in RoutingStateEstablisher.cs.
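Put together, the sidecar container spec that BridgeToKubernetes generates could then look roughly like this (a sketch only; the image tag and config path are placeholders, the actual changes are the container name and the --base-id argument):

  containers:
  - name: bridge-envoy                # renamed from "envoy" to avoid clashing with the OSM sidecar
    image: envoyproxy/envoy:v1.22.2   # placeholder image/tag
    args:
    - --config-path
    - /etc/envoy/envoy.yaml           # placeholder config path
    - --base-id
    - "3"                             # any value other than "0"; avoids shared-memory conflicts with OSM's envoy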
Envoy Routing
The next issue was the envoy routing configuration. It turns out that with OSM you'll need to send the correct "Host" header for requests to make it to the destination service. Fixing this is again fairly easy:
Change the cluster type from strict_dns to logical_dns
Add auto_host_rewrite: true to the route
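An example might look like this (cluster name, service address, and port are placeholders; note that auto_host_rewrite requires a DNS-based cluster type, which is why the type change above is needed):

Cluster:

  clusters:
  - name: original_service_cluster      # placeholder name
    type: LOGICAL_DNS                   # changed from STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: original_service_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: myservice.mynamespace.svc.cluster.local   # placeholder service DNS name
                port_value: 80

Route:

  routes:
  - match:
      prefix: "/"
    route:
      cluster: original_service_cluster
      auto_host_rewrite: true           # rewrites the Host header to the cluster's DNS name so OSM accepts the request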
At this point, traffic that is routed to the original service works fine, all with injected OSM sidecars! One issue remains though.
Routing to Pod
When BridgeToKubernetes matches the given routing header, it tries to forward traffic directly to the routing pod (I guess there's a better name for that; it's the one that runs the lpkremoteagent image). It does that by connecting directly to the pod's IP. Unfortunately, this doesn't work with OSM. Instead, a service needs to be created that exposes this pod, and the envoy config needs to be adjusted once more:
Change the cluster type from static to logical_dns
Add auto_host_rewrite: true to the route
These are the same steps taken above to route the traffic to the original service, only now for the routing pod/service.
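For illustration, the pieces could look like this (all names, labels, and ports are placeholder assumptions; I don't know yet what BridgeToKubernetes should name the generated service). First a minimal Service exposing the routing pod:

  apiVersion: v1
  kind: Service
  metadata:
    name: myservice-routing            # placeholder name for the new service
    namespace: mynamespace             # placeholder
  spec:
    selector:
      app: lpkremoteagent              # placeholder label assumed to be set on the routing pod
    ports:
    - port: 80
      targetPort: 80

Then the adjusted envoy config, the same shape as the example above, but pointing at the new service instead of the pod IP:

Cluster:

  clusters:
  - name: routing_pod_cluster          # placeholder name
    type: LOGICAL_DNS                  # changed from STATIC (previously pointed at the pod IP)
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: routing_pod_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: myservice-routing.mynamespace.svc.cluster.local   # the new service's DNS name
                port_value: 80

Route:

  routes:
  - match:
      prefix: "/"                      # the real config matches on the routing header; a prefix match is shown for brevity
    route:
      cluster: routing_pod_cluster
      auto_host_rewrite: true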
It Works?
As far as I can tell, YES! Traffic to both the original service as well as the local machine works as expected!
Next Steps
Proof of Concept
Get Feedback from BridgeToKubernetes contributor
Adjust code to:
  Change the name of the container from "envoy" to "bridge-envoy"
  Change the Envoy configuration to set the Host header correctly
  Create a service for the lpkremoteagent pod
  Change the Envoy configuration to route to the service instead of the pod IP
Testing
I'd like to get feedback on my approach and potential implications from Microsoft. In my opinion, these changes will work in clusters with or without a service mesh. If we're lucky, they even work generically with all service meshes, not just Open Service Mesh (OSM). I currently don't see any reason why they wouldn't. This would greatly increase the quality of BridgeToKubernetes and allow it to be used in more clusters.