I'm using AKS to run multiple load-test agents in parallel against an external application. The agents are pods with no Services associated with them, and the cluster has no Ingress controller.

I want the target application to be hit from multiple IPs, so I created the AKS cluster with Terraform this way:

resource "azurerm_kubernetes_cluster" "cluster" {
  ...
  ...
  ...
  network_plugin = "azure"
  network_profile {
    load_balancer_profile {
      managed_outbound_ip_count = 3
    }
  }
}

With that configuration, the cluster's outbound load balancer does indeed have 3 public IPs associated with it, but only one public IP appears in the target application's access logs.

Is there a way to force the use of all the available egress public IPs?

1 Answer

If you don't have Services deployed, then you aren't using your load balancer as a Kubernetes LoadBalancer, but merely as an SNAT proxy.
In that case the LB probably defaults to using its primary IP for SNAT rules.

The AKS docs explicitly mention Services when discussing the outbound pool capability.

Deploy the public service <...>. The Azure Load Balancer will be configured with a new public IP that will front this new service. Since the Azure Load Balancer can have multiple Frontend IPs, each new service deployed will get a new dedicated frontend IP to be uniquely accessed.
https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard#use-the-public-standard-load-balancer

Try creating Services (one per external IP), as advised in the docs.
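As a minimal sketch, staying in Terraform via the hashicorp/kubernetes provider: the resource name, Service name, selector labels, and ports below are hypothetical placeholders, and you would repeat one such resource per agent you want exposed on its own frontend IP.

# Hypothetical sketch: one LoadBalancer Service per desired frontend IP.
resource "kubernetes_service" "agent_1" {
  metadata {
    name = "agent-1"
  }

  spec {
    type = "LoadBalancer"

    # Must match the labels on the corresponding agent pod(s)
    selector = {
      app = "load-agent-1"
    }

    port {
      port        = 80
      target_port = 8080
    }
  }
}

Per the quoted doc, each Service deployed this way should get its own dedicated frontend IP on the Azure Load Balancer.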

Olesya Bolobova
  • You are correct. It seems that setting `managed_outbound_ip_count` does not help my use case. Thank you – blakelead Nov 24 '20 at 11:03