I'm using AKS to run multiple load-test agents in parallel against an external application. The agents are plain pods with no Services associated with them, and the cluster has no Ingress controller.
I want the target application to be hit from multiple source IPs, so I created the AKS cluster with Terraform like this:
resource "azurerm_kubernetes_cluster" "cluster" {
  ...

  network_profile {
    network_plugin = "azure"

    load_balancer_profile {
      managed_outbound_ip_count = 3
    }
  }
}
With that configuration, the cluster's outbound load balancer does indeed have 3 public IPs associated with it, but only one of them ever appears in the target application's access logs.
Is there a way to force the use of all the available egress public IPs?
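To quantify this, I count the distinct source IPs seen in the access logs. Here is a minimal sketch of what I'm doing, assuming the target application writes common-log-format lines (the sample lines and IPs below are made up for illustration):

```python
import re
from collections import Counter

# Hypothetical access-log lines in common log format; the real format
# depends on the target application's web server.
sample_lines = [
    '20.50.10.1 - - [01/Jan/2024:12:00:00 +0000] "GET / HTTP/1.1" 200 512',
    '20.50.10.1 - - [01/Jan/2024:12:00:01 +0000] "GET / HTTP/1.1" 200 512',
    '20.50.10.1 - - [01/Jan/2024:12:00:02 +0000] "GET / HTTP/1.1" 200 512',
]

# Match an IPv4 address at the start of a line.
ip_pattern = re.compile(r'^(\d{1,3}(?:\.\d{1,3}){3})')

def count_source_ips(lines):
    """Return a Counter mapping each client IP to its request count."""
    counts = Counter()
    for line in lines:
        match = ip_pattern.match(line)
        if match:
            counts[match.group(1)] += 1
    return counts

print(count_source_ips(sample_lines))
```

In my case the result always contains a single distinct IP, even though three outbound IPs are provisioned.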