I have 3 GKE clusters in 3 different regions on Google Cloud Platform, and I would like to create a Kafka cluster that has one ZooKeeper node and one Kafka broker in every region (i.e., in each GKE cluster).
This set-up is intended to survive a regional failure (I know a whole GCP region going down is rare).
I am attempting this set-up using this Helm chart from the Kubernetes incubator.
I tried this setup manually on 3 GCP VMs following this guide and was able to do it without any issues.
However, setting up a Kafka cluster on Kubernetes seems complicated.
As we know, we have to provide the IPs (or resolvable hostnames) of all the ZooKeeper servers in each ZooKeeper configuration file, like below:
...
# list of servers
server.1=0.0.0.0:2888:3888
server.2=<IP of second server>:2888:3888
server.3=<IP of third server>:2888:3888
...
As I can see, the Helm chart's config-script.yaml file contains a script that generates the ZooKeeper configuration file for every deployment.
The part of the script that echoes the ZooKeeper servers looks like this:
...
for (( i=1; i<=$ZK_REPLICAS; i++ ))
do
echo "server.$i=$NAME-$((i-1)).$DOMAIN:$ZK_SERVER_PORT:$ZK_ELECTION_PORT" >> $ZK_CONFIG_FILE
done
...
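To make the problem concrete, here is the chart's loop run locally with ZK_REPLICAS=3 (a sketch; the release name, namespace, and ports are placeholder values taken from the chart's defaults, not my actual deployment):

```shell
# Reproducing the chart's loop with illustrative values
# (release name "release-name" and namespace "default" are placeholders):
ZK_REPLICAS=3
NAME="release-name-zookeeper"
DOMAIN="release-name-zookeeper-headless.default.svc.cluster.local"
ZK_SERVER_PORT=2888
ZK_ELECTION_PORT=3888
ZK_CONFIG_FILE=/tmp/zoo.cfg

: > "$ZK_CONFIG_FILE"   # start with an empty config file
for (( i=1; i<=ZK_REPLICAS; i++ ))
do
    echo "server.$i=$NAME-$((i-1)).$DOMAIN:$ZK_SERVER_PORT:$ZK_ELECTION_PORT" >> "$ZK_CONFIG_FILE"
done

# All three entries point at Pod DNS names under one headless service,
# which only resolve inside that single Kubernetes cluster:
cat "$ZK_CONFIG_FILE"
```

So even if I scale ZK_REPLICAS up to 3, all three servers are Pods of the same StatefulSet in the same GKE cluster, which is not what I need.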
As of now, with one replica (replica here meaning Kubernetes Pod replicas), the configuration this Helm chart creates contains only the following ZooKeeper server entry:
...
# "release-name" is the name of the Helm release
server.1=release-name-zookeeper-0.release-name-zookeeper-headless.default.svc.cluster.local:2888:3888
...
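In contrast, for the cross-region setup I am after, each ZooKeeper node's configuration would presumably need entries that are reachable from the other two GKE clusters, something like the following (the addresses are placeholders, not real endpoints):

```
# one externally reachable address per region/GKE cluster
server.1=<address of ZooKeeper in cluster 1>:2888:3888
server.2=<address of ZooKeeper in cluster 2>:2888:3888
server.3=<address of ZooKeeper in cluster 3>:2888:3888
```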
At this point I am stuck: how do I get the ZooKeeper servers from all three GKE clusters included in each configuration file?
How should I modify the script?