We have implemented a custom controller in Kubernetes for the `Pod` kind using the kubebuilder tool. This controller listens for Pod events in the `Queue` namespace, on pods carrying the `router` label.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
	Scheme:                  scheme,
	MetricsBindAddress:      metricsAddr,
	Port:                    9443,
	HealthProbeBindAddress:  probeAddr,
	LeaderElection:          enableLeaderElection,
	LeaderElectionID:        defaultLeaderElectionId,
	LeaderElectionNamespace: config.Namespace,
	Namespace:               config.Namespace,
	// LeaderElectionReleaseOnCancel defines whether the leader should step down
	// voluntarily when the Manager ends. This requires the binary to end
	// immediately when the Manager is stopped; otherwise this setting is unsafe.
	// Enabling it significantly speeds up voluntary leader transitions, as the
	// new leader doesn't have to wait the LeaseDuration first.
	//
	// In the default scaffold, the program ends immediately after the manager
	// stops, so it would be fine to enable this option. However, if you perform
	// or intend to perform any operation (such as cleanup) after the manager
	// stops, then enabling it might be unsafe.
	// LeaderElectionReleaseOnCancel: true,
})
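For context, controller-runtime also exposes the leader-election timing knobs on `ctrl.Options`, which bound how long a failover can take. The fragment below is a sketch only; the durations shown are the library defaults, not recommendations, and the other fields are elided.

```go
// Config fragment (not standalone): leader-election timings on ctrl.Options.
// Defaults in controller-runtime are 15s / 10s / 2s; shortening them narrows
// the failover window at the cost of more API-server traffic.
leaseDuration := 15 * time.Second
renewDeadline := 10 * time.Second
retryPeriod := 2 * time.Second

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
	LeaderElection:   true,
	LeaderElectionID: defaultLeaderElectionId,
	LeaseDuration:    &leaseDuration,
	RenewDeadline:    &renewDeadline,
	RetryPeriod:      &retryPeriod,
	// ... other fields as in the snippet above ...
})
```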
Two instances of this controller are now running. Based on the lease, only one of them processes events while the other sits idle. Consider the scenario where the controller holding the lease fails to renew it: it crashes (behavior validated), and the other controller then acquires the lease and starts processing events.

During the transition period (after the first instance crashed, but before the second became leader), any events that are generated get lost. How can I make sure events generated during this transition period are processed?