
I am testing Kafka broker failover with Kafka 2.1.0 (kafka_2.11-2.1.0).

I have 3 nodes:

node0:

broker.id=0 
listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092 

node1:

broker.id=1
listeners=PLAINTEXT://localhost:9093
advertised.listeners=PLAINTEXT://localhost:9093

node2:

broker.id=2
listeners=PLAINTEXT://localhost:9094
advertised.listeners=PLAINTEXT://localhost:9094

I created one topic with 3 partitions and 3 replicas, and one producer with this config:

Properties properties = new Properties();
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094");
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, MessageSerializer.class.getName());
properties.put(ProducerConfig.ACKS_CONFIG, "1");
properties.put(ProducerConfig.RETRIES_CONFIG, "3");
properties.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
properties.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, MyPartitioner.class.getName());

I created 3 consumers for the topic with this config:

Properties properties = new Properties();
properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094");
properties.put(ConsumerConfig.GROUP_ID_CONFIG, appName);
properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "3000");
properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName());
properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, MessageDeserializer.class.getName());
properties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, String.valueOf(pulSize));
properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000");
properties.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "3000");
properties.put(ConsumerConfig.RECONNECT_BACKOFF_MS_CONFIG, "1000");

Scenario 1:

When node0 is stopped, the consumers stop consuming messages, although the producer keeps producing them. When node0 is started again, everything works.

Scenario 2:

When node1 or node2 is stopped, the consumers keep consuming, the producer keeps producing, and everything works.

Why does failover not work in scenario 1?
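A likely culprit (as the comments suggest) is the internal `__consumer_offsets` topic, which stores committed offsets and determines each group's coordinator. If it was auto-created with a single replica that lives on node0, stopping node0 takes the group coordinator down: the consumers can no longer commit or fetch offsets and stall, while the producer (which only talks to the data topic's partition leaders, all replicated 3 times) keeps working. One way to check, assuming the stock Kafka CLI tools and ZooKeeper on its default local port:

```shell
# Inspect the internal offsets topic; compare ReplicationFactor and the
# Replicas column with broker 0 (the ZooKeeper address is an assumption)
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic __consumer_offsets
```

If it shows a single replica, setting `offsets.topic.replication.factor=3` on all brokers before the topic is first created (or reassigning the existing partitions to three replicas) removes the single point of failure.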

    Can you share the producer logic? Also have you checked the `__consumer_offsets` topic was also created with 3 replicas? – Mickael Maison Jan 20 '19 at 12:59
  • Note that using one machine to truly test failover isn't really a good test. You also need to account for shifts in network traffic to different hosts (e.g. use 3 VMs) – OneCricketeer Jan 20 '19 at 15:11
  • Possible duplicate of [Kafka Consumer does not receive data when one of the brokers is down](https://stackoverflow.com/questions/53771673/kafka-consumer-does-not-receive-data-when-one-of-the-brokers-is-down) – OneCricketeer Jan 20 '19 at 15:13
  • The `__consumer_offsets` topic was created with 1 replica – lord sadeghi Jan 26 '19 at 05:07
