
I am facing the following error while enabling SASL authentication between the Kafka broker and Zookeeper.

[2017-04-18 15:54:10,476] DEBUG Size of client SASL token: 0 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-04-18 15:54:10,476] ERROR cnxn.saslServer is null: cnxn object did not initialize its saslServer properly. (org.apache.zookeeper.server.ZooKeeperServer)
[2017-04-18 15:54:10,478] ERROR SASL authentication failed using login context 'Client'. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2017-04-18 15:54:10,478] DEBUG Received event: WatchedEvent state:AuthFailed type:None path:null (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] INFO zookeeper state changed (AuthFailed) (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] DEBUG Leaving process event (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] DEBUG Closing ZkClient... (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2017-04-18 15:54:10,478] DEBUG Closing ZooKeeper connected to localhost:2181 (org.I0Itec.zkclient.ZkConnection)
[2017-04-18 15:54:10,478] DEBUG Close called on already closed client (org.apache.zookeeper.ZooKeeper)
[2017-04-18 15:54:10,478] DEBUG Closing ZkClient...done (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,480] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure
    at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:947)
    at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:924)
    at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1231)
    at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
    at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
    at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:79)
    at kafka.utils.ZkUtils$.apply(ZkUtils.scala:61)
    at kafka.server.KafkaServer.initZk(KafkaServer.scala:329)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
    at kafka.Kafka$.main(Kafka.scala:67)
    at kafka.Kafka.main(Kafka.scala)
[2017-04-18 15:54:10,482] INFO shutting down (kafka.server.KafkaServer)

The following configuration is in the JAAS file, which is passed via KAFKA_OPTS as a JVM parameter:

  KafkaServer {
       org.apache.kafka.common.security.plain.PlainLoginModule required
       username="admin"
       password="admin-secret"
       user_admin="admin-secret";
  };

  Client {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="admin"
      password="admin-secret";
  };
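
For context, this is roughly how the JAAS file gets passed to the broker JVM via KAFKA_OPTS (the path here is only illustrative):

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
$ bin/kafka-server-start.sh config/server.properties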

The Kafka broker's server.properties has the following extra fields set:

zookeeper.set.acl=true
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
ssl.client.auth=required
ssl.endpoint.identification.algorithm=HTTPS
ssl.keystore.location=path
ssl.keystore.password=anything
ssl.key.password=anything
ssl.truststore.location=path
ssl.truststore.password=anything

Zookeeper properties are as follows:

authProvider.1=org.apache.zookeeper.server.auth.DigestAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl

2 Answers


I found the issue by increasing the log level to DEBUG. Basically, follow the steps below. I don't use SSL, but you should be able to integrate it without any issue.
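
In case you want to reproduce the debugging step: raising the log level is just a change to the root logger in Kafka's config/log4j.properties (a sketch; the exact default appender list may differ by version):

log4j.rootLogger=DEBUG, stdout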

Following are my configuration files:

server.properties

security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
auto.create.topics.enable=false
broker.id=0
listeners=SASL_PLAINTEXT://localhost:9092
advertised.listeners=SASL_PLAINTEXT://localhost:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600

advertised.host.name=localhost
num.partitions=1
num.recovery.threads.per.data.dir=1
log.flush.interval.messages=30000000
log.flush.interval.ms=1800000
log.retention.minutes=30
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
delete.topic.enable=true
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
super.users=User:admin

zookeeper.properties

dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000

producer.properties

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
bootstrap.servers=localhost:9092
compression.type=none

consumer.properties

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group
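
Optional: since server.properties above enables SimpleAclAuthorizer with super.users=User:admin, you can also add explicit ACLs once everything is running, using the standard kafka-acls.sh tool. A sketch (the topic name is simply the one used in the client commands further below):

$ bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:admin --operation All --topic test-topic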

Now, the most important files for making your server start without any issue:

zookeeper_jaas.conf

Server {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret";
};

kafka_server_jaas.conf

KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret";
};

Client {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret";
};
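
A note on the user_admin entry: with PlainLoginModule, username/password are the credentials the broker itself uses (e.g. for the inter-broker connection), while each user_<name>="<password>" entry defines a principal that clients are allowed to authenticate as. Adding another client user is just one more entry; a sketch with a hypothetical user "alice":

KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret"
   user_alice="alice-secret";
};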

After doing all this configuration, open a first terminal window:

Terminal 1 (start Zookeeper server)

From kafka root directory

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/config/zookeeper_jaas.conf"
$ bin/zookeeper-server-start.sh config/zookeeper.properties
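
Optionally, you can sanity-check that Zookeeper is up and reachable using the shell bundled with the Kafka package (not required for the setup):

$ bin/zookeeper-shell.sh localhost:2181

and then run, for example, ls / at the prompt.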

Terminal 2 (start Kafka server)

From kafka root directory

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/config/kafka_server_jaas.conf"
$ bin/kafka-server-start.sh config/server.properties
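
Note that with auto.create.topics.enable=false in the server.properties above, the test topic used by the consumer and producer below has to be created once. A sketch (if Zookeeper rejects the connection, export the same KAFKA_OPTS as for the broker first):

$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --topic test-topic --partitions 1 --replication-factor 1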

[BEGIN UPDATE]

kafka_client_jaas.conf

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret";
};

Terminal 3 (start Kafka consumer)

On a client terminal, export the client JAAS config file and start the consumer:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/kafka_client_jaas.conf"
$ ./bin/kafka-console-consumer.sh --new-consumer --zookeeper localhost:2181 --topic test-topic --from-beginning --consumer.config=config/consumer.properties  --bootstrap-server=localhost:9092

Terminal 4 (start Kafka producer)

If you also want to produce, do this in another terminal window:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/kafka_client_jaas.conf"
$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic --producer.config=config/producer.properties

[END UPDATE]

  • Yeah!! It worked fine. Thanks a lot for your help, I have been struggling for the last 2 days. – sunder Apr 18 '17 at 14:59
  • Just curious to know the real cause of the failure. Was it that Zookeeper was expecting a Server {} configuration? – sunder Apr 18 '17 at 15:05
  • Exactly. It was the missing piece – Maximilien Belinga Apr 18 '17 at 15:08
  • Any idea if some special configuration is required for producer and consumer communication? I am facing Bootstrap broker localhost: disconnected (org.apache.kafka.clients.NetworkClient) for the producer. – sunder Apr 20 '17 at 10:45
  • Did you add `client jaas`? It's probably the missing piece in your stuff. Before producing or consuming, you need to export a `client jaas`. – Maximilien Belinga Apr 20 '17 at 11:16
  • Where exactly does this file have to be provided? Don't you think 'kafka_server.jaas' has a Client section, and exporting it via the java.security.auth.login.config JVM parameter would suffice? I am running the producer in terminal 2 of your answer, which already has the config file exported. – sunder Apr 20 '17 at 13:17
  • Correct me if I am wrong, but I felt from the producer config that it is connecting to the Kafka broker and not Zookeeper directly. So in that case, why would the producer need that config file? – sunder Apr 20 '17 at 13:19
  • Producers/consumers need to be authenticated before trying to produce/consume. The client jaas is used for this purpose. Let me update my post and you will see what I mean – Maximilien Belinga Apr 20 '17 at 15:45
  • You have a typo in your zookeeper_jaas.conf file. Missing a semicolon at the end of user_admin="admin-secret"; – user2687486 Aug 01 '17 at 20:23
  • typo kafka_server.jaas -> kafka_server_jaas.conf – prehistoricpenguin Nov 08 '18 at 10:51
  • Thanks for the note. Updated – Maximilien Belinga Nov 12 '18 at 11:00
  • I did follow this but I am getting an error: ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient) javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null. Any help? – Tushar H Jan 30 '19 at 07:25
  • @M.Situation The configuration you specified is for the Zookeeper bundled with the Kafka package; what should the config be for a standalone Zookeeper? – Tushar H Feb 15 '19 at 12:24
  • I think it needs some updates. I tried this and hit an error because Zookeeper does not support PlainLoginModule, it uses DigestLoginModule. So the change is to replace "org.apache.kafka.common.security.plain.PlainLoginModule required" with "org.apache.zookeeper.server.auth.DigestLoginModule required" in zookeeper_jaas.conf and in the Client section of kafka_server_jaas.conf. – Tushar H Feb 15 '19 at 13:50

You need to create a JAAS config file for Zookeeper and make Zookeeper use it.

Create a JAAS config file for Zookeeper with content like this:

Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_admin="admin-secret";
};

The user (admin) and password (admin-secret) must match the username and password that you have in the Client section of the Kafka JAAS config file.
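
For example, following the DigestLoginModule suggestion from the comments on the other answer, the matching Client section in the Kafka JAAS config file could look like this (a sketch; adjust the credentials to your own):

Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="admin"
    password="admin-secret";
};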

To make Zookeeper use the JAAS config file, pass the following JVM flag to Zookeeper pointing to the file created before.

-Djava.security.auth.login.config=/path/to/server/jaas/file.conf

If you are using the Zookeeper included with the Kafka package, you can launch Zookeeper like this, assuming that your Zookeeper JAAS config file is located at ./config/zookeeper_jaas.conf:

EXTRA_ARGS=-Djava.security.auth.login.config=./config/zookeeper_jaas.conf ./bin/zookeeper-server-start.sh ./config/zookeeper.properties 
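
If you are running a standalone Zookeeper distribution instead of the one bundled with Kafka (this came up in the comments on the other answer), the same flag is usually passed through the environment read by zkServer.sh; a sketch, paths illustrative:

$ export SERVER_JVMFLAGS="-Djava.security.auth.login.config=/path/to/zookeeper_jaas.conf"
$ bin/zkServer.sh start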