
I am required by our client to run Apache Kafka in a Linux container on Windows Server 2019 with LCOW. I am using docker-compose to bring up two containers, and this is my docker-compose.yml file:

version: "3"

services:

  zookeeper:
    image: 'bitnami/zookeeper:latest'
    container_name: test-zoo

    ports:
      - '2181:2181'
    volumes:
      - type: bind
        source: C:\\test\\persist
        target: /bitnami
    environment: 
      - ALLOW_ANONYMOUS_LOGIN=yes

  kafka:
    image: 'bitnami/kafka:latest'
    container_name: test-kafka
    deploy:
      resources:
        limits:
          memory: 2G
    ports:
      - '9092:9092'
    volumes:
      - type: bind
        source: C:\\test\\persist
        target: /bitnami
    environment:
      - KAFKA_BROKER_ID=1311
      - KAFKA_CFG_RESERVED_BROKER_MAX_ID=1000000
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092    
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092    
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CFG_LOG_DIRS=/bitnami/kafka/logs 
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper

If I remove the configuration concerning volumes, the containers work seamlessly and I can communicate with them without issues. The caveat is that I need persistent storage to save the current state of the queues for both Kafka and ZooKeeper. That's the reason I created the volumes: to persist storage on a local drive on the Windows Server.

If I delete those local directories, they are recreated when bringing Docker up with docker-compose, so the configuration seems good. But there is obviously some issue when writing data from inside the container, because this is where things start to go wrong: if I bring the containers down, the Kafka container won't start up anymore until I delete the directories on the local disk again. They are almost empty, holding just a few small files, but not all the files from inside the container.

I found this solution here: https://stackoverflow.com/a/56252052/6705092, but it is meant for Docker Desktop, which I am not allowed to use - just the plain CLI and docker-compose. That answer basically says that you need to share these volumes inside Docker Desktop, and when I do that everything works well.

So the question is: is there a way to simulate the same action (Share Volumes) from Docker Desktop with plain docker-compose? Maybe some hidden, unknown configuration switch or something else?

EDIT:

As requested in the comments, this is the docker inspect output of the bitnami/kafka container under Docker Desktop with volume sharing enabled, where file persistence works well:

 "Mounts": [
        {
            "Type": "bind",
            "Source": "C:/dokit/persist",
            "Destination": "/bitnami",
            "Mode": "",
            "RW": true,
            "Propagation": "rprivate"
        }
    ]

I also learned somewhere that Docker Desktop on Windows uses FUSE as a file-sharing mechanism, but I can't replicate this on the Docker host.

Branko Radojevic
  • The volumes I use in this repo work fine across restarts - https://github.com/OneCricketeer/apache-kafka-connect-docker ... Why do you need to mount an actual Windows folder? What do you plan on doing with those files from Windows? – OneCricketeer Jul 26 '22 at 01:48
  • @OneCricketeer actually I don't need those volumes under Windows at all. I just need to have persistent volumes across restarts - nothing else. Will take a look at your repo, thanks. – Branko Radojevic Jul 26 '22 at 05:04

2 Answers


Not sure about LCOW, but try using a named Docker volume rather than a directory bind mount:

# zookeeper
    volumes:
      - 'zookeeper_data:/bitnami/zookeeper'
# kafka 
    volumes:
      - 'kafka_data:/bitnami/kafka'

volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local

This is copied from their compose file - https://github.com/bitnami/bitnami-docker-kafka/blob/master/docker-compose.yml
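
For reference, here is a sketch of how the compose file from the question might look with named volumes swapped in for the bind mounts. Everything else is carried over from the question unchanged, and this is untested under LCOW:

version: "3"

services:

  zookeeper:
    image: 'bitnami/zookeeper:latest'
    container_name: test-zoo
    ports:
      - '2181:2181'
    volumes:
      # named volume instead of a Windows bind mount
      - 'zookeeper_data:/bitnami/zookeeper'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes

  kafka:
    image: 'bitnami/kafka:latest'
    container_name: test-kafka
    deploy:
      resources:
        limits:
          memory: 2G
    ports:
      - '9092:9092'
    volumes:
      # named volume instead of a Windows bind mount
      - 'kafka_data:/bitnami/kafka'
    environment:
      - KAFKA_BROKER_ID=1311
      - KAFKA_CFG_RESERVED_BROKER_MAX_ID=1000000
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CFG_LOG_DIRS=/bitnami/kafka/logs
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper

# named volumes are created and managed by Docker (under LCOW, inside the
# Linux utility VM), not as directories on the Windows filesystem
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local

With named volumes the data no longer lives in a Windows directory, which should be acceptable here since the question only needs persistence across restarts.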

OneCricketeer
  • We tried to use those Windows mount points as a possible solution because in our case Docker volumes did not work: under Linux -> yes, of course; under LCOW -> no. The issue is somewhere with permissions on the Windows folders (but we set RW rights for everyone) or with path conversion (Linux -> Windows). Somehow Docker Desktop handles this well with the shared volumes option, but we don't know how to mimic the same configuration in the CLI or under compose. – Branko Radojevic Jul 27 '22 at 07:49
  • If it works from Docker Desktop, you should run `docker inspect` on each container and look at how the mounts are defined (see the example after these comments). Then work on translating that to compose. – OneCricketeer Jul 27 '22 at 13:42
  • I've edited my question with additional info from docker inspect. It took me some time to reinstall Docker Desktop on the same server. Actually, it doesn't say too much; the only additional information I found is that it is using FUSE as a file-sharing system. – Branko Radojevic Aug 07 '22 at 06:55
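
As a side note to the docker inspect suggestion above, the mount definitions can be dumped directly; --format and the built-in json template function are standard docker inspect features, and the container names are the ones from the compose file in the question:

    # print only the Mounts section of each container's configuration
    docker inspect --format '{{ json .Mounts }}' test-kafka
    docker inspect --format '{{ json .Mounts }}' test-zoo

(Quoting shown for PowerShell or a POSIX shell; cmd.exe needs double quotes instead.)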

There are 2 possible options:

1. Create an environment variable on the Windows Server:

    Variable: COMPOSE_CONVERT_WINDOWS_PATHS
    Value: 1 (or true)

2. Create a .env file at the same level as docker-compose.yml, to make it a portable project/product, using the same variable:

    COMPOSE_CONVERT_WINDOWS_PATHS=1

With this variable set, Docker Compose converts Windows-style paths in volume definitions to Unix-style paths.
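
For example, on the Windows Server the variable can be set for the current PowerShell session or persisted for the user; these are standard PowerShell/Windows commands, shown purely as an illustration:

    # set for the current PowerShell session only
    $env:COMPOSE_CONVERT_WINDOWS_PATHS = "1"

    # or persist it for the current user (takes effect in newly started shells)
    setx COMPOSE_CONVERT_WINDOWS_PATHS 1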

djmonki
  • Thanks for this tip. I've removed the volume path bindings to the Windows path and added the .env file, and now it kind of works, but only once - the first time. On the second run Kafka complains about the logs in the /bitnami/kafka/logs directory and exits. I can see that the files I've created actually appear in Windows, so that part should be working now. If I delete the volumes manually and start docker-compose again, it recreates the volumes and everything works again until the next restart. Any idea? – Branko Radojevic Jul 27 '22 at 17:48
  • I would not change your docker-compose configuration - keep it as it is; my answer was meant to add to what you already have. The one thing I would change in the docker-compose config, given that the reason for the volumes is to persist data, is ```type: bind``` to ```type: volume``` in the ```volumes``` section. – djmonki Jul 27 '22 at 19:06
  • It doesn't work if you just change type: bind to type: volume, because I think you then need to define the volumes separately in the configuration - and that is exactly what I did in my previous comment. – Branko Radojevic Jul 27 '22 at 19:52
  • Can you check something? Access your containers: for Kafka, locate ```server.properties``` and check the value of ```log.dirs```; for ZooKeeper, locate ```zookeeper.properties``` and check the value of ```dataDir``` (a non-interactive way to run these checks is sketched after these comments). Check that they both point to the persistent directories; if the ZooKeeper data is located under ```/tmp/...```, it will get cleared out after a restart. That causes a configuration mismatch because those files are missing, so Kafka shuts down. If the data is located under ```/tmp/...```, change it to a directory similar to Kafka's and see how you go. – djmonki Jul 28 '22 at 18:05
  • Kafka: log.dirs=/bitnami/kafka/logs, Zookeeper: dataDir=/bitnami/zookeeper/data – Branko Radojevic Jul 28 '22 at 20:03
  • Ok, that's good. What is the actual exception thrown by Kafka when it complains about the logs / log directory? – djmonki Jul 28 '22 at 20:59
  • ERROR Shutdown broker because all log dirs in /bitnami/kafka/logs have failed (kafka.log.LogManager) – Branko Radojevic Jul 29 '22 at 04:45
  • I wonder whether Kafka is having an issue due to sharing the persistent directory with ZooKeeper? Can you modify the volumes slightly, to ```C:\\test\\persist\\zookeeper``` and ```C:\\test\\persist\\kafka``` respectively, and see if that helps. – djmonki Jul 29 '22 at 13:33
  • My thinking is: when Kafka restarts, ZooKeeper takes ownership of the volume, so when Kafka has started, it does not have permission to write to the volume. Just an idea. – djmonki Jul 29 '22 at 13:45
  • Unfortunately I get the same error :( – Branko Radojevic Jul 30 '22 at 04:58
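
For anyone retracing the config check suggested in the comments above: it can be done non-interactively along these lines. The config file paths here are assumptions based on the usual Bitnami image layout and may differ between image versions:

    # Kafka: where does the broker store its log segments?
    # (path assumed for the Bitnami image)
    docker exec test-kafka grep 'log.dirs' /opt/bitnami/kafka/config/server.properties

    # ZooKeeper: where does it keep its data directory?
    # (the Bitnami image configures this in zoo.cfg; path assumed)
    docker exec test-zoo grep 'dataDir' /opt/bitnami/zookeeper/conf/zoo.cfg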