
I want to get hardware specific info from host into the docker container.

I have two Docker web apps, client and server, being deployed to a Docker swarm. I deploy server as a single container that connects to a database, and I deploy client as a replicated service to each worker node in the cluster. client sends data back to server, and server keeps a database of tables that track the data sent from the clients by a unique identifier (the MAC address).

I start the client Python app in the Dockerfile as follows, so that it knows its MAC address when it sends data back to server:

python app.py --mac-address=12345

client currently starts up using a hard-coded MAC address that I manually type in. I develop on a Mac but deploy to Linux boxes, so some of the workarounds for giving containers access to the host machine's network interfaces do not seem to work in my development environment.

Ideally, I would like to pipe this value (any unique identifier) in from some bash command / script. I have the following script to capture MAC address:

mac=$(ifconfig eth0 | grep -o -E '([[:xdigit:]]{1,2}:){5}[[:xdigit:]]{1,2}')
echo "$mac" > mac.txt

But, this has to be run on the host.

I have come across solutions such as:

docker run -e HOST_MAC=$(ifconfig -a | grep -Po 'HWaddr \K.*$') image

And then accessing the environment variable within the python application.
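As a minimal sketch of that second half (my own illustration, not from the original post), the Python app could read the injected variable and fall back to the hard-coded default mentioned above; `get_mac` is a hypothetical helper name:

```python
import os

# Hypothetical helper: return the MAC injected via `docker run -e HOST_MAC=...`,
# falling back to the hard-coded placeholder used in the question.
def get_mac(default="12345"):
    return os.environ.get("HOST_MAC", default)
```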

Or this solution:

docker run --rm -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh

Each of these solutions runs the container individually. I am using docker-compose or docker stack deploy from a single manager, so I do not want to assume that the script file is already present on each host machine (worker node).

Lastly, I thought of scp-ing the script from the container to the host, then SSH-ing from the container to the host, executing the script, and capturing the output. But I have not found the syntax for this. Something like:

scp mac_address.sh user@host_hostname:
mac=$(ssh user@host_hostname "mac_address.sh")

I don't care if the unique identifier is the MAC address or some other identifier; essentially, I would like this unique identifier to persist even if a container goes down and a new one comes up.

Has anyone done something similar?

  • What's wrong with using the container's MAC address? – Attie Feb 06 '18 at 12:48
  • Let's say I kill the container, tag a new image, and start it back up. The IP and everything else comes back different, and the MAC comes back empty because the container does not have access to my host's network interfaces. This is not useful because the second time I start a container from a newly tagged image, `client` will phone home with a different or empty MAC, and I will have two entries in `server` from technically the same physical device, but under addresses specific to the container that ran at a given time. –  Feb 06 '18 at 12:54

1 Answer


I'm not convinced that this is the best thing to use as a "Unique ID". You've already stated that you want to purposefully re-use the ID, so be careful referring to it as "unique".

As I'm sure you're aware, you will need to be careful of this approach - if you were to run two instances on a physical machine they would identify as "the same" node.

I also cannot stress this enough: Do not use this identifier for any authentication purposes. This includes attributing reported data to a particular node / device / user. I would hope that retrieving information from your system would require proper authentication.

If you intend to use this "Unique ID" for purposes mentioned above, then consider a cryptographic method instead.
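One such alternative (my own sketch, not part of the original answer): generate a random UUID the first time the client starts and persist it on a host-mounted volume, so a replacement container on the same node reuses the same ID. The `/data` mount point and `node_id` helper are assumptions for illustration:

```python
import uuid
from pathlib import Path

# Hypothetical helper: read the node's ID from a host-mounted volume,
# generating and persisting a fresh UUID on first run.
def node_id(path="/data/node_id"):  # /data is an assumed host-mounted volume
    p = Path(path)
    if p.exists():
        return p.read_text().strip()
    new_id = str(uuid.uuid4())
    p.write_text(new_id)
    return new_id
```

Because the ID lives on the host, it survives the kill/retag/restart cycle described in the question's comments.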


Some options, all of which should have static resolutions.

Map the MAC address into the container's VFS

This might be Linux-only due to sysfs; I'm afraid I can't comment.

$ docker run --rm -it -v /sys/class/net/enp0s31f6/address:/tmp/host_mac:ro \
    ubuntu:latest bash
root@ab868434bf02:/# cat /tmp/host_mac
70:85:c2:28:fa:c2
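Since the app is Python, the `cat` above can be replaced with a plain file read inside the container (a sketch of my own; `read_host_mac` is a hypothetical name, and it assumes the host's `/sys/class/net/<iface>/address` was bind-mounted at `/tmp/host_mac` as shown):

```python
# Hypothetical helper: read the bind-mounted MAC address file.
def read_host_mac(path="/tmp/host_mac"):
    with open(path) as f:
        return f.read().strip()
```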

Use the Host's container-side MAC address

This can be done without installing tools if you are up for writing the Python yourself. The address is tied to the virtual interface, so if you're in the habit of adding/removing virtual networks, it may change.

$ docker run --rm -it ubuntu:latest bash
root@9141327276c7:/# apt-get update && apt install -y iputils-ping net-tools
root@9141327276c7:/# HOST_IP="$(route -n | grep '^0\.0\.0\.0 ' | awk '{print $2}')"
root@9141327276c7:/# ping -c1 -w 500 ${HOST_IP} >/dev/null
root@9141327276c7:/# arp | grep "^${HOST_IP} " | awk '{print $3}'
02:42:1d:da:1c:e7
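The same idea can be sketched in pure Python with no extra packages, by parsing `/proc/net/route` for the default gateway and `/proc/net/arp` for its MAC. This is my own illustration of the "writing the Python" route, not the answerer's code; it is Linux-specific and assumes the gateway already has an ARP entry (e.g. after the `ping` above):

```python
import socket
import struct

# Hypothetical helpers for illustration.
def default_gateway(route_path="/proc/net/route"):
    """Return the default gateway IP, parsed from the kernel routing table."""
    with open(route_path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            if fields[1] == "00000000":  # destination 0.0.0.0 = default route
                # the gateway column is a little-endian hex-encoded IP
                return socket.inet_ntoa(struct.pack("<L", int(fields[2], 16)))
    return None

def arp_mac(ip, arp_path="/proc/net/arp"):
    """Return the MAC address recorded for `ip` in the ARP cache, if any."""
    with open(arp_path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            if fields[0] == ip:
                return fields[3]
    return None
```

Usage would be `arp_mac(default_gateway())`, mirroring the `route`/`arp` pipeline in the shell transcript.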
Attie
  • Regarding your first solution, would that require me to use the ubuntu image? I currently use a Python-based Docker image. But if I map the volume the same way, instead of reading the file in using `cat` (a Linux command) I could read that file in within Python, and the file will still be there, correct? As long as I map the file from the host machine to a directory that I know is available in the Python Docker image. –  Feb 06 '18 at 21:43
  • And regarding your comment about authentication: this is not for anything serious. It's a rewrite of proof-of-concept code that previously was a pain to deploy to several physical devices. Now it's easier since they are in the Docker swarm, but there are new challenges getting some of the hardware information that wasn't an issue when the code ran right on the command line as a cron job. –  Feb 06 '18 at 21:47
  • @JabariDash Correct - just use Python to read the file instead of `cat`. – Attie Feb 07 '18 at 12:58