
For reference I've looked at the following links.

I understand that what I'm doing is wrong, and I'm trying to avoid relative paths and changing things via sys.path as much as possible, though if those are my only options, please help me come up with a solution.

Note: here is an example of my current directory structure. I think I should add a little more context: I started off adding __init__.py to every directory so they would be considered packages and subpackages, but I'm not sure that is what I actually want.

myapp/
    pack/
        __init__.py
        helper.py
    runservice/
        service1/
            Dockerfile
        service2/
            install.py
            Dockerfile

The only package I will be calling lives in the pack/ directory, so I believe that should be the only directory Python considers a package.

Next, the reason this might get a little tricky: ultimately, this is just a service that builds various different containers, where the entrypoints live in service*/install.py and I cd into the working directory of the script. The reason for this is that I don't want container1 (service1) to know about the codebase in service2, as it's irrelevant to it, and I would like the code to be separated.

But when running install.py, I need to be able to do from pack.helper import function, and clearly I am doing something wrong.

Can someone help me come up with a solution, so I can leave the entrypoint to my container as cd service2, python install.py?

Another important thing to note: within the script I have logic like:

if not os.path.isdir(os.path.expanduser(tmpDir)):

I am hoping that any solution we come up with will not affect this logic.

I apologize for the noob question.

EDIT:

Note, I think I can do something like

sys.path.append(os.path.join(os.path.dirname(__file__), '..'))

But as far as I understand, that is bad practice....

1 Answer


Fundamentally what you've described is a supporting library that goes with a set of applications that run on top of it. They happen to be in the same repository (a "monorepo") but that's okay.

The first step is to take your library and package it up like a normal Python library. The Python Packaging User Guide has a section on Packaging and distributing projects, which is mostly relevant, though you're not especially interested in uploading the result to PyPI. At the very least you need the setup.py file described there.
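For example, a minimal setup.py for the library might look like the sketch below (the distribution name and version are assumptions; adjust them to your project):

# pack/setup.py
from setuptools import setup, find_packages

setup(
    name='pack',               # assumed distribution name
    version='0.1.0',           # assumed version
    packages=find_packages(),  # discovers the inner pack/ package
)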

With this reorganization you should be able to do something like

$ ls pack
pack/  setup.py
$ ls pack/pack
__init__.py  helper.py
$ virtualenv vpy
$ . vpy/bin/activate
(vpy) $ pip install -e ./pack

The last two lines are important: in your development environment they create a Python virtual environment, an isolated set of packages, and then install your local library package into it. Still within that virtual environment, you can now run your scripts

(vpy) $ cd runservice/service2
(vpy) $ ./install.py

Your scripts do not need to modify sys.path; your library is installed in an "expected" place.
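With the library installed, the top of install.py can stay as simple as this sketch (the from pack.helper import function line is taken from the question; the tmpDir value and the rest are hypothetical):

#!/usr/bin/env python
# runservice/service2/install.py -- minimal sketch
import os

from pack.helper import function  # resolves because 'pack' is installed

tmpDir = '~/some-tmp-dir'  # hypothetical value; your existing checks are unchanged
if not os.path.isdir(os.path.expanduser(tmpDir)):
    os.makedirs(os.path.expanduser(tmpDir))

function()

Running it as ./install.py assumes the file has a shebang line and the executable bit set; python install.py works either way.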

You can and should do live development in this environment. pip install -e makes the virtual environment use your actual local source tree for whatever's in pack, so edits to the library are picked up without reinstalling. If service2 happens to depend on other Python libraries, listing them in a requirements.txt file is good practice.
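A hypothetical runservice/service2/requirements.txt just lists those third-party dependencies (the package names below are placeholders, not requirements of this setup):

# runservice/service2/requirements.txt -- hypothetical contents
requests>=2.20
PyYAML

(vpy) $ pip install -r runservice/service2/requirements.txt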

Once you've migrated everything into the usual Python packaging scheme, it's straightforward to transplant this into Docker. The Docker image here plays much the same role as a Python virtual environment, in that it has an isolated Python installation and an isolated library tree. So a Dockerfile for this could more or less look like

FROM python:2.7

# Copy and install the library
WORKDIR /pack
COPY pack/ ./
RUN pip install .

# Now copy and install the application
WORKDIR /app
COPY runservice/service2/ ./
# RUN pip install -r requirements.txt

# Set standard metadata to run the application
CMD ["./install.py"]

That Dockerfile depends on being run from the root of your combined repository tree:

sudo docker build -f runservice/service2/Dockerfile -t me/service2 .
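Once the image is built, running the service is the usual docker run invocation (the --rm flag just cleans up the container afterwards):

sudo docker run --rm me/service2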

A relevant advanced technique is to break this up into separate Docker images. One contains the base Python plus your installed library, and the per-application images build on top of that. This avoids reinstalling the library multiple times if you need to build all of the applications, but it also leads to a more complicated sequence with multiple docker build steps.

# pack/Dockerfile
FROM python:2.7
WORKDIR /pack
COPY ./ ./
RUN pip install .

# runservice/service2/Dockerfile
FROM me/pack
WORKDIR /app
COPY ./ ./
CMD ["./install.py"]

#!/bin/sh
# Build script, run from the repository root
set -e
(cd pack && docker build -t me/pack .)
(cd runservice/service2 && docker build -t me/service2 .)
David Maze
  • Amazing, this was it, thanks for the assist here, wasn't as hard as I initially thought. – Chris Jul 19 '19 at 01:29