I have an environment.yml file for building my conda environment, in which one package (a Python wrapper around another) depends on the .h/.so files provided by that other package. Let's say:
channels:
  - defaults
  - conda-forge
dependencies:
  - blah=1.2.1
  - pip:
    - git+https://github.com/pyBlah.git
The blah package by itself installs correctly, and the header/.so files are indeed installed in the appropriate locations (inside the conda environment's include and lib folders, respectively), but the pip install line fails because it cannot find the headers.
I could remove the pip line from environment.yml above and install the wrapper after creating/activating the environment, like so:
CFLAGS=-I/path/to/env/include LDFLAGS=-L/path/to/env/lib pip install git+https://github.com/pyBlah.git
or follow something like Conda set LD_LIBRARY_PATH for env only to avoid having to pass the paths explicitly during pip install (roughly as sketched below). But having to do either of these feels like providing too much detail, and like breaking what should be a single installation step into many.
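For reference, the environment-variable workaround I mean would look roughly like this sketch: a one-time script dropped into the environment's activate.d directory so the build flags are exported on every activation (the script name pyblah_build_vars.sh and the environment name myenv are placeholders; this assumes the environment has already been created and activated once so that $CONDA_PREFIX is set):
# Sketch only: register build/runtime paths via an activate.d script so that
# a plain "pip install" can find the conda-provided headers and libraries.
mkdir -p "$CONDA_PREFIX/etc/conda/activate.d"
cat > "$CONDA_PREFIX/etc/conda/activate.d/pyblah_build_vars.sh" <<'EOF'
# Runs on every "conda activate"; $CONDA_PREFIX is expanded at activation time.
export CFLAGS="-I$CONDA_PREFIX/include $CFLAGS"
export LDFLAGS="-L$CONDA_PREFIX/lib $LDFLAGS"
export LD_LIBRARY_PATH="$CONDA_PREFIX/lib:$LD_LIBRARY_PATH"
EOF
# After re-activating the environment, the plain pip install should work:
conda activate myenv
pip install git+https://github.com/pyBlah.git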
My thinking is that conda must be doing something like this internally anyway to resolve/build/install related packages that it finds in environment.yml.
So my question is twofold:
1. Is there a reason conda doesn't add the appropriate include/lib paths automatically? How else is pip (which is environment-specific, after all) expected to do its job properly once the environment is activated?
2. Is there a canonical way of accomplishing dependent package installations using only the appropriate specifiers in environment.yml?