
I'm starting with Docker, but I don't know how to configure PyCharm to use a Python interpreter located in a container.

It was easy to set up with Vagrant, but there's apparently no official way to do it with Docker yet.

Should I prepare a special Docker image with an exposed SSH port? Is there an easier way to do it?

Anto
trikoder_beta
  • [SSH inside Docker containers is considered a bad practice](http://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/), but so far I can't come up with any better solution... – Anto Feb 23 '15 at 17:03
  • They do have an issue on their issue tracker for it now: https://youtrack.jetbrains.com/issue/PY-15476 – saul.shanabrook Apr 14 '15 at 13:34

10 Answers

14

UPDATE: PyCharm 2017.1 has a solution for this problem; see this blog entry

Here is how I solved the problem. My circumstances were that I was assigned to do an intervention on a specific area of a web app that used docker-compose to create a set of four containers. Docker-compose is a kind of meta-docker that manages multiple Docker containers from one command. I did not want to mangle their existing setup, since so many things depend on it. But since I was working on one specific part in one of the images, I decided to extend one of the containers with SSH so that I could debug from PyCharm. Further, I wanted the app to run as normal when started; only by forcing it to quit and then connecting to it from PyCharm would I have a debuggable component. Here is what I did on my Mac, which uses boot2docker (on VirtualBox) to set up Docker correctly.

First, I need to extend the target container, called jqworker. I am going to use supervisor to do the heavy lifting of managing things.

FROM jqworker

# Get supervisor to control multiple processes, sshd to allow connections.
# And supervisor-stdout allows us to send the output to the main docker output.
RUN apt-get update && apt-get install -y supervisor openssh-server python-pip \
  && pip install supervisor-stdout \
  && mkdir -p /var/run/sshd  \
  && mkdir -p /var/log/supervisor \
  && mkdir -p /etc/supervisor/conf.d

COPY ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf

# Fix up SSH, probably should rip this out in real deploy situations.
RUN echo 'root:soup4nuts' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

# Expose SSH on 22, but this gets mapped to some other address.
EXPOSE 22

# Replace old entrypoint with supervisord, which starts both sshd and worker.py
ENTRYPOINT ["/usr/bin/supervisord"]

Supervisor lets me run multiple tasks from one command, in this case the original command and sshd. Yes, everyone says that sshd in Docker is evil and containers should do this and that and blah blah, but programming is about solving problems, not conforming to arbitrary dicta that ignore context. We need SSH to debug code and are not deploying this to the field, which is one reason we are extending the existing container instead of adding this into the deployment structure. I am running it locally so that I can debug the code in context.

Here is the supervisord.conf file; note that I am using the supervisor-stdout package to direct output to supervisor instead of logging the data, as I prefer to see it all in one place:

[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:worker]
command=python /opt/applications/myproject/worker.py -A args
directory=/opt/applications/myproject
stdout_events_enabled=true
stderr_events_enabled=true

[eventlistener:stdout]
command = supervisor_stdout
buffer_size = 100
events = PROCESS_LOG
result_handler = supervisor_stdout:event_handler

I have a build directory containing the above two files, and from a terminal in there I build the image with:

docker build -t fgkrqworker .

This tags the image so that I can reference it from docker or docker-compose. Don't skip the trailing dot!

Since the app uses docker-compose to run a set of containers, the existing WORKER container will be replaced with one that solves my problems. But first I want to show that in another part of my docker-compose.yml I define a mapping from the containers to my local hard drive; this is one of a number of volumes being mapped:

volumes: &VOLUMES
  ? /Users/me/source/myproject:/opt/applications/myproject

Then the actual definition for my container, which references the above VOLUMES:

jqworker: &WORKER
  image: fgkrqworker
  privileged: true
  stdin_open: true
  detach: true
  tty: true
  volumes:
    <<: *VOLUMES
  ports:
    - "7722:22"

This maps the SSH port to a known port that is available in the VM; recall I am using boot2docker, which rides on VirtualBox, but the port needs to be mapped out to where PyCharm can get at it. In VirtualBox, open the boot2docker VM and choose Adapter 1. Sometimes the "Attached to:" combo unselects itself, so watch for that. In my case it should have NAT selected.

Click "Port Forwarding" and map the inner port to the a port on localhost, I choose to use the same port number. It should be something like:

  • Name: ssh_mapped;
  • Protocol: TCP;
  • Host IP: 127.0.0.1;
  • Host Port: 7722;
  • Guest IP: (leave blank);
  • Guest Port: 7722

Note: be careful not to change the boot2docker ssh setting or you will eventually be unable to start the VM correctly.

So, at this point we have a container that extends my target container. It runs ssh on port 22 and maps it to 7722, since other containers might want to use 22, and is visible in the VirtualBox environment. VirtualBox maps 7722 on the VM to 7722 on localhost, and you can ssh into the container with:

ssh root@localhost -p 7722

This will prompt for the password, 'soup4nuts', and you should be able to locate something specific to your container to verify that it is the right one and that everything works OK. I would not mess with root if I were deploying this anywhere but my local machine, so be warned. This is only for debugging locally and you should think twice or thrice about doing this on a live site.

At this point you can probably figure the rest of it out if you have used PyCharm's remote debugging. But here is how I set it up:

First, recall that I have docker-compose.yml mapping the project directory:

? /Users/me/source/myproject:/opt/applications/myproject 

In my container /opt/applications/myproject is actually /Users/me/source/myproject on my local hard drive. So, this is the root of my project. My PyCharm sees this directory as the project root and I want PyCharm to write the .pycharm_helpers here so that it persists between sessions. I am managing source code on the mac side of things, but PyCharm thinks it is a unixy box elsewhere. Yes, it is a bit of kludge until JetBrains incorporates a Docker solution.

First, go to the Project X/Project Structure and create a Content Root of the local mapping; in my case that means /Users/me/source/myproject

Later, come back and add .pycharm_helpers to the excluded set; we don't want this to end up in source control or confuse PyCharm.

Go to the Build, Execution, Deployment tab, pick Deployment and create a new Deployment of SFTP type. The host is localhost, the port is 7722, the root path is /opt/applications/myproject, the username is root, and the password is soup4nuts; I checked the option to save the password. I named my Deployment 'dockercompose' so that I would be able to pick it out later.

On the Deployment Mappings tab I set the local path to /Users/me/source/myproject and deployment and web path to a single '/' but since my code doesn't correspond to a URL and I don't use this to debug, it is a placeholder in the Web Path setting. I don't know how you might set yours.
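To make this mapping concrete, here is a toy sketch of what a path mapping does (purely illustrative; the real translation is configured in PyCharm's dialogs, and the function below is mine, not part of PyCharm):

```python
def to_container_path(host_path, mapping):
    """Translate a host path to its container-side equivalent using a
    (host_prefix, container_prefix) pair, mirroring a deployment mapping."""
    host_prefix, container_prefix = mapping
    if host_path.startswith(host_prefix):
        return container_prefix + host_path[len(host_prefix):]
    return host_path  # a path outside the mapped tree is left untouched

mapping = ("/Users/me/source/myproject", "/opt/applications/myproject")
print(to_container_path("/Users/me/source/myproject/worker.py", mapping))
# -> /opt/applications/myproject/worker.py
```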

On the Project X/Project Interpreter tab, create a new Remote Python Interpreter. You can pick the Deployment Configuration and choose the dockercompose configuration we created above. The host URL should fill in as ssh://root@localhost:7722 and the Python Interpreter Path will likely be /usr/bin/python. We need to set the PyCharm Helpers Path as the default will not survive the container being redone. I actually went to my project local directory and created a .pycharm_helpers directory in the root, then set the path here as /opt/applications/myproject/.pycharm_helpers and when I hit the OK button it copied the files "up" to the directory. I don't know if it will create it automatically or not.

Don't forget that the .pycharm_helpers directory should probably be excluded on the project roots tab.

At this point you can go to the Build, Execution, Deployment tab, and under Console/Python Console, pick the remote interpreter we created above and set the working directory to /opt/applications/myproject and you can run your Python Console in the container if you like.

Now you need to create a Run Configuration so that you can remotely debug your Python code. Make a new Python configuration and set the script to the one that used to start the Python code in the container. Mine, from the supervisor setup above, is:

/opt/applications/myproject/worker.py -A args

So I set the script to /opt/applications/myproject/worker.py and the parameters to -A args.

Choose the remote interpreter we created above, and the working directory as needed, for me it is /opt/applications/myproject and for me that does the job.

Now I want to enter my container and stop the worker.py script so I can start up a debug version. Of course, if you like you can ignore running the script by default and only use the container for debugging.

I could open an ssh session to stop the script, but docker provides a useful command that will do the work for me by passing the command into the container:

$> docker exec -i -t <container_name> supervisorctl stop worker

My process is named 'worker'; substitute your own container's name for <container_name> (docker ps will show it). Note that you can restart by replacing the stop command with start.

Now, in PyCharm start a debug session with the Run Configuration created above. It should connect and start things up and give you console output in the window. Since we killed the one that Supervisor originally started, it is no longer connected.

This was a seat of the pants operation, so there may be errors and incorrect assumptions I didn't notice. Particularly, the PyCharm setup required a few iterations, so the order may be incorrect, try going through it again if it fails. This is a lot of stuff and easy to skip something critical.

J. Scott Elblein
Fran K.
  • Thank you for your detailed explanation. Did you manage to debug a python class inside docker container? I managed to run the code successfully, however when trying to debug using the remote interpreter it fails trying to open additional ports. – Elad92 Jul 02 '15 at 16:33
  • @Elad92 Yes, I have. However, there seem to be a few python paths that are not set (correctly), or some side effect that looks like this. I suspect that the interpreter is misconfigured. Unfortunately I haven't had a chance to dig into this, but it looks the kind of thing where one could dump the paths while debugging and while running the "plain" worker and find out which packages are missing. I'll have to get back to this, but have been stuck working on some critical non-python issues, so if anyone else figures it out, please add your discovery here. – Fran K. Jul 02 '15 at 17:21
  • @Elad92 If your python code is exposing ports, as opposed to connecting to ports, you might want to check out how port mapping works in docker and the VM you are using. After months of use this still catches me up. I've come to depend on `docker inspect` to track down these kinds of problems. – Fran K. Jul 02 '15 at 17:25
  • I just noticed today that PyCharm has a plugin for Docker Deployment. Not sure if this allows debugging inside the container, but I will experiment with it and see if I can get it to avoid all of the messy stuff I now use. They have a blog entry at blog.jetbrains.com/idea/2015/03/docker-support-in-intellij-idea-14-1 – Fran K. Aug 27 '15 at 14:58
  • The Docker plugin doesn't seem to allow debugging inside the container, but it seems to provide some simple Docker image support for deploying, which is nice, but that stuff is easy and isn't anywhere as critical as plug and play debugging would be. Still, at least it is moving forward and given the popularity of Docker and JetBrains' efforts to support devs, I think it likely they'll get there. – Fran K. Aug 27 '15 at 15:11
  • Note that the new v5 of PyCharm is supposed to solve the problem I was trying to address in this overlong solution. As of yet I have not had a chance to test it, but will wait until I upgrade to docker-machine to get on the newest stuff all around. – Fran K. Dec 05 '15 at 01:03
  • PyCharm 2017.1 has a solution that is much much better than this technique: https://blog.jetbrains.com/pycharm/2017/03/docker-compose-getting-flask-up-and-running/ – Fran K. May 17 '17 at 13:48
  • Hey @FranK. I've documented my trek through this solution. In truth I believe I still have some parts of it wrong so any feedback you could provide would be helpful. I am able to debug. – Marc Oct 25 '17 at 16:38
  • @Marc, I haven't had a chance to try anything for a while since I have not used Python in docker for over a year. However, I do keep my PyCharm up to date and the recent releases have added a LOT of Docker support. I'd try that before doing any new work with the hack I document here! :-D – Fran K. Oct 25 '17 at 20:04
  • I think what the current release is not catering for is the scenario where you have a container on a REMOTE server, already running, and you want to run your containers inside that. I think the best solution in that case is still to have an sshd running in the container.... It just seems to be so much easier to treat the container as a traditional sshd target (a lot of tooling / debugging has support already for that). Would you still use supervisord for this? systemd? – lucid_dreamer May 07 '18 at 03:52
4

In order to avoid any SSH overhead (which makes perfect sense with Docker), docker exec definitely seems to be the way to go.
Unfortunately I couldn't get it to work so far. It would be great if someone could fill in the blanks. Here is what I did (using PyCharm 4.0.4 and Docker 1.4.1):

  1. Create a file named python_myproject.sh containing the following:

    #!/bin/bash
    docker exec -i myproject_container /path/to/containers/python2.7
    

    Note that the file's name has to begin with python otherwise PyCharm will complain.

  2. In PyCharm's settings, under Project Interpreter, add a new local interpreter. Give it the path to your python_myproject.sh file.


This is where I'm stuck. After a quite long loading time (the throbber says "Setting up library files"), a window entitled "Invalid Python SDK" appears and says:

Cannot set up a python SDK
at /path/to/python_myproject.sh.
The SDK seems invalid.

In ~/.PyCharm40/system/log/.idea:

2015-02-19 17:33:30,569 [ 166966]   WARN - ution.process.OSProcessHandler - Cannot kill process tree. Trying to destroy process using Java API. Cmdline:
2015-02-19 17:34:30,628 [ 227025]   WARN - ution.process.OSProcessHandler - Cannot kill process tree. Trying to destroy process using Java API. Cmdline:
2015-02-19 17:34:30,653 [ 227050]   INFO - rains.python.sdk.PythonSdkType - 
Timed out
Anto
  • This doesn't work because PyCharm expects an actual Python interpreter, and does much more than just calling it with parameters. – taleinat Feb 24 '15 at 18:12
  • The script might not be working properly because it is not passing the command line arguments on to the python interpreter. Try adding `"$@"` at the end of the `docker exec` command. – taleinat Feb 27 '15 at 10:47
  • @taleinat: it definitely went one step further thanks to this suggestion: PyCharm could establish the version of the Docker's python (2.7.9) ! But unfortunately still ends up with `The SDK seems invalid`, and PyCharm's log file says: `INFO - rains.python.sdk.PythonSdkType - /path/to/containers/python2.7: can't open file '~/.pycharm-4.0.4/helpers/syspath.py': [Errno 2] No such file or directory` – Anto Feb 27 '15 at 13:24
  • Finally, I switched to Vagrant. Imho, for a (PyCharm-based) dev environment, this is way easier to use and configure. – Anto Mar 09 '15 at 00:09
  • Still thinking on how to get it working with Docker, you should try copying the directory `~/.pycharm-4.0.4/helpers/` into the Docker instance (to the same path!) before running the `docker exec` command. That should at least get past the most recent error. If that works, the bash script could be updated to initially copy the directory if it doesn't exist in the Docker instance. – taleinat Mar 11 '15 at 06:32
  • Excellent idea, I'm gonna give it a try ! – Anto Mar 11 '15 at 14:13
4

It's not yet here, but shortly this should no longer be a problem, since

Docker support will be introduced in PyCharm starting with PyCharm 4.1 EAP (beginning of April)

source: http://blog.jetbrains.com/pycharm/2015/03/feature-spotlight-python-remote-development-with-pycharm/#comment-187772

noisy
2

I don't think it's so bad to include SSH inside your container if you really need it. Yes, it's not essential in most use cases since the introduction of docker exec, but since IntelliJ/PyCharm only support remote interpreters via SSH, it's OK.

You can use phusion/baseimage as a good starting point to build your own container with SSH and any version of Python you need (it comes by default with PY3).

Theoretically, it would be ideal to keep using Vagrant for this task as well, since it allows you to create a workflow that will work both on Windows/OS X machines (by using boot2docker) and Linux (native Docker).

Practically, I wasn't able to make it work on OS X because of the double NAT layer you have to pass through in order to get into the SSH service, and it looks like it's not possible to add an extra interface to the Vagrant boot2docker box (Vagrant 1.7.2).

m1keil
  • I'm definitely gonna switch to Vagrant for my dev environment; I've been struggling with Docker for weeks and it's going nowhere... – Anto Mar 02 '15 at 11:02
2

If all you need is to debug code launched inside a docker container, you can use PyCharm's Python debug server feature. For me, it is a less troublesome way than accessing a remote interpreter via SSH. The drawback of this solution is that for auto-completion and that kind of thing, you must either have a local copy of the container's interpreter and mark it as the project's interpreter (it works for auto-complete, but I'm not sure it's possible to debug code from third-party libs in that case) or make the container's interpreter files visible to PyCharm (not tested at all). Also note that the Python debug server is a feature of the Professional edition.

What you should do for debugging via Python debug server:

1) Make sure the directory with your project is added into the container. It could look like this line in the Dockerfile:

ADD . /path/in/container
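If the project runs under docker-compose, a bind-mounted volume is an alternative sketch (the service name and paths below are illustrative, not from this answer); edits on the host are then visible in the container immediately:

```yaml
# docker-compose.yml fragment (hypothetical service name)
worker:
  build: .
  volumes:
    - .:/path/in/container
```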

2) Copy pycharm-debug.egg (pycharm-debug-py3k.egg for Python 3) from the directory where PyCharm is installed on your host to a directory in the container which is on the container's PYTHONPATH. The path to pycharm-debug.egg on the developer's host could be:

  • for Mac: /Applications/PyCharm.app/Contents/pycharm-debug.egg
  • for Linux: /opt/pycharm/pycharm-debug.egg

3) Create a Run/Debug configuration for launching the Python debug server on the host, as described in the "To configure a remote debug server" section of the docs. The port is any host port of your choice, but the IP is the address at which the host is accessible from the container. It could be:

  • if the container runs via boot2docker, the IP is likely 192.168.99.1 -- the host's address on the host-only network with the vbox machine
  • if the host is Linux, the IP can be found via ifconfig; for me it is:
docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99  
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0

Also, don't forget to specify path mappings between the project's path on the developer's host and the project's path in the container.

This blog post could also be helpful for the current step.

4) Launch the created configuration (for example, via the Debug button, to the right of the Run one).

5) Create a Python script that launches your project, and add the following debug-initialization code as the first lines of that script (make sure pycharm-debug.egg is on the PYTHONPATH, or this code won't be able to import pydevd):

   import pydevd
   pydevd.settrace('172.17.42.1', suspend=False, port=8765, stdoutToServer=True, stderrToServer=True)

6) Finally, you can set breakpoints and launch your application from the host, in the container, via the created script. For example:

docker-compose run 'container_name' python 'script_name' 'args'

On start, your launching script will connect to the Python debug server, which is running on the host, and stop at breakpoints. Debugger features will be available as usual.
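If you want the same entry script to work both with and without the debug server listening (so the container still starts normally when PyCharm isn't running), you can guard the import. This is a sketch of my own; the function name and fallback behavior are not part of the answer above:

```python
import importlib.util

def attach_debugger(host, port, find_spec=importlib.util.find_spec):
    """Call pydevd.settrace() only when pydevd is importable.

    Returns True if the debugger was attached, False if pydevd is
    missing (e.g. pycharm-debug.egg is not on the PYTHONPATH).
    """
    if find_spec("pydevd") is None:
        return False  # no debug egg available: run the app normally
    import pydevd
    pydevd.settrace(host, port=port, suspend=False,
                    stdoutToServer=True, stderrToServer=True)
    return True

# At the top of the launcher script, before anything else:
# attach_debugger("172.17.42.1", 8765)
```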

3ka5_cat
1

Steps specific to PyCharm Professional Edition 2017.2 (however, they may work with PyCharm CE)

Here are a couple of steps I took to get my setup working.

Step 1: Environment

A few assumptions of the structure of your (or anyone who might be reading this) project:

bleh
├── README.md
├── api
│   ├── Dockerfile  <---- this is the one we want to debug
│   ├── config.example.ini
│   └── src
│       ├── __init__.py    <---- this is a pycharm project
│       ├── __main__.py    <---- this is a pycharm project
│       └── ...
├── proxy
│   ├── Dockerfile
│   ├── config.example.ini
│   └── src
│       ├── ...
│       └── ...
├── webserver
│   ├── Dockerfile
│   ├── config.example.ini
│   └── src
│       ├── ...
│       └── ...
├── frontend
│   ├── Dockerfile
│   ├── config.example.ini
│   └── src
│       ├── ...
│       └── ...
├── db
│   ├── Dockerfile
│   ├── ...
│   └── migrations
│       ├── ...
│       └── ...
└── docker-compose.yml
  • Note I'm using bleh as my project name only as an example.
  • Note We're also going to assume that this project has the absolute location of /Users/myfunkyusername/Projects/bleh.
  • Note Obviously this is all arbitrary as far as naming and location are concerned; please make adjustments specific to your system/project.
  • Note We're also going to assume that you wish to live debug the api service as shown later in the docker-compose.yml file.
  • Note We're also going to assume the content of your api's one and only Dockerfile is as follows:

    FROM python
    ADD config.example.ini /etc/bleh/config.ini
    # copy the sources in before making them executable
    COPY ./src /usr/bin/bleh
    RUN chmod +x /usr/bin/bleh
    WORKDIR /usr/bin/bleh
    RUN pip install -r requirements.txt
    CMD ["sh", "-c", "python -m bleh --cfg=/etc/bleh/config.ini"]
    
  • Note We're assuming your one and only docker-compose.yml has these contents

    version: '2'
    services:
    
      api:
        build:
          context: ./api
        depends_on:
          - db
        expose:
          - "8080"
        networks:
          - default
    
      frontend:
        build:
          context: ./frontend
        ports:
            - "80:7000"
        networks:
          - default
    
      webserver:
        build:
          context: ./webserver
        depends_on:
          - frontend
        networks:
          - default
    
      proxy:
        build:
          context: ./proxy
        ports:
          - "80:80"
          - "443:443"
        depends_on:
          - webserver
          - api
        networks:
          - default
    
      db:
        build:
          context: ./db
        expose:
          - "3306"
        networks:
          - default
    
    networks:
      default:
        driver: bridge
    

Step 2: Create Docker-Machine

Create docker-machine specifically for the bleh project

docker-machine create bleh

Step 3: connect remote interpreter

  • From PyCharm / Preferences / Build, Execution, Deployment / Docker click +
  • Select the Docker machine radio button and highlight bleh's docker machine in the pull down
  • Select Apply
  • From PyCharm / Preferences / Project:bleh / Project Interpreter
  • Click the gear icon on the far right of the Project Interpreter field and select Add Remote
  • Select Docker radio button
  • With Server field, select previously created docker machine for this project
  • Select the docker image that holds your desired python interpreter for this project (e.g bleh_api)
  • No change to the Python interpreter path needed
  • Click OK

Step 4: configure remote debugger

  • From Run / Edit Configurations select + to add a configuration
  • Select Python
  • With Script field, use location of script file on the docker container that will be run (in this example it's /usr/bin/bleh/__main__.py as we're giving the absolute location of our target script)
  • With Script parameters field, supply CLI parameters, if any (mimics the Dockerfile's last CMD command, which is --cfg=/etc/bleh/config.ini)
  • With Python Interpreter field, select your previously established remote python interpreter
  • With Working directory field, select the directory where Script is located within the Docker container (e.g /usr/bin/bleh)
  • With Path mappings field, click the ... and select local (e.g /Users/myfunkyusername/Projects/bleh/api/src) and remote (e.g /usr/bin/bleh) as above
  • With Docker container settings field, click ...
    • ensure you have the correct docker container selected (e.g. bleh_api:latest)
    • Add a port binding container/host that mimics what you have in the Dockerfile (e.g. 8080/8080, exposed to 0.0.0.0 using the tcp protocol). Now, I haven't shown what your app structure is, but let's assume that you were sane and within your app you're also specifying 8080 as the port where you're serving your data.
    • Add volume bindings container/host /usr/bin/bleh / /Users/myfunkyusername/Projects/bleh/api/src
    • ensure Network mode (thanks Piotr) is set to <name_of_project_directory>_<name_of_network_from_compose_file> (e.g. bleh_default; you can confirm with docker network ls from within the correct docker-machine)

Step 5: Bask in the Sun or Bash your head some more

These are the steps that got me to a working docker and PyCharm setup.

I don't pretend to be correct in each of these steps. I will gladly update any errors/improvements you find.

Marc
  • Easier to just add an sshd to the container, and treat as a normal remote debugger (redirecting 22 to 8022 host port)? – lucid_dreamer May 07 '18 at 01:53
  • @lucid_dreamer you're probably right from a dev perspective. For those that want to maintain the same structure on prod as dev environments what you propose may not be an attractive option as generally its frowned upon to open ssh on containers or even to have more than one service running on a container. – Marc May 07 '18 at 14:30
  • But would this work if the docker *engine* (==host) is not running locally? – lucid_dreamer May 10 '18 at 19:54
  • are you referring to development where the codebase is not on your local machine(say a prod environment)? or are you referring to a docker setup running inside another virtual machine, say vagrant? – Marc May 10 '18 at 21:38
  • Might be (1) a docker setup that runs on a separate machine on the same TCP *network* (might be a VM (vagrant or not) on the same machine, on a virtualbox bridge, or it might be a different physical machine on the same ethernet LAN), or (2) a docker setup that runs on a remote server accessible via ssh (the remote server might be physical or virtual, or vagrant managed or not: the only thing that matters is that I have ssh access to it (you can assume root)). If you have something that works for (2) I could use it for (1) as well. – lucid_dreamer May 10 '18 at 22:59
  • "are you referring to development where the codebase is not on your local machine(say a prod environment)" yes, but I disagree with your example of "a prod environment". Usually the environment is not prod (e.g. a 256gb RAM server with nice gpus and what not, which we use for DEV activity. It might or might not look like UAT / PROD. Or it might or might not look like other dev environments.) – lucid_dreamer May 10 '18 at 23:03
  • First, you're right: this set up does not work with what you're describing. I once had a similar set up where I was running docker inside a vagrant box (don't ask), and I was interested in getting Pycharm debug to work. I thought I was close to solving it, however, I gave up due to time constraints. At some point, you've got to get work done, right? – Marc May 11 '18 at 13:11
  • ahahah! yes. Those print statements come in handy! but a debugger in the IDE is handy too. I tried a bit of everything, I came to the conclusion that if you can install the ssh server in the container, you should do it. Which took me to the phusion base image, and its [ability to switch on and off ssh per container execution](https://archive.is/M8VLV#selection-4085.0-4089.32). – lucid_dreamer May 19 '18 at 23:22
  • Running docker in a vagrant box is perfectly legit by the way. Much better than running it with boot2docker, or windows hyperv. Fire up a proper linux distro virtual box with vagrant, setup a bridge network on that virtual machine so that it is a full class citizen of your local network, share a folder with your windows host, install docker in the virtual machine, redirect ports as needed == happy days. – lucid_dreamer May 19 '18 at 23:25
0

With Docker 1.3, use the exec command to run the Python interpreter inside the container:

sudo docker exec container_name /usr/bin/python

See https://docs.docker.com/reference/commandline/cli/#exec, http://forum.jetbrains.com/thread/PyCharm-2224

You could install SSH inside the container and then expose the port, but that isn't how containers are expected to be used, because you would be bloating them.

dukebody
  • Can you confirm, that `exec` for sure can be used to connect remote debugger in PyCharm? – trikoder_beta Dec 09 '14 at 12:58
  • I cannot confirm because I don't use PyCharm. Why don't you try it out? – dukebody Dec 09 '14 at 13:00
  • @dukebody, what IDE do you use for python development - if any? I wonder if sublime Text REPL or Python tools for Visual Studio depending on the ability to use docker exec - I guess I'll have to try it out to know for sure... – Vincent De Smet Jan 07 '15 at 09:36
  • has someone managed to get `docker exec` to work with PyCharm? In PyCharm I only see the option to select the path to a python interpreter. It does not accept an arbitrary command that will start an interpreter. – stefanfoulis Jan 15 '15 at 09:09
  • fyi http://forum.jetbrains.com/thread/PyCharm-2224 mentioned in this question is unanswered. I have not found a way to do this yet. – stefanfoulis Jan 19 '15 at 07:39
  • If you need to select a path to a Python interpreter in PyCharm you can always create a bash script that does `docker exec container_name /usr/bin/python`. – dukebody Jan 19 '15 at 12:49
0

I haven't tried this, but I would try creating a Bash script which calls docker exec ..., as in @Anto's answer.

Then, install the BashSupport extension. Now create a new run configuration which runs your script as a Bash script.

Community
taleinat
  • I don't use docker so setting this all up would require significant effort. There are several people here who apparently already have such setups; if none of them report whether this works, I'll try it myself. – taleinat Feb 24 '15 at 18:20
  • Hey, thanks for this answer. Perhaps it could do the trick, but it would also mean going without PyCharm's interpreter setup and hence everything that goes with it (integration with other packages, built-in debugging, etc)... Or did I get it wrong? – Anto Feb 26 '15 at 15:10
0

You can get a bit crazy by installing PyCharm in the container and just running it from there. You'd have to do this with something like docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=:0.0 pycharm-image, but it should work just fine. Remember, though, that all of PyCharm and your source would be in that container as well, so save, commit, and push early and often.

J. Scott Elblein
grim
  • It's no different than the process separation that's built into browsers nowadays, except that you can determine how much ram/cpu it uses. Which seems ideal when running Java based programs if you ask me. – grim Mar 02 '15 at 06:49
  • Why not just run pycharm in the container and mount your source directory in? – user2851943 Mar 25 '15 at 10:08
0

With PyCharm 5, they added support for Docker. You must have your Docker configured in docker-machine.

If you don't already use docker-machine you can connect to an existing machine using the generic machine engine and ssh into a vagrant VM or to localhost if you aren't running things in a VM. I didn't find a way around the ssh to localhost unfortunately.

I haven't found a way to mount volumes into the docker image they use, to share files with my dev tree, but it might be possible.

Dobes Vandermeer
  • Yo, I doubt you still are unable to mount volumes, but my answer does show how to do that in case you are wondering. – Marc Oct 25 '17 at 16:39