
I created a .NET Core application with Linux Docker support using Visual Studio 2017 on a Windows 10 PC with Docker for Windows installed. I can run it (a console application) with the following command:

docker run MyApp

I have another Linux machine with Docker installed. How do I publish the .NET Core application to the Linux machine? I need to publish and run the dockerized application there.

The Linux machine has the following Docker packages installed:

$ sudo yum list installed "*docker*"
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Installed Packages
docker-engine.x86_64                                17.05.0.ce-1.el7.centos                         @dockerrepo
docker-engine-selinux.noarch                        17.05.0.ce-1.el7.centos                         @dockerrepo
ca9163d9
  • Is there a reason why you cannot use the docker hub? – Yamuk Aug 14 '18 at 17:47
  • We just started to use docker and need some quick demo. – ca9163d9 Aug 14 '18 at 18:03
  • Oh god, we tried too, but sadly there is no "quick demo"; you basically have to create the entire CI infrastructure, so any "demo" would basically be the working solution. Remember, while Docker makes many things easier, it is not magic. With no container registry there is no other way than: connect to your server, pull from the repo, rebuild the image and then run it. You can create scripts to make it easier, but the same steps are always required – rekiem87 Aug 14 '18 at 18:07
  • I found this question https://stackoverflow.com/q/23935141/825920 – ca9163d9 Aug 14 '18 at 18:10
  • I strongly recommend using Docker Hub; it gives you one private repo for free and unlimited public repos. This way you can push to Docker Hub, and pull from there onto the Docker machine – Yamuk Aug 15 '18 at 15:13

1 Answer


There are many ways to do this; just search for any CI/CD tool.

The easiest way is to do it manually: connect to your Linux server, do a git pull of the code, and then run the same commands you run locally.
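The manual flow can be sketched as a small script; a minimal sketch, assuming SSH access to the server and that the repo is already cloned there (the host, path and image names are placeholders, not from the original post):

```shell
#!/bin/sh
# Manual deploy: pull the sources on the server and rebuild there.
SERVER="user@my-linux-server"   # assumption: SSH access to the Linux box
REPO_DIR="src/MyApp"            # assumption: the repo is already cloned there

manual_deploy() {
  ssh "$SERVER" "cd $REPO_DIR \
    && git pull \
    && docker build -t myapp . \
    && docker run -d --name myapp myapp"
}
```

The function only wraps the same `docker build`/`docker run` commands you already use locally, executed on the remote host over SSH.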

Another option is to push your Docker image to a container registry, then pull it on your Docker server and you are ready to go.
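The registry-based flow splits into a push side (your dev machine) and a pull side (the server); a minimal sketch, where the registry host and image name are placeholder assumptions:

```shell
#!/bin/sh
# Registry flow: push from the dev machine, pull on the server.
REGISTRY="registry.example.com"   # assumption: any registry (Docker Hub, GitLab, ...)
IMAGE="$REGISTRY/myapp:1.0"

push_from_dev_machine() {
  docker tag myapp "$IMAGE"   # retag the locally built image for the registry
  docker login "$REGISTRY"    # prompts for registry credentials
  docker push "$IMAGE"
}

pull_on_server() {
  docker pull "$IMAGE"
  docker run -d --name myapp "$IMAGE"
}
```

The advantage over the manual flow is that the server never needs the sources or the SDK, only the finished image.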

Edit:

You should really take a look at a CI service. For example, in our environment we use GitLab: when we push to master, a .gitlab-ci.yml builds the project and then pushes the image:

image: docker:latest
services:
- docker:dind

stages:
- build

api:
  variables:
    IMAGE_NAME: git.lagersoft.com:4567/gumbo/vtae/api:${CI_BUILD_REF}
  stage: build
  only:
    - master
  script:
    - docker build -t ${IMAGE_NAME} -f vtae.api/Dockerfile .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN git.lagersoft.com:4567
    - docker push ${IMAGE_NAME}

With this, we only need to pull the latest version on our server.
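Since the CI job above tags the image with the commit ref (`${CI_BUILD_REF}`), the server-side pull targets that exact build; a sketch, where the ref value is a placeholder for the SHA the pipeline printed:

```shell
#!/bin/sh
# Pull the exact image the CI pipeline just pushed and restart the container.
REF="abc123def"   # assumption: the commit SHA used as the image tag by CI
IMAGE="git.lagersoft.com:4567/gumbo/vtae/api:$REF"

update_api() {
  docker login git.lagersoft.com:4567       # registry credentials required
  docker pull "$IMAGE"
  docker stop api 2>/dev/null || true       # stop/remove the old container if present
  docker rm api 2>/dev/null || true
  docker run -d --name api "$IMAGE"
}
```

Tagging by commit ref means every deploy is traceable back to an exact commit, and rolling back is just pulling an older tag.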

It's worth noting that Docker by itself does not handle the publication part, so you need to do it manually or with some tool (any CI tool such as GitLab, Jenkins, CircleCI, AWS CodePipeline...). If you are just starting to learn, I would recommend doing it manually first and then integrating a CI tool.

Edit 2

About the Visual Studio tooling: I would not recommend using it for anything other than local development, since it only works on Windows and only inside Visual Studio (Rider added integration only very recently). So, to deploy to a Linux environment we use our own Dockerfile and docker-compose files. They are based on the defaults anyway, and look something like this:

# Runtime-only image used by the final stage
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80

# SDK image with the build tools
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
# Copy the project files first so the restore layer is cached
COPY lagersoft.common/lagersoft.common.csproj lagersoft.common/
COPY vtae.redirect/vtae.redirect.csproj vtae.redirect/
COPY vtae.data/vtae.data.csproj vtae.data/
COPY vtae.common/vtae.common.csproj vtae.common/
RUN dotnet restore vtae.redirect/vtae.redirect.csproj
COPY . .
WORKDIR /src/vtae.redirect
RUN dotnet build vtae.redirect.csproj -c Release -o /app

FROM build AS publish
RUN dotnet publish vtae.redirect.csproj -c Release -o /app

# Copy only the published output onto the lean runtime image
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "vtae.redirect.dll"]

This Dockerfile copies all the related projects (I hate the copying part, but it is the same thing Microsoft does in their default file), then builds and publishes the app. On the other hand, we have a docker-compose file to add some services (these files must be in the solution folder so they can access all the related projects):

version: '3.4'

services:  
  vtae.redirect.redis:
    image: redis
    volumes:
      - "./volumes/redirect/redis/data:/data"
    container_name: vtae.redirect.redis

  vtae.redirect:
    image: vtae.redirect
    depends_on:
      - vtae.redirect.redis
    build:
      context: .
      dockerfile: vtae.redirect/Dockerfile
    ports: 
      - "8080:80"
    volumes:
      - "./volumes/redirect/data:/data"
    container_name: vtae.redirect
    entrypoint: dotnet /app/vtae.redirect.dll

With these parts in place, all that is left is to commit, then pull on the server and run the docker-compose up command to run our app (you could do it from the Dockerfile directly, but it is easier and more manageable with docker-compose).
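The commit-then-pull cycle on the server can be sketched as follows; a minimal sketch, assuming the solution folder is cloned on the server (the path is a placeholder, the service name mirrors the compose file above):

```shell
#!/bin/sh
# Update cycle on the server for the compose setup above.
APP_DIR="$HOME/vtae"        # assumption: solution folder cloned here
SERVICE="vtae.redirect"     # service name from the compose file

update() {
  cd "$APP_DIR" || return 1
  git pull                          # latest sources, Dockerfile and compose changes
  docker-compose build "$SERVICE"   # rebuild only the app image
  docker-compose up -d              # recreate changed containers; redis keeps its volume
}
```

Because the redis data lives in a bind-mounted volume, `docker-compose up -d` can recreate the app container without losing state.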

Edit 3

To make the deployment on the server we use two tools.

  • First, the GitLab CI runs after the commit is done
  • It makes the build specified in the Dockerfile and pushes it to our GitLab container registry; the same would apply to the container registry of Amazon, Google, Azure... etc.
  • Then it makes a POST request to the server in production; this server is running a special tool on a separate port
  • The server receives the POST request and validates it; for this we use this tool (a friend is the repo owner)
  • The script receives the request, checks the login, and if it is valid, simply pulls from our GitLab container registry and runs docker-compose up
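The script the tool executes after validating the request could look roughly like this; a sketch only, where the token check and registry are illustrative assumptions, not the actual tool's code:

```shell
#!/bin/sh
# Sketch of a deploy script a webhook tool could run after a POST request.
DEPLOY_TOKEN="change-me"   # assumption: shared secret checked by the script

deploy() {
  if [ "$1" != "$DEPLOY_TOKEN" ]; then
    echo "invalid token"
    return 1
  fi
  docker login git.lagersoft.com:4567   # registry credentials
  docker-compose pull                   # fetch the image pushed by CI
  docker-compose up -d                  # restart with the new image
}
```

Only after the token check passes does the script touch Docker, so a stray request to the port cannot trigger a deploy.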

Notes

The tool is not perfect; we are moving from plain Docker to Kubernetes, where you can connect to your cluster directly from your machine or from a CI integration and do the deploys directly. No matter which solution you choose, I recommend you start looking at how Kubernetes can help you. Sadly it is one more layer to learn, but it is very promising: you will be able to publish to almost any cloud or bare metal painlessly, with fallbacks, scaling and other stuff.

Also, if you do not want to or cannot use a container registry (I strongly recommend the registry way), you can use the same tool: in the .sh that it executes, just do a git pull and then a docker build or docker-compose. The simplest scenario would be to create a script yourself where you ssh to the server, upload the files as a zip and then run it on the server. Remember, Ubuntu is in the Microsoft Store and could run such a script, but the other solutions are more independent and scalable, so make your choice!
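That simplest registry-free scenario could be sketched like this; the host name, zip name and excluded folders are placeholder assumptions:

```shell
#!/bin/sh
# Registry-free fallback: zip the sources, copy over SSH, build on the server.
SERVER="user@my-linux-server"   # assumption: SSH access to the server

zip_deploy() {
  zip -r app.zip . -x '*/bin/*' -x '*/obj/*'   # skip local build output
  scp app.zip "$SERVER:app.zip"
  ssh "$SERVER" "unzip -o app.zip -d app \
    && cd app \
    && docker build -t myapp . \
    && docker run -d --name myapp myapp"
}
```

This trades the registry for a full rebuild on every deploy, which is fine for a quick demo but slower and less reproducible than pulling a prebuilt image.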

kayess
rekiem87
  • The application was built using the Windows only tool - Visual-Studio and it's not available on Linux. For another option, I don't have any container registry. – ca9163d9 Aug 13 '18 at 21:55
  • Added my comments about the Microsoft tool; it only adds debug capabilities, so you are not losing anything in production – rekiem87 Aug 13 '18 at 22:08
  • Curiously, how does the CI service copy the image to the production server after building it? Right now I need to figure out a way to run the image on the Linux server. – ca9163d9 Aug 14 '18 at 12:37
  • Edited my comment to add the tool that we use in the server to update the application once that is in the container registry – rekiem87 Aug 14 '18 at 17:21
  • +1 for setting up a docker registry. It's worth it, and the best way to have the _same_ image used in multiple locations. You can run a registry easily as a container: https://docs.docker.com/registry/deploying/ – RQDQ Aug 15 '18 at 13:20