
I am trying to deploy an ASP.NET Core 2.2 application in Kubernetes. This application is a simple web page that needs access to a SQL Server database to display some information. The database is hosted on my local development computer (localhost), and the web application is deployed in a minikube cluster to simulate a production environment where the web application would be deployed in a cloud and access a remote database.

I managed to display my web application by exposing port 80. However, I can't figure out how to make my web application connect to my SQL Server database hosted on my local computer from inside the cluster.

I assume my connection string is correct, since the web application can connect to the SQL Server database when I deploy it on a local IIS server, in a Docker container (docker run) or as a Docker service (docker service create), but not when it is deployed in a Kubernetes cluster. I understand that the cluster is on a different network, so I tried to create a service without a selector as described in this question, but no luck... I even tried changing the connection string's IP address to match the one of the created service, but that failed too.

My firewall is set up to accept inbound connections on port 1433.

My SQL Server database is configured to allow remote access.

Here is the connection string I use:

"Server=172.24.144.1\\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"

And here is the file I use to deploy my web application:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: <private_repo_url>/webapp:db
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
        - containerPort: 1433
      imagePullSecrets:
      - name: gitlab-auth
      volumes:
      - name: secrets
        secret:
          secretName: auth-secrets
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  type: NodePort
  selector:
    app: webapp  
  ports:
  - name: port-80
    port: 80
    targetPort: 80
    nodePort: 30080
  - name: port-443
    port: 443
    targetPort: 443
    nodePort: 30443
---
apiVersion: v1
kind: Service
metadata:
  name: sql-server
  labels:
    app: webapp
spec:
  ports:
    - name: port-1433
      port: 1433
      targetPort: 1433
---
apiVersion: v1  
kind: Endpoints  
metadata: 
  name: sql-server
  labels:
    app: webapp
subsets: 
  - addresses: 
    - ip: 172.24.144.1 # IP of my local computer where SQL Server is running
    ports: 
      - port: 1433
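If I understand the selector-less service correctly, the pod should then be able to reach the database through the service's DNS name instead of a hard-coded IP, so the connection string would look something like this (same placeholder credentials as above):

```
"Server=sql-server,1433;Database=TestWebapp;User Id=user_name;Password=********;"
```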

So I get a deployment named 'webapp' with only one pod, two services named 'webapp' and 'sql-server' and two endpoints also named 'webapp' and 'sql-server'. Here are their details:

> kubectl describe svc webapp
Name:                     webapp
Namespace:                default
Labels:                   app=webapp
Annotations:              <none>
Selector:                 app=webapp
Type:                     NodePort
IP:                       10.108.225.112
Port:                     port-80  80/TCP
TargetPort:               80/TCP
NodePort:                 port-80  30080/TCP
Endpoints:                172.17.0.4:80
Port:                     port-443  443/TCP
TargetPort:               443/TCP
NodePort:                 port-443  30443/TCP
Endpoints:                172.17.0.4:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

> kubectl describe svc sql-server
Name:              sql-server
Namespace:         default
Labels:            app=webapp
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.107.142.32
Port:              port-1433  1433/TCP
TargetPort:        1433/TCP
Endpoints:
Session Affinity:  None
Events:            <none>

> kubectl describe endpoints webapp
Name:         webapp
Namespace:    default
Labels:       app=webapp
Annotations:  <none>
Subsets:
  Addresses:          172.17.0.4
  NotReadyAddresses:  <none>
  Ports:
    Name      Port  Protocol
    ----      ----  --------
    port-443  443   TCP
    port-80   80    TCP

Events:  <none>

> kubectl describe endpoints sql-server
Name:         sql-server
Namespace:    default
Labels:       app=webapp
Annotations:  <none>
Subsets:
  Addresses:          172.24.144.1
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  1433  TCP

Events:  <none>

I am expecting to connect to the SQL Server database but when my application is trying to open the connection I get this error:

SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)

I am new to Kubernetes and not very comfortable with networking, so any help is welcome. The best help would be some advice/tools to debug this, since I don't even know where or when the connection attempt is blocked...
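One idea I had is to test raw TCP connectivity from a throwaway pod inside the cluster (a sketch; I'm assuming the `nicolaka/netshoot` debugging image here, but any image with `nc` and `nslookup` should work, and the IP is the one from my setup):

```shell
# Start a temporary debugging pod and open a shell in it;
# --rm deletes the pod again when the shell exits
kubectl run -it --rm nettest --image=nicolaka/netshoot --restart=Never -- sh

# Inside the pod: does the selector-less service resolve?
nslookup sql-server

# Can we open a TCP connection to the host IP directly?
nc -zv -w 5 172.24.144.1 1433

# And through the service (cluster IP + endpoints)?
nc -zv -w 5 sql-server 1433
```

If the direct IP test already fails, the problem is routing between the cluster and the host rather than anything in the service/endpoints objects.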

Thank you!

Bliamoh
  • When passing the instance's name, you only need one slash, and you don't need to pass the IP address, you can use `.`, which represents localhost; so replace `172.24.144.1\\MyServer` with `.\MyServer`. I suspect that is the problem here. – Thom A Dec 27 '18 at 12:20
  • The double slash is to escape the special character in C#. Still gave it a try by using `@"Server=172.24.144.1\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"` but it failed too. If I use `.\MyServer` it works when my web application runs locally, but not when it is deployed in a Docker service or in a Kubernetes cluster, since the SQL Server database is not in the same network. I may not have been clear enough in my original question, I'll edit it. – Bliamoh Dec 27 '18 at 12:43
  • I would suggest it's not clear, no, as your question title specifically states *"Can not connect to SQL Server database **hosted on localhost from Kubernetes**, how can I debug this?"*. Clearly it is **not** hosted on the localhost. There are hundreds of questions on SO, and Super User, about `A network-related or instance-specific error occurred while establishing a connection to SQL Server`, have you read any of them? Likely they will provide the solution. I *assume* you have configured remote connections as well? – Thom A Dec 27 '18 at 12:47
  • Also just noticed you said (in the comment) *"database is not in the same network"*; so is there no connectivity between these 2 networks? If so, how *are* you expecting the application to connect to a SQL Server instance that is on a network that it has no connectivity to? – Thom A Dec 27 '18 at 13:02

1 Answer


What you consider the IP address of your host is a private IP on an internal network. It is possible that this is the address your machine uses on the "real" network you are connected to. The Kubernetes virtual network is on a different subnet, and thus the IP that you use internally is not accessible from it.

subsets: 
  - addresses: 
    - ip: 172.24.144.1 # IP of my local computer where SQL Server is running
    ports: 
      - port: 1433

You can connect via the DNS entry `host.docker.internal`; see the Docker networking documentation, and in particular the Docker for Windows docs, for more details.

I am not certain if that works in minikube - there used to be different DNS names for the host in the Linux and Windows implementations.

If you want to use the IP (bear in mind it will change eventually), you can probably track it down and ensure it is the one "visible" from within the virtual subnet.
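One way to track it down (a sketch; the exact output depends on the minikube driver and version, and `<gateway-ip>` is a placeholder for the address you find):

```shell
# The address of the minikube node itself, as seen from the host
minikube ip

# From inside the minikube VM, the default gateway is usually
# the host as seen from the cluster side
minikube ssh -- ip route

# If nc is available in the VM, test whether SQL Server answers
# on the gateway address found above
minikube ssh -- nc -zv -w 5 <gateway-ip> 1433
```

Whichever address answers there is the one to put in the connection string (or in the Endpoints object).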

PS: I am using the Kubernetes that ships with Docker now; it seems easier to work with.

Stefan Georgiev
  • I tried, but what happened (and what you supposed) is that the DNS entry __host.docker.internal__ can only be used with Kubernetes for Docker and not with Kubernetes on Minikube. In the end, I want to deploy my web application at a cloud provider, and this web app should be able to access a database hosted on my own physical server. Since these are two different networks, I need to configure my cluster to access the physical server's network, but I'm very bad at it and I don't really know how to do it... I will maybe try to solve my problem by using Kubernetes for Docker as a short-term solution. – Bliamoh Dec 27 '18 at 16:26
  • Minikube is unlikely to get continual support since Docker itself now allows for native Kubernetes support. You can run both of them simultaneously, so you do not need to remove Minikube or anything; just change the context in kubectl. You can still check the IP address of the host that is used inside Kubernetes. I usually just run ipconfig on my host and work it out from there (on Windows). That would work for you until you restart. – Stefan Georgiev Dec 27 '18 at 17:06
  • I tried the Kubernetes that comes with Docker and it worked! What I don't understand now is that whether I set the connection string to my private IP, to the value of __host.docker.internal__ or to any of my virtual switches, it will work... I don't even need the sql-server service and its endpoints shown in the original question anymore... However I will stay with this setup for now, since it seems that my use case will need different configurations with Docker, Minikube or in production. Thanks for your advice! – Bliamoh Dec 28 '18 at 13:12
  • The right approach is to use a "service" as described and handle the connection to it in the environment. That way, the setup of your services does not need to change if you decide to move your SQL/logging/other service. Why it works with your internal IP, I cannot say (perhaps it is not your internal IP but the gateway Docker uses?). I am glad I could help; if you mark the answer as the solution, other people will know it is worth looking into, so please do so if you feel like it. – Stefan Georgiev Dec 28 '18 at 13:17