
I'm trying to use multiple Kubernetes clouds and have just a single Jenkins. I was able to get both Kubernetes clusters registered as clouds and all the login checks work. When I run a build on the cluster where Jenkins is, it works fine, pulls the code, builds an image and all that.

However, when I change the label to target my second cluster (which doesn't have Jenkins on it), the build never seems to run there and always runs on the cluster that's local to Jenkins.

I'm sure I'm missing something obvious, but I don't see what it is.

  • What are you using? The Jenkins Kubernetes plugin? – Rico Mar 06 '19 at 21:11
  • Hi, yes I'm using the Jenkins Kubernetes plugin. To get one Jenkins to talk to multiple Kubernetes clusters, it's a matter of configuring a second cloud and pointing your Jenkinsfile to that second cloud. You'll also need to add the keys and such, but it does work. – Steve Apr 25 '19 at 12:57

2 Answers


What I did to make it work with two Kubernetes clusters and one Jenkins was to use the cloud directive, like this:

stage('do something') {
    agent {
        kubernetes {
            // <Clustername> is configured in Jenkins -> Manage Jenkins -> Configure System, in the Kubernetes plugin section
            cloud '<Clustername>'
            label "<TheLabelYouGave>"
            containerTemplate {
                name 'maven'
                image 'maven:3.3.9-jdk-8-alpine'
                ttyEnabled true
                command 'cat'
            }
        }
    }
    steps {
        script {
            echo "${pom.version} ======================================================="
        }
    }
}

You can see this in the examples in the documentation of the Kubernetes plugin for Jenkins repo on GitHub: https://github.com/jenkinsci/kubernetes-plugin (press Ctrl+F and search for cloud 'kubernetes').
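The same cloud selection also works in a scripted pipeline via the podTemplate step's cloud parameter. A minimal sketch, where 'second-cluster' and 'second-cluster-agents' are placeholder names you would replace with the cloud name and label from your own Jenkins configuration:

```groovy
// Scripted-pipeline sketch: run a build on a specific cloud.
// 'second-cluster' must match a cloud name configured in
// Manage Jenkins -> Configure System; the label is your own choice.
podTemplate(
    cloud: 'second-cluster',
    label: 'second-cluster-agents',
    containers: [
        containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine',
                          ttyEnabled: true, command: 'cat')
    ]
) {
    node('second-cluster-agents') {   // schedules the agent pod on the selected cloud
        container('maven') {
            sh 'mvn -version'
        }
    }
}
```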

mfuxi

You might be missing the service account token from the other cluster. If you test the cluster connection in the Kubernetes config under Jenkins -> Manage Jenkins, what result do you get?

You may need to use:

kubectl describe sa <service account>

You will then see the name of a token secret; copy that name and use this command:

kubectl describe secret <token>

You can then copy that long token and store it in Jenkins credentials as a Secret text credential (e.g. named kubernetes token or just token).
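The same lookup can also be scripted non-interactively. A sketch, assuming a service account named jenkins-sa (a placeholder; substitute your own names):

```shell
# Hypothetical sketch: pull the service-account token without interactive steps.
# "jenkins-sa" is a placeholder for your own service account name.
#
#   SECRET=$(kubectl get sa jenkins-sa -o jsonpath='{.secrets[0].name}')
#   kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode
#
# Secret data is stored base64-encoded, which is why the decode step is needed:
TOKEN_B64=$(printf 'example-token-value' | base64)
printf '%s' "$TOKEN_B64" | base64 --decode   # prints: example-token-value
```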

If you go to the cloud config in Jenkins, it will then give you "Success":

Jenkins cloud config result example

I'm now sitting with a new issue that you might also face soon. I've spent almost 2 working weeks on this.

There are 2 options. I can't give you a definitive answer as of yet, but I can give options and explanations.

CONTEXT

I have 2 Kubernetes clusters called FS and TC. The Jenkins I am using runs on TC.

The slaves do deploy in FS from the TC Jenkins; however, the slaves in FS would not connect to the Jenkins master in TC.

The slaves make use of a TCP connection that requires a HOST and PORT. However, the exposed JNLP service on TC is HTTP (http://jenkins-jnlp.tc.com/).

Even if I use

  • HOST: jenkins-jnlp.tc.com
  • PORT: 80

It will still complain that it's getting serial data instead of binary data.

The complaint

For TC I made use of the local jnlp service HOST (jenkins-jnlp.svc.cluster.local) with PORT (50000). This works well for our current TC environment.

SOLUTIONS

Solution #1

A possible solution would involve having an HTTP-to-TCP relay container running between the slave and master on FS. It would be linked up to the HTTP URL in TC (http://jenkins-jnlp.tc.com/), encapsulating the HTTP connection to TCP (localhost:50000) and vice versa.

The slaves on FS can then connect to the TC master using that TCP port being exposed from that container in the middle.

Diagram to understand better

Solution #2

People kept complaining, and around 20 Feb 2020 new functionality was added to Jenkins: WebSocket support, which lets the agent traffic run over HTTP and converts it to TCP on the slave.

I did set it up, but it seems too new and is not working for me: even though the slave on FS says it's connected, it's still not properly communicating with the Jenkins master on TC, which still sees the agent/slave pod as offline.

Here are the links I used

  1. Original post
  2. Update note on Jenkins
  3. Details on Jenkins WebSocket
  4. Jenkins inbound-agent github
  5. DockerHub jenkins-inbound-agent

CONCLUSION

After a lot of fiddling, research and banging my head on the wall, I think the only solution is solution #1. The problem with solution #1 is that a simple tool or service to encapsulate HTTP to TCP and back does not exist (that I know of; I searched for days). This means I'll have to make one myself.

Solution #2 is still too new, with zero to no docs to help me out or make setting it up easy, and it seems to come with some bugs. It seems the only way to fix these bugs would be to modify both Jenkins and the JNLP agent's code, and I have no idea where to even start.