
I have several scripts on my local machine. These scripts run install and configuration commands to set up my Elasticsearch nodes. I have 15 nodes coming, and we definitely do not want to do that by hand.

For now, let's call them Script_A, Script_B, Script_C and Script_D.

Script_A will be the one to initiate the process; it currently contains:

#!/bin/bash

read -p "Enter the hostname of the remote machine: " hostname
echo "Now connecting to $hostname!"

ssh "root@$hostname"

This works fine, obviously, and I can get into any server I need to. My confusion is around running the other scripts remotely. I have read a few other articles/SO questions, but I'm just not understanding the methodology.

I will have a directory on my machine as follows:

Elasticsearch_Installation
|
|=> Scripts
    |
    |=> Script_A, Script_B, etc..

Can I run Script_A, which remotes into the server, then come back to my local machine and run Script_B and so on within the remote server, without moving the files over?

Please let me know if any of this needs to be clarified. I'm fairly new to the Linux environment in general, much less running remote installs from scripts over the network.

NTWorthy
    Just an idea that is not directly related to your question: Have you thought about using one of the popular tools for automating your task instead of building everything by hand? SaltStack, Chef, Ansible, Puppet, etc would maybe make your task easier. – pdu Mar 04 '20 at 13:52
  • If you're looking at scaling up to manage many nodes, then you might want to look into a configuration management system, such as Puppet or Ansible. But although that would probably be a win in the long run, it would be considerably more work up front. – John Bollinger Mar 04 '20 at 13:53
  • Ah I didn't realize those tools existed. I'll definitely look into those. Would Airflow or Autosys fall into those categories as well? I know we use those two but I don't know enough about them myself to know if they would also work. – NTWorthy Mar 04 '20 at 13:56
  • I am not very familiar with either of those products, but a bit of quick research suggests that no, they are not in the same category. – John Bollinger Mar 04 '20 at 13:59

3 Answers


Yes, you can. Use ssh in non-interactive mode; it will be like launching a command in your local environment.

ssh root@$hostname /remote/path/to/script

Nothing will be changed in your local system, you will be at the same point where you launched the ssh command.

NB: this command will ask you for a password. If you want a truly non-interactive flow, set up passwordless login, as explained here: How to ssh to localhost without password?
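For example, assuming you have already generated a key pair locally, the passwordless setup and a couple of non-interactive invocations might look like this sketch (the hostname `es-node-01` and the paths are placeholders):

```shell
#!/bin/bash

# One-time setup: copy your public key to the remote host
# (ssh-copy-id prompts for the password this one last time).
ssh-copy-id root@es-node-01

# Run a script that already exists on the remote machine:
ssh root@es-node-01 /remote/path/to/script

# Or run a *local* script on the remote machine without copying it,
# by feeding it to a remote shell over stdin:
ssh root@es-node-01 bash -s < ./Scripts/Script_B
```

The `bash -s` form addresses the "without moving the files over" part of the question, though a script that sources other local files would still need those files copied over.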

Francesco Gasparetto

You have a larger problem than just setting up many nodes: you have to be concerned with ongoing maintenance and administration of all those nodes, too. This is the space in which configuration management systems such as Puppet, Ansible, and others operate. But these have a learning curve to overcome, and they require some infrastructure of their own. You would probably benefit from one of them in the medium-to-long term, but if your new nodes are coming next week(ish) then you probably want a solution that you can use immediately to get up and going.

Certainly you can ssh into the server to run commands there, including non-interactively.

My confusion is running the other scripts remotely.

Of course, if you want to run your own scripts on the remote machine then they have to be present there, first. But this is not a major problem, for if you have ssh then you also have scp, the secure copy program. It can copy files to the remote machine, something like this:

#!/bin/bash

read -p "Enter the hostname of the remote machine: " hostname
echo "Now connecting to $hostname!"

scp Script_[ABCD] "root@${hostname}:./"
ssh "root@${hostname}" ./Script_A
John Bollinger
  • Holy crap, how did I not think about scp... My lack of Linux is showing! Anyway, you make a good point there about maintenance. We have a few systems in place that will maintain the ES nodes later, so we should be fine there. I'm just in charge of getting them set up. I do appreciate the warning though! The scp way might be best, because then I can also leave evidence of how the node was set up. – NTWorthy Mar 04 '20 at 14:16

I also manage Elasticsearch clusters with multiple nodes. A hack that works for me is using the Terminator terminal emulator and splitting it into multiple windows/panes, one for each ES node. Then you can broadcast the commands you type in one window to all the windows.

This way, you run commands and view their results almost interactively across all nodes in parallel. You can also save this layout of windows in Terminator and then bring up the view quickly with a shortcut.

PS: this approach will only work if you have a small number of nodes, and only for small tasks. The only thing that will scale with the number of nodes and with the number and variety of tasks you need to perform will probably be a config management solution like Puppet or Salt.

Fabric is another interesting project that may be relevant to your use case.

Abhishek Jaisingh
  • Ah Terminator sounds pretty useful. After discussing further with my team, I think we are going the Ansible route.. but I will definitely keep Terminator in mind for future use! – NTWorthy Mar 04 '20 at 18:04