I am having trouble understanding the logical architecture of the environment in which I develop with the Scala/Spark shell and Hadoop.
To better describe the logical architecture, I drew a small diagram:
As the figure shows, I have Eclipse installed on my personal PC, and I would like to run Scala scripts from my PC on the remote Hadoop cluster.
Currently I have a VPN connection, and I can run my Scala programs from a shell via PuTTY. In practice, every time I want to launch a Scala script, I transfer the .scala file from my PC to the remote machine with WinSCP and then launch the program directly on the remote machine. Having to transfer the file every single time makes the workflow wasteful.
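For reference, this is roughly what the manual round trip looks like today (the hostname, username, and paths below are placeholders, not my actual setup):

```
# copy the script to the cluster's edge node (this is what WinSCP does for me)
scp MyJob.scala user@edge-node:/home/user/scripts/

# then, inside the PuTTY session on the remote machine, run it:
spark-shell -i /home/user/scripts/MyJob.scala
```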
Now the question: is there a way to launch a script from my personal PC on the remote cluster without going through PuTTY?