Environment: Spark 1.6.2; Linux 2.6.x (Red Hat 4.4.x); Hadoop 2.4.x.
I launched a job this morning through spark-submit, but I do not see the files it was supposed to write. I've read a bit about the web UI for monitoring Spark jobs, but at this point my only visibility into what is happening on the Hadoop cluster and HDFS is a bash shell.
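
For reference, the submission looked roughly like the following. I don't have the exact command in front of me, so the main class, JAR name, and output path below are placeholders, and YARN cluster mode is an assumption about our setup:

    # Roughly how the job was submitted; com.example.MyJob, the JAR,
    # and the output path are placeholders, and --master yarn with
    # cluster deploy mode is an assumption.
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --class com.example.MyJob \
      my-job-assembly.jar \
      hdfs:///user/me/output

    # The expected output directory turns up empty (or missing):
    hdfs dfs -ls /user/me/output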
Question: what are the standard ways, from the command line, to get a quick readout on Spark jobs, and on any log trail they might leave behind during or after execution?
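
For example, I'd be happy with something along the lines of the YARN CLI below, if that is even the right tool for Spark jobs on this cluster (the application id is a placeholder):

    # List applications known to YARN (by default -list shows only
    # running apps, so ask for all states):
    yarn application -list -appStates ALL

    # Pull the aggregated logs for one application after it finishes:
    yarn logs -applicationId application_1234567890123_0001

but I don't know whether that is the standard approach, or what else exists.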
Thanks.