This Dockerfile takes a Spark download URL, extracts the file name from it, and installs that archive. However, the build fails at the first echo command: the command appears to be handed to /bin/sh as a single string instead of being executed with /bin/sh -c.
How can I get this echo command to run under /bin/sh -c? And is this the correct way to implement it? I plan to reuse the same logic for other installations such as MongoDB, Node, etc.
FROM ubuntu:18.04
ARG SPARK_FILE_LOCATION="http://www.us.apache.org/dist/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz"
RUN CHAR_COUNT=`echo "${SPARK_FILE_LOCATION}" | awk -F"${DELIMITER}" '{print NF-1}'`
RUN echo $CHAR_COUNT
RUN CHAR_COUNT=`expr $CHAR_COUNT + 1`
RUN SPARK_FILE_NAME=`echo ${SPARK_FILE_LOCATION} | cut -f${CHAR_COUNT} -d"/"`
RUN Dir_name=`tar -tzf $SPARK_FILE_NAME | head -1 | cut -f1 -d"/"`
RUN echo $Dir_name
/bin/sh: 1: 'echo http://www.us.apache.org/dist/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz | awk -F/ "{print NF-1}"': not found
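As I understand it, every RUN instruction starts a fresh shell, so a variable set in one RUN is gone by the next one. If that is the root cause, would collapsing the steps into a single RUN like the sketch below be the idiomatic fix? This is just a sketch of what I have in mind, with basename standing in for the awk/cut pipeline:

```dockerfile
FROM ubuntu:18.04
ARG SPARK_FILE_LOCATION="http://www.us.apache.org/dist/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz"
# One RUN, one shell: the variables survive from step to step
RUN SPARK_FILE_NAME=`basename ${SPARK_FILE_LOCATION}` && \
    echo "${SPARK_FILE_NAME}"
```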
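For reference, the same pipeline works when I paste it into a plain shell session, so the extraction logic itself seems fine. (DELIMITER="/" is my assumption here; it matches the -F/ shown in the error output.)

```shell
# Same logic as the Dockerfile, but run in a single shell session.
# DELIMITER="/" is assumed; it is what the error output shows.
SPARK_FILE_LOCATION="http://www.us.apache.org/dist/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz"
DELIMITER="/"
# Count the delimiters in the URL, then take the last field as the file name
CHAR_COUNT=`echo "${SPARK_FILE_LOCATION}" | awk -F"${DELIMITER}" '{print NF-1}'`
CHAR_COUNT=`expr $CHAR_COUNT + 1`
SPARK_FILE_NAME=`echo ${SPARK_FILE_LOCATION} | cut -f${CHAR_COUNT} -d"/"`
echo $SPARK_FILE_NAME   # prints spark-2.4.4-bin-hadoop2.7.tgz
```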