That link is overkill.
If you're trying to run Mahout on Spark in a REPL environment, all you should need to do is set some env variables.
Have you set SPARK_HOME? (Try echo $SPARK_HOME; on the Windows command prompt the equivalent is echo %SPARK_HOME%.)
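If it isn't set, here's a minimal sketch of checking and setting both variables from the Windows command prompt. The paths are just examples, so point them at wherever you actually unpacked Spark and Mahout:

:: check what's currently set (prints the literal %VAR% text if unset)
echo %SPARK_HOME%
echo %MAHOUT_HOME%
:: set them for this cmd session (use setx instead to persist across sessions)
set SPARK_HOME=C:\spark
set MAHOUT_HOME=C:\mahout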
The other approach would be to use Apache Zeppelin, which imho is a much nicer experience to work with. Tutorial
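For what it's worth, getting a stock Zeppelin download serving notebooks is basically a one-liner; on Windows you'd run it under Cygwin. The Mahout-specific interpreter settings are what the tutorial above covers:

# from the Zeppelin install directory; the notebook UI defaults to http://localhost:8080
bin/zeppelin-daemon.sh start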
I haven't heard of anyone running Mahout on Windows, but it should be straightforward. If/when you get it working, please write a tutorial and we'll post it on the website (I'm a community member). We can help you out; please reach out on the developer mailing list.
Update
If you're having trouble running bin/mahout, you can either install Cygwin (thus creating a Unix-like environment), or you can try the following:
# collect the Mahout jars into a comma-separated list for --jars
export MAHOUT_JARS=$(echo "$MAHOUT_HOME"/*.jar | tr ' ' ',')
# launch spark-shell with the Mahout jars, Mahout's startup script,
# and the Kryo serializer settings Mahout needs
$SPARK_HOME/bin/spark-shell --jars "$MAHOUT_JARS" \
  -i $MAHOUT_HOME/bin/load-shell.scala \
  --conf spark.kryo.referenceTracking=false \
  --conf spark.kryo.registrator=org.apache.mahout.sparkbindings.io.MahoutKryoRegistrator \
  --conf spark.kryoserializer.buffer=32k \
  --conf spark.kryoserializer.buffer.max=600m \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer
This should start the spark-shell with the Mahout jars, the proper Spark config, and the Mahout startup script (which imports the libraries and sets up the Mahout distributed context). But personally, I'd recommend Zeppelin (see the tutorial link above).