To join two DataFrames in Spark you need a common column that exists in both, and since you don't have one, you have to create it. Since version 1.6.0 Spark provides the monotonically_increasing_id() function for exactly this. Note that the generated ids are guaranteed to be unique and increasing, but not consecutive, and they only line up across two DataFrames when both are partitioned identically, which is usually the case for small, locally created data like below. The following code illustrates the approach:
import org.apache.spark.sql.functions._
import spark.implicits._

// Attach a synthetic "id" column to each DataFrame
val df = Seq("a", "b", "c", "d", "e")
  .toDF("val1")
  .withColumn("id", monotonically_increasing_id)

val df2 = Seq(1, 2, 3, 4, 5)
  .toDF("val2")
  .withColumn("id", monotonically_increasing_id)

// Join on the synthetic id and keep only the original columns
df.join(df2, "id").select($"val1", $"val2").show(false)
Output:
+----+----+
|val1|val2|
+----+----+
|a |1 |
|b |2 |
|c |3 |
|d |4 |
|e |5 |
+----+----+
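One caveat: monotonically_increasing_id derives each id from the partition index and the row's position within that partition, so the two DataFrames only receive matching ids when their partitioning is identical. If you can't guarantee that, a safer variant is to index the rows yourself with RDD.zipWithIndex, which assigns truly consecutive indices regardless of partitioning. Here is a minimal sketch of that idea (the helper name addIndex is my own, not a Spark API):

import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.types.{LongType, StructField, StructType}

// Hypothetical helper: append a consecutive "id" column using zipWithIndex
def addIndex(df: DataFrame): DataFrame = {
  val indexed = df.rdd.zipWithIndex.map { case (row, idx) =>
    Row.fromSeq(row.toSeq :+ idx)
  }
  spark.createDataFrame(
    indexed,
    StructType(df.schema.fields :+ StructField("id", LongType, nullable = false)))
}

// Drop the ids created earlier and join on the consecutive ones instead
addIndex(df.drop("id")).join(addIndex(df2.drop("id")), "id")
  .select($"val1", $"val2")
  .show(false)

This costs a round trip through the RDD API, so prefer monotonically_increasing_id when you control the partitioning.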
Good luck