Saurabh, to read in parallel using the standard Spark JDBC data source you do indeed need the numPartitions option, as you supposed.
But you also need to give Spark a clue about how to split the read into multiple parallel SQL statements. So you need some sort of integer partitioning column with a known minimum and maximum value.
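For intuition, here is a self-contained sketch of how Spark turns lowerBound, upperBound, and numPartitions into per-partition WHERE clauses. This is my simplification of the stride logic, not Spark's exact source, and it assumes the range divides evenly; the hypothetical `PartitionSplit` object and `predicates` method are mine:

```scala
// Simplified sketch of Spark's JDBC partition splitting: each partition
// reads a stride-sized slice of [lowerBound, upperBound] on the
// partitioning column; the first slice also picks up NULLs.
object PartitionSplit {
  def predicates(column: String, lower: Long, upper: Long, numPartitions: Int): Seq[String] = {
    val stride = (upper - lower) / numPartitions
    (0 until numPartitions).map { i =>
      val lo = lower + i * stride
      val hi = lo + stride
      if (i == 0) s"$column < $hi OR $column IS NULL"            // open lower end + NULLs
      else if (i == numPartitions - 1) s"$column >= $lo"          // open upper end
      else s"$column >= $lo AND $column < $hi"                    // interior slice
    }
  }
}
```

With lower = 0, upper = 100, numPartitions = 4 this yields four clauses such as `ID < 25 OR ID IS NULL` and `ID >= 75`, which is why Spark needs both bounds and an integer column.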
If your DB2 system is MPP-partitioned, an implicit partitioning already exists, and you can leverage that fact to read each DB2 database partition in parallel:
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:db2://<DB2 server>:<DB2 port>/<dbname>")
  .option("user", "<username>")
  .option("password", "<password>")
  .option("dbtable", "<your table>")
  .option("partitionColumn", "DBPARTITIONNUM(<a column name>)")
  .option("lowerBound", "<lowest partition number>")
  .option("upperBound", "<largest partition number>")
  .option("numPartitions", "<number of partitions>")
  .load()
As you can see, the DBPARTITIONNUM() function serves as the partitioning column here.
If you don't know the partitioning of your DB2 MPP system, here is how to find it out with SQL:
SELECT min(member_number), max(member_number), count(member_number)
FROM TABLE(SYSPROC.DB_MEMBERS())
In case you use multiple partition groups, and different tables could be distributed across different sets of partitions, you can use this SQL to figure out the list of partitions per table:
SELECT t2.DBPARTITIONNUM, t3.HOST_NAME
FROM SYSCAT.TABLESPACES t1
JOIN SYSCAT.TABLES t4 ON t1.TBSPACEID = t4.TBSPACEID
JOIN SYSCAT.DBPARTITIONGROUPDEF t2 ON t1.DBPGNAME = t2.DBPGNAME
JOIN TABLE(SYSPROC.DB_MEMBERS()) t3 ON t2.DBPARTITIONNUM = t3.PARTITION_NUMBER
WHERE t4.TABSCHEMA = '<myschema>'
  AND t4.TABNAME = '<mytab>';
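To tie the two pieces together, here is a hedged sketch (untested, requiring a live DB2 connection) that fetches the member bounds over JDBC first and then feeds them into the partitioned read. The `bounds` subquery alias and the column aliases `lo`, `hi`, `n` are my assumptions, not part of any DB2 catalog:

```scala
// Sketch: discover the DB2 member range first, then use it to drive the
// partitioned read. Connection placeholders are as in the example above.
val bounds = spark.read
  .format("jdbc")
  .option("url", "jdbc:db2://<DB2 server>:<DB2 port>/<dbname>")
  .option("user", "<username>")
  .option("password", "<password>")
  .option("dbtable",
    "(SELECT min(member_number) AS lo, max(member_number) AS hi, " +
    "count(member_number) AS n FROM TABLE(SYSPROC.DB_MEMBERS())) AS bounds")
  .load()
  .first()                                  // single row: lo, hi, n

val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:db2://<DB2 server>:<DB2 port>/<dbname>")
  .option("user", "<username>")
  .option("password", "<password>")
  .option("dbtable", "<your table>")
  .option("partitionColumn", "DBPARTITIONNUM(<a column name>)")
  .option("lowerBound", bounds.get(0).toString)   // min member number
  .option("upperBound", bounds.get(1).toString)   // max member number
  .option("numPartitions", bounds.get(2).toString) // member count
  .load()
```

This avoids hard-coding the bounds, so the read keeps working if the MPP topology changes.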