The main point is already covered in Alex's answer; I just want to add an example:
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[4]").appName("Test-JDBC").getOrCreate()

url = "jdbc:mysql://localhost:3306/stackexchange"
properties = {"user": "devender", "password": "*****", "driver": "com.mysql.jdbc.Driver"}

# Boundary query: fetch MIN(id) and MAX(id) of the split column in a single row
ds = spark.read.jdbc(url, "(select min(id), max(id) from post_history) as ph",
                     properties=properties)
r = ds.head()
minId = r[0]
maxId = r[1]

# Partitioned read: Spark splits [minId, maxId] into 4 id ranges and issues one JDBC query per partition
ds = spark.read.jdbc(url, "(select * from post_history) as ph",
                     properties=properties,
                     numPartitions=4, column="id", lowerBound=minId, upperBound=maxId)

count = ds.count()
print(count)
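As a quick sanity check (not part of the gist), you can confirm that Spark really split the read into four parallel JDBC partitions:

# Should print 4, matching numPartitions above
print(ds.rdd.getNumPartitions())
# Rows fetched by each partition (note: this triggers a full read)
print(ds.rdd.glom().map(len).collect())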
For more details, see https://gist.github.com/devender-yadav/5c4328918602b7910ba883e18b68fd87
Note: Sqoop automatically executes a boundary query to fetch the MIN and MAX values for the split-by column (that query can also be overridden).
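For intuition, this is roughly how Spark turns lowerBound/upperBound/numPartitions into per-partition queries. It is only a simplified sketch; Spark's actual logic (JDBCRelation.columnPartition) also sends NULL values to the first partition and guards against overflow and skewed bounds:

def partition_where_clauses(column, lower, upper, num_partitions):
    # Split [lower, upper) into equal strides; each clause becomes the WHERE
    # condition of one parallel JDBC query against MySQL.
    stride = (upper - lower) // num_partitions
    clauses, bound = [], lower
    for i in range(num_partitions):
        lower_clause = f"{column} >= {bound}" if i > 0 else None
        bound += stride
        upper_clause = f"{column} < {bound}" if i < num_partitions - 1 else None
        clauses.append(" AND ".join(c for c in (lower_clause, upper_clause) if c))
    return clauses

# e.g. with bounds 0 and 100 and 4 partitions:
# ['id < 25', 'id >= 25 AND id < 50', 'id >= 50 AND id < 75', 'id >= 75']
print(partition_where_clauses("id", 0, 100, 4))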