It is most certainly not, for a very simple and fundamental reason: AUC scores (whether of the ROC or the PR curve) actually give the performance of the model averaged over a whole range of thresholds. Looking closely at the linked document, you'll notice the following regarding the PR AUC (emphasis in the original):
> You can also think of PR AUC as the average of precision scores calculated for each recall threshold. You can also adjust this definition to suit your business needs by choosing/clipping recall thresholds if needed.
and that you may use PR AUC
> when you want to choose the threshold that fits the business problem
The moment you choose any specific threshold (for precision, recall, F1, etc.), you have left the realm of the AUC scores (ROC or PR) altogether: you are at a single point on the curve, and the area under the curve, being an average over all thresholds, is no longer useful (or even meaningful).
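To make this concrete, here is a minimal sketch with scikit-learn on synthetic data (the dataset, the model, and the 0.5 operating point are all arbitrary illustrative choices, not anything from the linked document): the PR AUC is one number summarizing the whole curve, while committing to a threshold collapses everything to a single (precision, recall) point.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (average_precision_score, precision_recall_curve,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split

# synthetic imbalanced data (purely illustrative)
X, y = make_classification(n_samples=2000, weights=[0.9], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

probs = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# PR AUC: a single number summarizing the *whole* curve, i.e. all thresholds
print("PR AUC:", average_precision_score(y_te, probs))

# the curve itself: one (precision, recall) pair per candidate threshold
prec, rec, thresholds = precision_recall_curve(y_te, probs)
print("threshold points on the curve:", len(thresholds))

# committing to a specific threshold collapses all of this to a single point
t = 0.5  # arbitrary operating point, for illustration only
y_pred = (probs >= t).astype(int)
print("precision @ 0.5:", precision_score(y_te, y_pred))
print("recall    @ 0.5:", recall_score(y_te, y_pred))
```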
I have argued elsewhere that AUC scores can be misleading, in the sense that most people take them to measure something other than what they actually measure, namely the performance of the model over a whole range of thresholds; whereas what one is actually going to deploy (and whose performance one is thus interested in) will necessarily involve a specific threshold.
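One way to see how this can mislead, sketched below with synthetic scores (the two "models" and the 0.5 deployment threshold are hypothetical, chosen only for the demonstration): ROC AUC depends only on the *ranking* of the scores, so any strictly increasing transform of them leaves the AUC unchanged, while the metrics at a fixed deployment threshold can change dramatically.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_score, recall_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=5000)

# "model A": scores loosely correlated with the label (purely illustrative)
scores_a = 0.3 * y + rng.uniform(0.0, 0.7, size=y.size)
# "model B": a strictly increasing transform of A's scores -- identical ranking
scores_b = scores_a ** 2

# identical AUCs, because AUC depends only on the ranking of the scores
print("ROC AUC A:", roc_auc_score(y, scores_a))
print("ROC AUC B:", roc_auc_score(y, scores_b))

# yet at a fixed deployment threshold the two behave very differently
for name, s in (("A", scores_a), ("B", scores_b)):
    pred = (s >= 0.5).astype(int)
    print(name, "precision:", precision_score(y, pred),
          "recall:", recall_score(y, pred))
```

Identical AUC, very different deployed behavior: exactly the gap between "performance over all thresholds" and "performance at the threshold you ship".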