Two approaches come to mind, depending on which architecture makes the most sense for you. Each has pros and cons, so weigh them against your requirements and use your best judgement.
One approach (which it sounds like you're already considering) is starting a Python runtime from within Java. As @Leo Leontev mentioned, this approach has an answer you can find here. The pro of this approach is that you don't need any extra infrastructure. The cons are that you'll need to package a (potentially large) model with your app, that running two runtimes at once is likely to hurt performance and battery life, and that your start-up time could take a hit while the model loads.
Another approach is running a separate Python web server that your app makes requests to as needed. This could be a simple REST API with whatever endpoints you need. If you're building and hosting the model yourself, this can also speed up your app, since the server can keep the model in memory rather than loading it every time a user starts the app. One pro of this approach is extensibility: you can always add more endpoints to your API, including non-ML ones. And if your model is proprietary and you want to protect it from being copied, there's an added security benefit: users never have access to the model itself.
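To make the second approach concrete, here's a minimal sketch of such a server using only the Python standard library (in practice you'd likely reach for Flask or FastAPI). The `/predict` endpoint name, the JSON payload shape, and the `load_model` stub are all assumptions for illustration; the key point is that the model is loaded once at start-up and reused across requests.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def load_model():
    # Hypothetical stand-in for loading your real trained model.
    # It runs once at start-up, so the model stays resident in memory.
    return lambda text: {"length": len(text)}  # placeholder "prediction"

MODEL = load_model()

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        # Read and parse the JSON request body.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Run the (already-loaded) model and return JSON.
        result = MODEL(payload.get("input", ""))
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

def run(host="0.0.0.0", port=8000):
    HTTPServer((host, port), PredictHandler).serve_forever()
```

Your Java app would then just POST JSON to `http://<server>:8000/predict` with a standard HTTP client and parse the response.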
For most use-cases, I'd recommend the second approach.