I have a Flask app with two endpoints, A and B, plus two other scripts, foo.py and bar.py. When endpoint A is called, it launches foo.py with Popen and stores its PID. foo.py in turn launches bar.py with Popen, and bar.py launches yet another process, again with Popen. The process bar.py opens is a server (more specifically, a tf-serving server), so it hangs forever when I call p.wait(). Later, I want to use endpoint B to end the whole process tree that A started.
The situation can be something like:
Flask's endpoints:
import os
import json
import signal
from subprocess import Popen
from flask import current_app
from flask import request, jsonify

@app.route('/A', methods=['GET'])
def a():
    p = Popen(['python', '-u', './foo.py'])
    current_app.config['FOO_PID'] = p.pid
    return jsonify({'message': 'Started successfully'}), 200

@app.route('/B', methods=['GET'])
def b():
    os.kill(current_app.config['FOO_PID'], signal.SIGTERM)
    return jsonify({'message': 'Stopped successfully'}), 200
foo.py:
from subprocess import Popen

p = Popen(['python', '-u', './bar.py', '--serve'])
while True:
    continue
bar.py:
import sys
import subprocess

command = ('tensorflow_model_server --rest_api_port=8501 '
           '--model_name=obj_det --model_base_path=./model')
p = subprocess.Popen(command, shell=True, stderr=sys.stderr, stdout=sys.stdout)
p.wait()
Unfortunately, when I kill foo.py via endpoint B, the process created by bar.py (i.e. the server) does not end. How can I kill the server?
Please consider a solution that is OS agnostic.
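One direction I have been considering, though I am not sure it is the right approach: have each intermediate script trap SIGTERM and terminate its own child before exiting, so the kill from endpoint B cascades down the chain instead of orphaning the server. A minimal self-contained sketch of the idea (the time.sleep child here is just my stand-in for the tf-serving process, and the final shutdown() call simulates endpoint B's os.kill):

```python
import signal
import subprocess
import sys

# Stand-in for the server that bar.py launches: a child process that
# would otherwise outlive its parent (here just a long sleep).
inner = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(60)'])

def shutdown(signum=None, frame=None):
    # Popen.terminate() is cross-platform: it sends SIGTERM on POSIX
    # and calls TerminateProcess on Windows.
    inner.terminate()
    inner.wait(timeout=10)
    if signum is not None:
        sys.exit(0)

# In foo.py/bar.py this handler would run when endpoint B does
# os.kill(pid, signal.SIGTERM), forwarding the termination downward.
signal.signal(signal.SIGTERM, shutdown)

# Simulating endpoint B's kill so the sketch runs on its own:
shutdown()
```

Would chaining handlers like this be reliable, or is there a cleaner way to kill the whole tree from endpoint B directly?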