I have a short Python script that is supposed to:
- be used by many users and threads concurrently,
- call another program (/usr/bin/ld),
- call this other program no more than x times concurrently (e.g. 2 concurrent calls to ld),
- handle being interrupted / killed.
I managed to achieve most of this using a shared semaphore from the Python module posix_ipc. It handles SIGTERM and Ctrl+C: the semaphore is released. But it doesn't handle SIGKILL: the semaphore stays acquired and has to be reset manually. This means that doing kill -9 on it twice disables it permanently (until the manual fix is applied).
How can I release the semaphore when the script is killed? If that's not possible, is there a different method that achieves a similar result?

I looked into file locks (assuming the number of concurrent uses will always be 2): maybe I can have 2 lock files, try to lock the first one, and if that fails, lock the other one and wait until it becomes available. But I couldn't figure out how to do "try to lock; if somebody else has already locked it, do something else".
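For what it's worth, the non-blocking "try to lock" part seems possible with fcntl.flock and LOCK_NB, which raises BlockingIOError instead of waiting. This is only a sketch under my assumptions (the lock file paths are made up); one attractive property of flock is that the kernel drops the lock when the process dies, even from SIGKILL:

```python
import fcntl

def try_lock(path):
    """Non-blocking lock attempt: return the open file on success, None if busy."""
    f = open(path, 'w')
    try:
        # LOCK_NB: fail immediately with BlockingIOError instead of waiting.
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f          # keep the file object alive to hold the lock
    except BlockingIOError:
        f.close()
        return None       # somebody else holds it; do something else

# Try slot 0 first; if it's taken, fall back to a blocking wait on slot 1.
lock = try_lock('/tmp/ld.lock.0')
if lock is None:
    lock = open('/tmp/ld.lock.1', 'w')
    fcntl.flock(lock, fcntl.LOCK_EX)   # blocking wait on the second slot
```

The lock is released when the file object is closed or the process exits, however it exits.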
Full code of the script:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import posix_ipc
import subprocess
import sys
import signal
SEM_NAME = '/serialize_ld'
MAX_CONCURRENT = 1
PROGRAM = '/usr/bin/ld'
def main():
    import os
    os.umask(0)
    sem = posix_ipc.Semaphore(SEM_NAME, posix_ipc.O_CREAT, mode=0o666, initial_value=MAX_CONCURRENT)
    sem.acquire()

    def release_semaphore(signum, frame):
        print("exiting due to signal " + str(signum))
        # sys.exit() raises SystemExit, so the finally block below
        # releases and closes the semaphore exactly once.
        sys.exit(1)

    # signal.signal() takes a single signal number; OR-ing the numbers
    # together registers the handler for the wrong signal. SIGKILL
    # cannot be caught at all.
    signal.signal(signal.SIGTERM, release_semaphore)
    signal.signal(signal.SIGINT, release_semaphore)

    try:
        subprocess.call([PROGRAM, *sys.argv[1:]])
    finally:
        sem.release()
        sem.close()

if __name__ == "__main__":
    main()
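For comparison, here is a rough flock-based variant of the same idea that I think would survive kill -9, since the kernel releases flock locks when the process exits for any reason. The lock paths, slot count, and helper names are assumptions of mine, not tested code:

```python
# Sketch: serialize calls to a program with flock(2) instead of a POSIX
# semaphore. The kernel drops flock locks on process exit (including
# SIGKILL), so no manual cleanup is needed after a crash.
import fcntl
import subprocess

MAX_CONCURRENT = 2
LOCK_TEMPLATE = '/tmp/serialize_ld.{}.lock'   # hypothetical paths

def acquire_slot():
    """Try each lock file without blocking; if all are busy, block on slot 0."""
    for i in range(MAX_CONCURRENT):
        f = open(LOCK_TEMPLATE.format(i), 'w')
        try:
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return f                       # got a free slot
        except BlockingIOError:
            f.close()                      # slot busy, try the next one
    # All slots busy: wait (blocking) until slot 0 frees up.
    f = open(LOCK_TEMPLATE.format(0), 'w')
    fcntl.flock(f, fcntl.LOCK_EX)
    return f

def run_serialized(cmd):
    """Run cmd while holding one of the MAX_CONCURRENT lock slots."""
    slot = acquire_slot()
    try:
        return subprocess.call(cmd)
    finally:
        slot.close()                       # closing the fd releases the lock
```

One known wart: when every slot is busy, this blocks on slot 0 specifically, so it can keep waiting even though slot 1 has already been freed; retrying the non-blocking pass in a loop with a short sleep would avoid that.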