I'm using the libx264 library to compress video data to H.264.
I used the default so that the library creates as many (or as few) threads as it sees fit:
param.i_threads = X264_THREADS_AUTO;
This works great on my server, which has 64 logical processors (2 CPUs with 16 cores each, plus Hyper-Threading). There it actually uses only about 5 threads.
However, the embedded computer running the software has only 4 CPUs. It's a Xeon, so raw performance is not really an issue, but somehow the load prevents the USB port from functioning. We receive data over that USB port, and when all 4 CPUs are running at about 100%, the libx264 code pretty much takes over the whole computer.
I'm thinking of two solutions. Either cap the number of threads at 3:
param.i_threads = 3;
or give those libx264 threads a (much) higher nice value so the other processes running on that computer don't get starved (i.e. the CPU is shared more fairly; those other processes use little CPU, usually well under 10%).
However, I don't control how the libx264 library creates its threads, and I was wondering whether it would work to change the nice value before calling the libx264 functions that create the threads, so that those threads end up with that nice value. Something like this:
nice(10);
...call the libx264 functions that create the threads...
nice(-10);  /* note: nice() takes an increment, so nice(0) would leave the
             * value at +10 -- and lowering it back needs CAP_SYS_NICE anyway */
Will that make those threads run at a nice value of +10? From what I can see in the pthread_create()
man page, it doesn't clearly say whether a new thread inherits its creator's nice value...
Note 1: I'm aware that the real issue may well be that the USB port is fighting the video capture card for DMA... If that is the case, we obviously won't solve anything just by changing process priorities, but I'd like to try this soft solution first.
Although I could move the USB device to another computer, the data would then come in over the network, which could well hit a similar hardware conflict.
Note 2: I don't want to recompile libx264 and patch its code. That's way outside the scope of my project.