I have two separate processes, a client and a server, which communicate through shared memory.
The client begins a request by first writing the input value into its slot in shared memory and then flipping a bit to indicate that the input is valid and that the result has not yet been computed.
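For context, the client side does roughly the following (a minimal sketch; the flag values and the submit_request name are my own, but int1 and metadata correspond to the pointers used in the server loop below):

/* Flag bits in *metadata; the exact values here are placeholders. */
#define CLIENT_REQUEST_VALID  0x1
#define SERVER_RESPONSE_VALID 0x2
#define SERVER_KILL           0x4

/* Publish a new input value and mark the request as valid / not yet answered. */
void submit_request(volatile unsigned *metadata, volatile int *int1, int input)
{
    *int1 = input;                         /* write the input value              */
    *metadata &= ~SERVER_RESPONSE_VALID;   /* result for this input not ready    */
    *metadata |= CLIENT_REQUEST_VALID;     /* tell the server the input is valid */
}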
The server waits either for a kill signal or for new data to come in. Right now the relevant server code looks like this:
while (!((*metadata) & SERVER_KILL)) {
    // No kill signal yet: check whether a new, unanswered request is pending.
    bool valid_client = ((*metadata) & CLIENT_REQUEST_VALID) == CLIENT_REQUEST_VALID;
    bool not_already_finished = ((*metadata) & SERVER_RESPONSE_VALID) != SERVER_RESPONSE_VALID;
    if (valid_client && not_already_finished) {
        // Place the square root of the input in shared memory, then set the
        // metadata bit to indicate the result has been computed.
        *int2 = sqrt(*int1);
        *metadata = *metadata | SERVER_RESPONSE_VALID;
    }
}
The problem is that this loop busy-waits: it keeps a hardware thread at essentially 100% even when no request is pending.
Most solutions to this problem assume a multithreaded application, where condition variables and mutexes can be used to block the server until work arrives. Since these are two separate single-threaded processes, that solution is not applicable. Is there a lightweight way to wait for these memory locations to change without completely occupying a hardware thread?
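For reference, this is roughly the multithreaded pattern I mean (a minimal single-process sketch with two threads sharing a flag; all names here are made up). pthread_cond_wait puts the waiting thread to sleep until it is signalled, which is the behaviour I would like across processes:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool request_pending = false;

/* Server thread: sleeps on the condition variable instead of spinning. */
void *server_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!request_pending)
        pthread_cond_wait(&cond, &lock);   /* releases the lock and sleeps */
    /* ... compute the response here ... */
    request_pending = false;
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Client thread: publishes a request and wakes the server. */
void *client_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    request_pending = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}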