Your question is very broad.
There are a number of options available, and the best one for you depends on your specific needs and environment. You need to provide more details about the kind of data you want your main thread to update on the runner thread, and how often the updates are expected. You've stated that the runner thread is time-critical, which means you want to keep locking to a bare minimum. (The main source of slowdowns in multi-threaded applications is different threads competing for the same locks, referred to as lock contention.) As I said: it depends on your needs.
PS: Must all data set by the main thread be used, or is it acceptable for the runner thread to simply use the latest available data, ignoring however many intermediate values may have been assigned in between? The answer to this question leads to fundamentally different options.
For example, atomic updates can be performed without any locking. Look at `TThread.Terminated`: it's a simple value, and you do not need any locks to update it. There are no problematic race conditions because the processor reads or writes the value as a single atomic unit.
Even if the main thread updates the value at the same moment your `while not Terminated` loop is reading it, there won't be a problem. The update will either happen just before the read (resulting in the thread exiting the loop) or just after (resulting in one more iteration before the loop exits at the next read).
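As a rough illustration (the `FPaused` flag and its meaning are made up for this example), a single Boolean written by the main thread and polled by the runner needs no lock:

```pascal
uses
  System.Classes;

type
  TRunnerThread = class(TThread)
  private
    // Written by the main thread, read by the runner thread. A single
    // aligned Boolean is read/written as one unit, so no lock is needed
    // when the runner only cares about the latest value.
    FPaused: Boolean;
  protected
    procedure Execute; override;
  public
    property Paused: Boolean read FPaused write FPaused;
  end;

procedure TRunnerThread.Execute;
begin
  while not Terminated do   // Terminated is itself such a simple flag
  begin
    if not FPaused then
    begin
      // ... time-critical work ...
    end;
  end;
end;
```

(As written this loop spins when paused; the comments on busy loops versus wait-states further down apply.)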
PS: It's important to be aware that setting string values is not an atomic operation.
Now, you've indicated the main thread doesn't need to read the runner thread's data at all, but I'll use that possibility as a contrasting example. If you needed to increment an integer value on the runner thread while the main thread also uses it, that would need protection, because incrementing a value is a multi-step operation:
- Read the value.
- Perform a calculation on the value that was read.
- Write a new value.
If the runner thread and main thread are using the value at the same time, you may get inconsistent results.
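As a sketch (`SharedCounter` is a made-up name), a plain `Inc` is unsafe when two threads touch the value, whereas an interlocked increment performs all three steps as one indivisible operation:

```pascal
uses
  System.SyncObjs;

var
  SharedCounter: Integer = 0;

procedure UnsafeBump;
begin
  Inc(SharedCounter);  // read + add + write: another thread can interleave
end;

procedure SafeBump;
begin
  // The read-modify-write happens as one indivisible operation.
  // (Recent Delphi versions also provide the AtomicIncrement intrinsic.)
  TInterlocked.Increment(SharedCounter);
end;
```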
Another situation that can cause problems is where data is made up of a number of values that interact with each other. E.g. `NoOfUnits` and `MassPerUnit` are used in combination to determine `TotalMass`. Updating these values independently can result in race conditions causing inconsistent behaviour.
Silver Warrior's answer provides a technique to protect multiple values. Though be aware that there are some serious errors in the current version of that answer.
Note that if you keep your data encapsulated in a separate object, it is possible to update your runner thread's data without any locks, because you can update a pointer value atomically. (VERY NB: There are a number of special rules you'd have to follow, and you'd need to figure out how to avoid memory leaks... But that's detail for a more specific question.)
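Purely as a sketch of the direction (names invented, and the lifetime management deliberately left out, since that is the hard part mentioned above): the main thread builds a complete new object and publishes it with a single atomic pointer exchange, while the runner grabs the current pointer once per iteration.

```pascal
uses
  System.SyncObjs;

type
  // A complete, never-modified-after-publication snapshot of the data.
  TSettingsSnapshot = class
  public
    NoOfUnits: Integer;
    MassPerUnit: Double;
  end;

var
  CurrentSettings: Pointer;  // actually holds a TSettingsSnapshot reference

// Main thread: build the whole snapshot first, then publish it in one step.
procedure PublishSettings(ANoOfUnits: Integer; AMassPerUnit: Double);
var
  NewSnap, OldSnap: TSettingsSnapshot;
begin
  NewSnap := TSettingsSnapshot.Create;
  NewSnap.NoOfUnits := ANoOfUnits;
  NewSnap.MassPerUnit := AMassPerUnit;
  OldSnap := TSettingsSnapshot(TInterlocked.Exchange(CurrentSettings, Pointer(NewSnap)));
  // OldSnap is NOT freed here: the runner might still be using it.
  // Deciding when it is safe to dispose of old snapshots is exactly the
  // memory-management problem referred to above.
end;

// Runner thread: one atomic pointer read per iteration.
procedure UseLatestSettings;
var
  Snap: TSettingsSnapshot;
begin
  Snap := TSettingsSnapshot(CurrentSettings);
  if Snap <> nil then
  begin
    // ... work with Snap.NoOfUnits and Snap.MassPerUnit ...
  end;
end;
```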
Another option is to implement your runner thread around a message queue. I.e. when the main thread wants to change values, it sends a message to the runner thread, and the runner only processes the "change values" instruction when it's safe to do so. (Again, the feasibility of this depends on your specific requirements; a rough sketch appears a little further down, combined with the wait-state loop described below.)
And as a final note, there are some additional concerns over and above protecting data from race conditions. Exactly how time-critical is your runner thread? How much processing does it do? Does it need to respond quickly to certain events? If so, what events?
Answers to these questions are important in understanding the ideal structure of your runner thread's main loop. For example, a "busy loop" (a loop that might iterate without doing anything just to ensure it never pauses) would make the thread highly responsive, but starve the machine of resources and slow it down as a whole. By comparison, message queues would typically run a loop processing messages until there are none left, then put the thread into a "wait-state" until the next message is received.
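Tying the message-queue idea and the wait-state loop together, here is a rough sketch built from a critical-section-guarded `TQueue<T>` plus a `TEvent`. It assumes the main thread only ever sends complete `TUpdateMsg` records (a type invented for this example) and the runner needs no replies:

```pascal
uses
  System.Classes, System.SyncObjs, System.Generics.Collections;

type
  // Hypothetical message describing one batch of changed values.
  TUpdateMsg = record
    NoOfUnits: Integer;
    MassPerUnit: Double;
  end;

  TRunnerThread = class(TThread)
  private
    FLock: TCriticalSection;        // protects FPending only
    FWakeUp: TEvent;                // signalled when a message arrives
    FPending: TQueue<TUpdateMsg>;
    // Only the runner thread itself touches these, so they need no lock.
    FNoOfUnits: Integer;
    FMassPerUnit: Double;
  protected
    procedure Execute; override;
  public
    constructor Create;
    destructor Destroy; override;
    // Called from the main thread: queue the change, then wake the runner.
    procedure PostUpdate(const AMsg: TUpdateMsg);
  end;

constructor TRunnerThread.Create;
begin
  FLock := TCriticalSection.Create;
  FWakeUp := TEvent.Create(nil, False, False, ''); // auto-reset, unsignalled
  FPending := TQueue<TUpdateMsg>.Create;
  inherited Create(False);
end;

destructor TRunnerThread.Destroy;
begin
  inherited;        // terminates and waits for Execute to finish first
  FPending.Free;
  FWakeUp.Free;
  FLock.Free;
end;

procedure TRunnerThread.PostUpdate(const AMsg: TUpdateMsg);
begin
  FLock.Enter;
  try
    FPending.Enqueue(AMsg);
  finally
    FLock.Leave;
  end;
  FWakeUp.SetEvent;
end;

procedure TRunnerThread.Execute;
var
  Msg: TUpdateMsg;
begin
  while not Terminated do
  begin
    // Drain any pending messages; the lock is only held while dequeuing.
    FLock.Enter;
    try
      while FPending.Count > 0 do
      begin
        Msg := FPending.Dequeue;
        FNoOfUnits := Msg.NoOfUnits;
        FMassPerUnit := Msg.MassPerUnit;
      end;
    finally
      FLock.Leave;
    end;

    // ... the time-critical work, using FNoOfUnits / FMassPerUnit ...

    // Wait-state instead of a busy loop: sleep until the next message,
    // waking periodically so Terminated is still checked.
    FWakeUp.WaitFor(100);
  end;
end;
```

In real code you would probably also signal `FWakeUp` when the thread is terminated (e.g. by overriding `TerminatedSet` if your Delphi version has it), so `Terminate` wakes the thread immediately rather than waiting out the timeout; and recent Delphi versions ship `TThreadedQueue<T>`, which packages the queue, lock and wait into a single class.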
PS: Another potential source of contention and slow-downs is the memory manager. If both your main and runner threads perform a significant number of heap allocations/deallocations, you may get lock contention in areas you didn't even explicitly code.