I understand that thread locks can be beneficial in some cases, but in others, they make no sense. The case I am talking about is shared data. Suppose you have an array, and you want each thread to sort a random range within the array, and then update the array in that range with the sorted values, and keep doing that until the whole array is sorted (not practical, this is actually a school project I need to do, but it also serves as an example for what I don't understand).
array = {9, 7, 5, 3, 2, 4, 1}
// thread 1 sorts indices 1-3
// thread 2 sorts indices 3-6
thread1_result = {3, 5, 7}
thread2_result = {1, 2, 3, 4}
//update array
array = {9, 3, 5, 1, 2, 3, 4}
The above pseudocode is what would happen if two threads read from the array and sorted their respective ranges at the same time. Since they are reading from the same version of the array, they do not take into account the changes the other thread is about to make. This results in the 7 being lost and the 3 being duplicated.
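Here is a minimal Python sketch of that interleaving. To keep the outcome deterministic, it simulates the race explicitly (both "threads" snapshot the same version of the array before either writes back) rather than relying on actual thread timing; the variable names are my own, not from any assignment spec:

```python
array = [9, 7, 5, 3, 2, 4, 1]

# Both threads take their snapshot from the same version of the array.
snapshot1 = array[1:4]   # thread 1 reads indices 1-3: [7, 5, 3]
snapshot2 = array[3:7]   # thread 2 reads indices 3-6: [3, 2, 4, 1]

# Each sorts its private copy, unaware of the other's pending write.
sorted1 = sorted(snapshot1)  # [3, 5, 7]
sorted2 = sorted(snapshot2)  # [1, 2, 3, 4]

# Thread 1 writes back first, then thread 2's write clobbers index 3.
array[1:4] = sorted1
array[3:7] = sorted2

print(array)  # [9, 3, 5, 1, 2, 3, 4] -- the 7 is lost, the 3 duplicated
```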
The way to prevent this is by locking the array, so that only a single thread at a time can read from the array, sort its given range, and then write the result back. Here's my big problem: doesn't that completely nullify the reason for using threads? Locking the array so that the threads take turns operating on it turns the program from a multithreaded solution into a sequential one, because, again, the array is only being touched by one thread at any given time. What's the point of using threads then? Is there a solution I am not seeing where multiple threads can work on the array at the same time without data being lost?