Cancelling an operation merely hints that it should stop; especially in long-running tasks you have to implement the actual stopping logic yourself. If you cancel an operation, its dependents will consider it finished and will happily run anyway.
So what you need is some kind of shared, synchronised variable that you set and read in a thread-safe fashion and that captures your cancel logic. Your running operations should check that variable periodically and at critical points, and exit themselves. Please don't use an actual global; use some common variable that all the operations can access - I presume you will be comfortable implementing this?
Cancel is not a magic bullet that stops the operation from running; it is merely a hint to the scheduler that allows it to optimise things. The actual cancelling you must do yourself.
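For example, with NSOperation / Operation, calling cancel() only flips the isCancelled flag; nothing actually stops unless the work itself polls that flag. A minimal Swift sketch along those lines (the loop is just placeholder work):

import Foundation

let queue = OperationQueue()

let op = BlockOperation()
op.addExecutionBlock { [weak op] in
    for i in 0..<1_000_000 {
        // cancel() only sets isCancelled; the work stops because we poll it
        if op?.isCancelled ?? true { return }
        _ = i * i                              // placeholder long-running work
    }
}
queue.addOperation(op)

// Later, e.g. from the UI:
op.cancel()                                    // or queue.cancelAllOperations()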
This is the explanation; I could give a sample implementation, but I think you are able to do that on your own by looking at the code?
EDIT
If you have a lot of blocks that depend on each other and execute sequentially, you do not even need an operation queue, or you only need a serial (one operation at a time) queue. If the blocks execute sequentially but are very different, then you should rather work on the logic of NOT adding new blocks once the condition fails, as sketched below.
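For instance (purely illustrative, not from your code), with a serial queue you can enqueue the next block only after the current one has succeeded, so once a condition fails nothing further is ever added:

import Foundation

let serialQueue = OperationQueue()
serialQueue.maxConcurrentOperationCount = 1       // one block at a time, in order

// Hypothetical steps; each returns false if its condition fails.
let steps: [() -> Bool] = [ { true }, { false }, { true } ]

func run(_ steps: [() -> Bool], from index: Int = 0) {
    guard index < steps.count else { return }
    serialQueue.addOperation {
        guard steps[index]() else { return }      // condition failed: stop, add nothing more
        run(steps, from: index + 1)               // only enqueue the next block on success
    }
}

run(steps)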
EDIT 2
Just an idea of how I suggest you tackle this. Of course the details matter, but this is a nice and direct way of doing it. This is pseudo code, so don't get lost in the syntax.
// Do it all in one class if possible, not a subclass of NSOperationQueue
class A
    // Members
    queue
    // job 1
    synced state cancel1    // e.g. triggered by the UI
    synced state counter1   // starts at the number of job-1 sub-jobs (3 below)
    state calc1 that job 1 calculates (and job 2 needs)
    // job 2
    synced state cancel2
    synced state counter2
    state calc2 that job 2 calculates (and job 3 needs)
    ...

    start
        start on queue
            schedule job1.1 on (any) queue
                periodically check cancel1 and exit early
                update calc1
                when done or on early exit decrease counter1
            schedule job1.2 on (any) queue
                same
            schedule job1.3 on (any) queue
                same
            wait on counter1 to reach 0
            check cancel1 and exit early
            // When you get here nothing has been cancelled and
            // all you need for job2 is calculated and ready as
            // calc1 in the class.
            // This is why calc1 need not be synced: it is
            // (potentially) written by job1 and read by job2,
            // so there is no concurrent access.
            schedule job2.1 on (any) queue
            and so on
To me this is the most direct way of doing it, and the one most ready for future development: easy to maintain, easy to understand, and so on.
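To make that a bit more concrete, here is one way the skeleton could look in actual Swift. Take it as a sketch, not a drop-in implementation: the class and member names are made up, the three job-1 sub-jobs just burn CPU as placeholder work, and addOperations(_:waitUntilFinished:) plays the role of the counter (the queue must allow more than one concurrent operation, since the coordinator blocks while its sub-jobs run):

import Foundation

// Illustrative names throughout; this mirrors class A above.
final class TaskA {
    private let queue = OperationQueue()     // shared work queue (default width allows several ops at once)
    private let lock = NSLock()
    private var cancelled = false            // the "synced cancel" state, e.g. set from the UI
    private var calc1: [Double] = []         // result of job 1, input to job 2

    func cancel() { lock.lock(); cancelled = true; lock.unlock() }
    private var isCancelled: Bool { lock.lock(); defer { lock.unlock() }; return cancelled }

    func start(completion: @escaping (Bool) -> Void) {
        // The coordinator itself runs on the queue, so start() returns immediately.
        queue.addOperation { [weak self] in
            guard let self = self else { return }

            // Job 1: three sub-jobs scheduled together; each checks cancel periodically.
            let subJobs = (1...3).map { part in
                BlockOperation {
                    for step in 0..<100_000 {
                        if self.isCancelled { return }        // exit early
                        _ = Double(step * part).squareRoot()  // placeholder work
                    }
                    self.lock.lock()
                    self.calc1.append(Double(part))           // partial result for job 2
                    self.lock.unlock()
                }
            }
            // Plays the role of "wait on counter1 to reach 0".
            self.queue.addOperations(subJobs, waitUntilFinished: true)

            // Only schedule the next batch if nothing has been cancelled.
            guard !self.isCancelled else { completion(false); return }

            // Job 2: calc1 is complete and no longer written to, so it can be read freely.
            self.queue.addOperation {
                guard !self.isCancelled else { completion(false); return }
                let total = self.calc1.reduce(0, +)           // stand-in for the real job 2
                _ = total                                     // ...and job 3 would follow from here
                completion(true)
            }
        }
    }
}

You would create one TaskA, call start, and call cancel() from the UI when the user backs out; every sub-job then exits at its next check and nothing further gets scheduled.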
EDIT 3
The reason I like and prefer this is that it keeps all your interdependent logic in one place, and it is easy to add to it later or calibrate it if you need finer control.
The reason I prefer this to e.g. subclassing NSOperation is that subclassing spreads this logic out into a number of already complex subclasses, and you also lose some control. Here you only schedule work after you have tested some condition and know that the next batch needs to run. In the alternative you schedule everything at once and need additional logic in every subclass to monitor the progress of the task or the state of the cancel flag, so it mushrooms quickly.
Subclassing NSOperation I would do if the specific op that runs in that subclass needs calibration, but subclassing it to manage the interdependencies adds complexity, I reckon.
(Probably final) EDIT 4
If you made it this far, I am impressed. Now, looking at my proposed piece of (pseudo) code, you might see that it is overkill and that you can simplify it considerably. That is because, the way it is presented, the different components of the whole task (task 1, task 2 and so on) appear to be disconnected. If that is really the case, there are indeed a number of different and simpler ways in which you can do this. In the reference I give a nice way of doing this if all the tasks are the same or very similar, or if you have only a single sub-subtask (e.g. 1.1) per subtask (e.g. 1), or only a single (sub or sub-sub) task running at any point in time.
However, for real problems you will probably end up with a much less clean and linear flow between these. In other words, after task 2, say, you may kick off task 3.1, which is not required by task 4 or 5 but only needed by task 6. Then the cancel and exit-early logic already becomes tricky, and the reason I do not break this one class up into smaller and simpler bits is really that, like here, the logic can (easily) span those subtasks, and that this class A represents a bigger whole, e.g. clean data or take pictures or whatever the big problem is that you are trying to solve.
Also, if you work on something that is really slow and you need to squeeze out performance, you can do that by figuring out the dependencies between the (sub and sub-sub) tasks and kicking them off as soon as possible, as in the sketch below. This type of calibration is where (real life) problems that took way too long for the UI become doable, as you can break them up and (non-linearly) piece them together in such a way that you traverse them in the most efficient way.
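To make that concrete with the hypothetical task numbers above, inside the coordinator you can keep a reference to the early, long-running sub-task and only wait on it right before the task that actually needs it:

import Foundation

let queue = OperationQueue()

// After job 2 has finished: kick off job 3.1 early, even though
// only job 6 needs its result (job numbers from the hypothetical flow above).
let job3_1 = BlockOperation {
    Thread.sleep(forTimeInterval: 0.5)            // stand-in for long-running work
}
queue.addOperation(job3_1)

// ... jobs 4 and 5 are scheduled and waited on here, ignoring job 3.1 ...

job3_1.waitUntilFinished()                        // block only now, just before job 6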
I've had a few such problems, and one in particular that I am thinking of now became extremely fragile, with logic that was difficult to follow, but this way I was able to bring the solution time down from an unacceptable minute-plus to just a few seconds, which was agreeable to the users.
(This time really almost the final) EDIT 5
Also, the way it is presented here, as you make progress in solving the problem, the junctures between, say, task 1 and 2 or between 2 and 3 are the places where you can update your UI with progress and with parts of the full solution as it trickles in from all the various (sub and sub-sub) tasks.
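For example (hypothetical UI hook, assuming UIKit; the same idea applies with AppKit), hopping to the main queue at such a juncture:

import UIKit

// Call at the juncture between, say, job 1 and job 2.
func publishProgress(_ fraction: Float, on progressView: UIProgressView) {
    OperationQueue.main.addOperation {            // UI updates must happen on the main queue
        progressView.progress = fraction
    }
}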
(The end is coming) EDIT 6
If you work on a single core then, apart from the interdependencies between tasks, the order in which you schedule all those sub and sub-sub tasks does not matter, since execution is linear anyway. The moment you have multiple cores, you need to break the solution up into subtasks that are as small as possible and schedule the longer-running ones as soon as possible for performance. The performance gain can be significant, but it comes at the cost of increasingly complex flow between all the small subtasks and in the way you handle the cancel logic.