Your problem is one of scope.
You probably haven't given it any thought, but AddJob
is an instance method defined on a class. IIS handles each HTTP request by instantiating an object and calling that method. The child thread on which the Task runs is killed when that instance is disposed, because background threads are terminated when the foreground threads of their owner finish. This is why your task starts but never completes.
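A hedged sketch of the pattern as I understand it (the names `JobService`, `AddJob` and the delay are assumptions for illustration, not taken from your code):

```csharp
using System;
using System.Threading.Tasks;

public class JobService
{
    // Instance method called per request by IIS.
    public void AddJob(string payload)
    {
        // Fire-and-forget: nothing outside this instance holds on to the
        // running work, so once the request ends the host is free to tear
        // the worker down and the task never reaches its end.
        Task.Run(async () =>
        {
            await Task.Delay(TimeSpan.FromMinutes(10)); // wait until "ripe"
            Console.WriteLine($"Processing {payload}"); // never observed
        });
    }
}
```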
If you want the Task to survive the object handling the request, you could make the task and its lifecycle management static. Of course a single static Task would not suit a server accepting any number of potentially concurrent requests, so the static Task would have to become a static collection of Task objects into which each request puts its task. That introduces concurrency issues, so the collection needs to be a thread-safe queue.
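A minimal sketch of that static, thread-safe collection, assuming a `ConcurrentQueue<Task>` and illustrative names (`JobQueue`, `DoWork`, `payload`):

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class JobQueue
{
    // Static, so it outlives any single request-handling instance.
    // ConcurrentQueue is safe to use from concurrent requests.
    public static readonly ConcurrentQueue<Task> Pending = new ConcurrentQueue<Task>();
}

// In the request handler, something like:
// JobQueue.Pending.Enqueue(Task.Run(() => DoWork(payload)));
```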
As soon as you start doing this sort of thing you take on responsibility for the object lifecycle, because a queued task won't be garbage collected until you remove it from the collection.
You need a background process that periodically checks how long each of these objects has been in the queue; when one reaches the required age, the process should dequeue it and do whatever is supposed to happen at that point. That means recording when each task was enqueued. The loop dequeues each task, checks whether it's ripe, and either processes it or re-queues it, as in the sketch below.
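Here is a rough sketch of that polling loop. The names (`QueuedJob`, `JobProcessor`), the ripe age and the one-second poll interval are all assumptions, not a prescription:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class QueuedJob
{
    public DateTime EnqueuedAt { get; } = DateTime.UtcNow; // record the age
    public Action Work { get; set; }                       // what to do when ripe
}

public static class JobProcessor
{
    private static readonly ConcurrentQueue<QueuedJob> Queue = new ConcurrentQueue<QueuedJob>();
    private static readonly TimeSpan RipeAge = TimeSpan.FromMinutes(10);

    public static void Enqueue(Action work) => Queue.Enqueue(new QueuedJob { Work = work });

    // Start this once, e.g. from Application_Start, on a dedicated Task.
    public static async Task RunAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            if (Queue.TryDequeue(out var job))
            {
                if (DateTime.UtcNow - job.EnqueuedAt >= RipeAge)
                    job.Work();          // ripe: process it
                else
                    Queue.Enqueue(job);  // not ripe yet: put it back
            }
            await Task.Delay(TimeSpan.FromSeconds(1)); // poll interval
        }
    }
}
```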
Frankly I wouldn't use a Task object at all. I would create a class with properties for the housekeeping details and methods implementing the behaviours. This is a combination of the Memento and Command design patterns.
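Such a class might look roughly like this; the property and method names are illustrative assumptions:

```csharp
using System;

public class DelayedJob
{
    // Memento side: the state needed to schedule and reconstruct the work.
    public Guid Id { get; set; } = Guid.NewGuid();
    public DateTime CreatedAt { get; set; } = DateTime.UtcNow;
    public TimeSpan Delay { get; set; }
    public string Payload { get; set; }

    public bool IsRipe(DateTime now) => now - CreatedAt >= Delay;

    // Command side: the behaviour to run once the job is ripe.
    public void Execute()
    {
        // ... whatever is supposed to happen when the job reaches the required age
    }
}
```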
As mentioned in another answer, in a robust solution your tasks will survive server restarts. You can achieve this using Memento/Command and a persistent message queue in place of the in-memory queue. On Windows, MSMQ is available for free. An advantage of this approach is that MSMQ takes over responsibility for thread safety in queue management.
To use an external message queue you will need to learn about (de)serialisation. Another answer uses a database server rather than a message queue to persist the serialised messages, and that does work, but it does not scale as well: purpose-built message queues rely on a set of assumptions that can't be made in a general-purpose database engine, which lets them handle unplanned outages much more robustly and sustain much higher levels of concurrency (or stress your server less for a given level of traffic).
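For orientation, a hedged sketch of swapping the in-memory queue for MSMQ using the classic .NET Framework System.Messaging API (requires the MSMQ Windows feature and a reference to System.Messaging). The queue path and the reuse of the `DelayedJob` class from the earlier sketch are assumptions:

```csharp
using System.Messaging;

public static class PersistentJobQueue
{
    private const string Path = @".\Private$\delayedJobs";

    private static MessageQueue Open()
    {
        var queue = MessageQueue.Exists(Path)
            ? new MessageQueue(Path)
            : MessageQueue.Create(Path);
        // XmlMessageFormatter handles (de)serialisation of the message body.
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(DelayedJob) });
        return queue;
    }

    public static void Enqueue(DelayedJob job)
    {
        using (var queue = Open())
            queue.Send(job, job.Id.ToString());
    }

    public static DelayedJob Dequeue()
    {
        // Blocks until a message is available.
        using (var queue = Open())
            return (DelayedJob)queue.Receive().Body;
    }
}
```

The background loop then polls the MSMQ queue instead of the ConcurrentQueue, and a server restart no longer loses pending jobs.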