
This question is a bit academic, inspired by a misunderstanding of how an API actually works, but I'm curious how I could resolve the issue cleanly if the API really did behave the way I originally understood it.

Let's say we need to integrate with a service with an OAuth-like method of authenticating. You call one server to get a token. You then use this token to request data from other endpoints. Here is the kicker: Each token can only be used once and will be cached by the remote server until it either expires or is consumed. If you request a token again and there is a cached token, you will receive the same token.
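
To make that behaviour concrete, here is a toy in-memory stand-in for the remote auth server; every name in it is hypothetical and only meant to illustrate the cached-until-consumed rule:

    using System;

    // Toy stand-in for the remote auth server described above.
    // All names here are invented for illustration.
    public sealed class FakeAuthServer
    {
        private readonly object _gate = new object();
        private string? _cachedToken;

        // If there is a cached, unconsumed token you get that same token back;
        // otherwise a new one is issued and cached.
        public string RequestToken()
        {
            lock (_gate)
            {
                if (_cachedToken is null)
                {
                    _cachedToken = Guid.NewGuid().ToString("N");
                }
                return _cachedToken;
            }
        }

        // Consuming the token clears the cache, so a second consumer of the
        // same token fails; that is the single-use property.
        public bool TryConsume(string token)
        {
            lock (_gate)
            {
                if (_cachedToken != token) return false;
                _cachedToken = null;
                return true;
            }
        }
    }

Two callers that both call `RequestToken` before either one calls `TryConsume` end up holding the same string, which is exactly the race described next.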

Now, let's say you have multiple processes, possibly running concurrently, which each need to integrate with this service. They each request a token for use with the other endpoints. Whether or not you've cached this token, they will all get the same token. Now you have a race condition where only one process will succeed and the others will fail, since the token is only good for a single use.

A naïve solution would be for each process to keep retrying until it succeeds, but this would be inefficient and, in a worst-case scenario, an "unlucky" process might take forever because it always loses the race.

I'm thinking a more efficient approach would be to submit a function to a service responsible for requesting the token, handling each request in some sort of queue, and then passing the result back to the consumer, which could await this response.

I imagine the function might look something like this:

    public async Task<T> DoWithToken<T>(Func<Token, T> doSomething)
    {
        // Ask the authorization service for a (single-use) token.
        TryAsync<Token> tokenAttempt = _authorizationClient.TryAuthorize();
        T? result = default;

        _ = await tokenAttempt.Match(
            // On success, hand the token to the caller's delegate and capture the result.
            Succ: token => result = doSomething(token),
            // On failure, just log the problem for now.
            Fail: exception => Console.Write($"Something went wrong: {exception.Message}"))
            .ConfigureAwait(false);

        return result!;
    }

... but this would not guard against concurrent processes also trying to use the token at the same time.
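
For illustration, this is roughly the concurrent usage I have in mind. The wrapper is the `DoWithToken` sketch above; `CallEndpointA` and `CallEndpointB` are hypothetical calls that consume the token:

    // Two operations running concurrently (they could just as well live in
    // separate processes). Both TryAuthorize calls can hand back the same
    // cached single-use token, so at most one of them can succeed.
    Task<string> first = DoWithToken(token => CallEndpointA(token));   // hypothetical endpoint call
    Task<string> second = DoWithToken(token => CallEndpointB(token));  // hypothetical endpoint call

    // One of these is likely to fail because its token was already consumed.
    await Task.WhenAll(first, second);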

How could I prevent that race?

Brian Kessler
  • _It won't matter if you've cached this token or not, they will all get the same token_ I don't really understand that part, two _different_ services will receive two different tokens, no? – devsmn Dec 16 '21 at 13:18
  • In this hypothetical scenario, if the same integration user requests tokens, he/she/it will get exactly the same token until either the token is consumed or expires 12 hours later. – Brian Kessler Dec 16 '21 at 13:22
  • Is it acceptable to serialize all requests to the service? Where only one request could be in flight at any moment? – Theodor Zoulias Dec 16 '21 at 14:00
  • "Each token can only be used once and will be cached by the remote server until it either expires or is consumed. If you request a token again and there is a cached token, you will receive the same token." Is this something you can change? This sounds badly designed. Caching the token is incompatible with expecting each request to use a different token. – Scott Hannen Dec 16 '21 at 14:09
  • @TheodorZoulias, Yes, but the question is how the consumer could control that? – Brian Kessler Dec 16 '21 at 14:45
  • @ScottHannen, as I mentioned, this question was inspired by a misunderstanding of an API. The actual API does not work this way, but I'm curious how to solve the problem if it did. (In actuality, the token is cached to remain valid, but each request from the server gets a different token, so there can be any number of valid tokens in flight.) – Brian Kessler Dec 16 '21 at 14:47
  • Using a `SemaphoreSlim(1, 1)` to `WaitAsync` before each request? – Theodor Zoulias Dec 16 '21 at 15:01
  • @TheodorZoulias, I'm not familiar with `SemaphoreSlim` or `WaitAsync`, though it sounds promising. Feel free to elaborate as an answer. :-) – Brian Kessler Dec 16 '21 at 15:08
  • It would be a pretty trivial answer. :-) Maybe mark this question as a duplicate of something similar like this? [Need to understand the usage of SemaphoreSlim](https://stackoverflow.com/questions/20056727/need-to-understand-the-usage-of-semaphoreslim) – Theodor Zoulias Dec 16 '21 at 15:39
  • 1. I don't see how SemaphoreSlim would send the responses back to all the methods which might consume the above, or how to make an overall solution which would use it. 2. My idea of what a duplicate is seems wildly different from what seems to pass for a duplicate around here. A lion is not a duplicate of a tabby just because they both involve cats. – Brian Kessler Dec 16 '21 at 15:45
  • A `SemaphoreSlim(1, 1)` is like a `lock` but asynchronous. Each request will have to wait for its turn to acquire the semaphore, so all requests will be serialized. After acquiring the semaphore each request will request a token from the service, then consume it, and finally will `Release` the semaphore. That's my suggestion. Essentially you wrap your existing code in an `await s.WaitAsync(); try { /*...*/ } finally { s.Release(); }` block. – Theodor Zoulias Dec 16 '21 at 19:22
  • @TheodorZoulias, I'm not overly familiar with lock. This seems like a solution which would require the ability to trust the consumer to use the lock rather than a solution which would enforce the lock. – Brian Kessler Dec 17 '21 at 09:29
  • *"let's say you have multiple processes"* <== I missed this point. The `SemaphoreSlim` is a single-process solution. To synchronize multiple processes you need a [named](https://learn.microsoft.com/en-us/dotnet/standard/threading/semaphore-and-semaphoreslim#named-semaphores) `Semaphore` or `Mutex`, and both of those are synchronous (they don't offer asynchronous APIs). So at least one thread per process will have to be blocked when there is contention for the service. – Theodor Zoulias Dec 17 '21 at 09:43
  • Related: [Async friendly, Cross-Process Read/Write lock - .NET](https://stackoverflow.com/questions/69354109/async-friendly-cross-process-read-write-lock-net) – Theodor Zoulias Dec 17 '21 at 09:45
  • @TheodorZoulias, yes, I expect the solution would need to be blocking. This seems helpful, though I still don't see how it all comes together... – Brian Kessler Dec 17 '21 at 09:53
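
Pulling the comment thread together, a minimal sketch of the `SemaphoreSlim(1, 1)` approach suggested above might look like the following. For simplicity it assumes a plain `Task<Token>`-returning `AuthorizeAsync` on the client instead of the LanguageExt `TryAsync` used in the question, and the wrapper class name is invented; only the semaphore pattern itself is the point:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    // Hypothetical wrapper that serializes token acquisition and consumption
    // within a single process, per the SemaphoreSlim suggestion in the comments.
    public sealed class SerializedTokenClient
    {
        // One permit: only one caller at a time may request and consume a token.
        private static readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

        private readonly AuthorizationClient _authorizationClient;   // assumed client type from the question

        public SerializedTokenClient(AuthorizationClient authorizationClient) =>
            _authorizationClient = authorizationClient;

        public async Task<T> DoWithToken<T>(Func<Token, T> doSomething)
        {
            await _gate.WaitAsync().ConfigureAwait(false);
            try
            {
                // Only one caller gets past this point at a time, so the token
                // fetched here cannot be grabbed and consumed by a sibling caller.
                Token token = await _authorizationClient.AuthorizeAsync().ConfigureAwait(false);   // assumed Task<Token> API
                return doSomething(token);
            }
            finally
            {
                _gate.Release();
            }
        }
    }

As the later comments point out, this only serializes callers inside one process; coordinating multiple processes would need a named `Mutex`/`Semaphore` or some other cross-process lock, and those only offer synchronous waits.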
