I'm new to .NET Core, so please redirect me to the right community if there is one.
I'm working on an existing .NET 6 application that uses HttpClient to make external API calls. Due to certain business conditions, the same requests to external systems can occur frequently, so responses are cached to reduce the number of upstream calls. However, we've also observed that parallel requests triggered at the same time can be duplicated and don't benefit from the cache, because the first request hasn't completed and been cached before the subsequent requests are triggered.
Is there a way to achieve either of the following?
- Detect duplicate requests based on the path, and make subsequent requests wait for the first one to complete and then check the cache, so that no duplicate parallel request goes to the upstream system
- Return the response object of request 1 as the response to the subsequent requests (similar to the Future feature in Java), so that the existing implementation doesn't need any change
TIA, Prasad.CH
CodePudding user response:
Yes, it's possible. You will need to introduce locking into your code (multiple requests are essentially multiple threads) on a URL-by-URL basis.
I'm not sure of your existing code structure, so I'll give a pseudo-structure below:
private static readonly ConcurrentDictionary<string, object> _locks = new();

public T GetCachedThing<T>(string url)
{
    // One lock object per URL; GetOrAdd handles the race where two
    // threads try to create a lock for the same URL simultaneously.
    var urlLock = _locks.GetOrAdd(url, _ => new object());
    lock (urlLock)
    {
        return _yourOriginalApiClient.GetCachedThing<T>(url);
    }
}
The net result of this code is that, per URL, only one call to your "original" API client can happen at a time: if two parallel requests come in for the same URL, one enters the lock and the other waits for it.
When the lock becomes available, the second call then enters your "original" client, which returns the cached item created by the first call.
The concurrent dictionary is required to manage the per-URL locks, ensuring the lock objects are distinct per URL (its GetOrAdd handles the race condition of two locks being created for the same URL).
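One caveat: if your API client is async (as HttpClient calls usually are), you cannot await inside a lock statement. An alternative that also matches your second option (returning request 1's response to subsequent callers, like a Java Future) is to coalesce concurrent requests by storing a Lazy<Task<T>> per URL, so every caller awaits the same in-flight task. A sketch, assuming a hypothetical fetch delegate that wraps your real HttpClient call:

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    public class CoalescingClient
    {
        // One in-flight (or completed) task per URL. Lazy with default
        // thread safety ensures the fetch delegate runs at most once,
        // even if two threads race on GetOrAdd for the same URL.
        private static readonly ConcurrentDictionary<string, Lazy<Task<string>>> _inFlight = new();

        // Hypothetical: replace with your real HttpClient-based call.
        private readonly Func<string, Task<string>> _fetchAsync;

        public CoalescingClient(Func<string, Task<string>> fetchAsync)
            => _fetchAsync = fetchAsync;

        public Task<string> GetAsync(string url)
        {
            var lazy = _inFlight.GetOrAdd(
                url, u => new Lazy<Task<string>>(() => _fetchAsync(u)));
            return lazy.Value;
        }
    }

Note that this sketch caches the task forever; in real code you would evict the entry when the task faults (so a failed call can be retried) and expire it in line with your existing cache policy.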