LINQ consumes a lot of CPU resources

Time:03-22

The code below takes up as much as 22% of the CPU.

public async Task<Client> SingleByIdAsync(string clientId)
{
    var baseQuery = _configurationDbContext.Clients.Where(p => p.ClientId == clientId).Include(p => p.ClientSecrets);

    await baseQuery.SelectMany(p => p.Scopes).Include(p => p.ApiScope).LoadAsync();

    return await baseQuery.SingleOrDefaultAsync();
}

LoadAsync alone consumes 8% of the CPU. Another function consumes 17%:

public async Task<List<ApiResource>> FindByScopesNameAsync(List<string> scopes)
{
    return await _configurationDbContext.ApiResources
        .Where(p => p.Scopes.Any(x => scopes.Any(y => y == x.ApiScope.Name)))
        .ToListAsync();
}

My question is: what is wrong with this LINQ? Why does it consume so many resources, and how can I optimize it?

CodePudding user response:

The CPU is not consumed by the LINQ query itself, but by Entity Framework loading your results into memory.

The LINQ queries are translated into SQL; the overhead of that translation is negligible.

But when you call LoadAsync(), all ClientSecrets, Scopes and ApiScopes related to the Client are loaded into memory. Loading a lot of data into one DbContext and its ChangeTracker causes significant CPU load.

So instead of loading everything into memory, try loading the Client and then only the ApiScopes that are directly related to it.
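As a sketch (entity and navigation names taken from the question; the exact shape depends on your model), the two-step load could be collapsed into a single query, so EF Core composes one statement instead of materializing a separate LoadAsync result set:

```csharp
// Sketch: load the client and its related data in one query,
// letting EF Core compose the SQL instead of issuing a separate LoadAsync.
public async Task<Client> SingleByIdAsync(string clientId)
{
    return await _configurationDbContext.Clients
        .Where(p => p.ClientId == clientId)
        .Include(p => p.ClientSecrets)
        .Include(p => p.Scopes)            // assumes Scopes is a navigation on Client
            .ThenInclude(s => s.ApiScope)  // and ApiScope a navigation on the scope entity
        .SingleOrDefaultAsync();
}
```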

In addition, if you do not need change tracking, you can turn it off globally, or add the AsNoTracking() extension method to individual queries, to further reduce CPU load.
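For example (a sketch against the question's context; both options are standard EF Core APIs):

```csharp
// Per query: skip the change tracker for this read-only query.
var client = await _configurationDbContext.Clients
    .AsNoTracking()
    .Where(p => p.ClientId == clientId)
    .Include(p => p.ClientSecrets)
    .SingleOrDefaultAsync();

// Or globally: make no-tracking the default for the whole context.
services.AddDbContext<ConfigurationDbContext>(cfg =>
    cfg.UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking));
```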

CodePudding user response:

Okay, so I found a surprisingly good solution. I did this:

services.AddDbContext<ConfigurationDbContext>(cfg =>
{
    cfg.UseSqlServer(configuration.GetConnectionString("Default"), x =>
    {
        x.UseQuerySplittingBehavior(QuerySplittingBehavior.SplitQuery);
    });
});

so I just set QuerySplittingBehavior to SplitQuery. This is surprisingly effective: the service now handles up to 1000 requests per second, up from the 100 it supported before (according to load tests performed on 8 threads).
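Note that split queries can also be enabled per query with AsSplitQuery() rather than globally, if only specific queries with large Includes benefit:

```csharp
// Sketch: opt into split-query behavior for a single query only.
var client = await _configurationDbContext.Clients
    .Where(p => p.ClientId == clientId)
    .Include(p => p.ClientSecrets)
    .AsSplitQuery()
    .SingleOrDefaultAsync();
```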
