I would like to know what the performance difference is between these two ways of inserting data into a database. By performance I mean the server resources that are used, not the speed at which the data is inserted.
In the first example I add a log to the database, but the mapped object is first assigned to a local variable.
public class LogService : ILogService
{
    private readonly IUnitOfWork _unitOfWork;
    private readonly IMapper _mapper;

    public LogService(IUnitOfWork unitOfWork, IMapper mapper)
    {
        _unitOfWork = unitOfWork;
        _mapper = mapper;
    }

    public async Task AddLog(LogViewModel data)
    {
        var log = _mapper.Map<Log>(data);
        _unitOfWork.Logs.Add(log);
        await _unitOfWork.Complete();
    }
}
In the second example the method does the same thing but does not use a local variable.
public class LogService : ILogService
{
    private readonly IUnitOfWork _unitOfWork;
    private readonly IMapper _mapper;

    public LogService(IUnitOfWork unitOfWork, IMapper mapper)
    {
        _unitOfWork = unitOfWork;
        _mapper = mapper;
    }

    public async Task AddLog(LogViewModel data)
    {
        _unitOfWork.Logs.Add(_mapper.Map<Log>(data));
        await _unitOfWork.Complete();
    }
}
I know that the second approach is less code, but is there an actual difference in the resources used by first declaring a local variable?
CodePudding user response:
Absolutely no difference. Zero. Nada. In a release build, the generated IL is most likely identical.
A local variable for a reference type is just a reference that lives on the stack (or in a register). An argument to a method call is likewise a reference pushed onto the stack. Notice the similarity?
If you think there is a difference, dump the IL (of a release build, not a debug build) and compare it. If you still don't believe it, run a benchmark or profile your application.
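If you want to see it for yourself, a minimal BenchmarkDotNet sketch along these lines would do. It assumes the BenchmarkDotNet NuGet package is installed, and it strips out the database and the mapper so that only the local-variable question is measured; the Log, LogViewModel and Map stand-ins below are hypothetical, not your real types:

using System.Collections.Generic;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class LogViewModel { public string Message { get; set; } }
public class Log { public string Message { get; set; } }

[MemoryDiagnoser]
public class LocalVariableBenchmark
{
    private readonly List<Log> _logs = new();
    private readonly LogViewModel _data = new() { Message = "test" };

    // Stand-in for _mapper.Map<Log>(data)
    private static Log Map(LogViewModel source) => new() { Message = source.Message };

    [Benchmark(Baseline = true)]
    public void WithLocalVariable()
    {
        var log = Map(_data);
        _logs.Add(log);
        _logs.Clear(); // keep the list from growing between invocations
    }

    [Benchmark]
    public void WithoutLocalVariable()
    {
        _logs.Add(Map(_data));
        _logs.Clear();
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<LocalVariableBenchmark>();
}

Expect both rows to report the same allocated bytes and essentially identical timings; any difference you see is measurement noise.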
What's more: database access is thousands of times slower than anything you do in memory (see Latency Numbers Every Programmer Should Know):
L2 cache reference ................... 7 ns (14x L1 cache)
Main memory reference ................ 100 ns
Round trip within same datacenter .... 500,000 ns = 500 us
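To put that in perspective: a single same-datacenter round trip (500,000 ns) costs about as much as 5,000 main-memory references (100 ns each), so whatever the local variable might cost or save is completely drowned out by the database call that follows it.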