Our library has a lot of chained functions that are called thousands of times per time step when solving an engineering problem on a mesh during a simulation. In these functions, we must create arrays whose sizes are only known at runtime and depend on the application. We have tried three options so far, as shown below:
void compute_something( const int& n )
{
    double fields1[n];              // Option 1: variable-length array (non-standard in C++).
    auto *fields2 = new double[n];  // Option 2: heap allocation (must be released with delete[]).
    std::vector<double> fields3(n); // Option 3: std::vector.
    // .... a lot more operations on the field variables ....
}
Of these choices, Option 1 has worked with our current compiler, but we know it isn't safe because we may overflow the stack (plus, it's non-standard C++). Options 2 and 3 are safer, but using them as frequently as we do hurts performance in our applications, to the point that the code runs ~6 times slower than with Option 1.
What other options are there to handle memory allocation efficiently for dynamic-sized arrays in C++? We have considered constraining the parameter n, so that we can provide the compiler with an upper bound on the array size (and optimization would follow); however, in some functions n can be pretty much arbitrary, and it's hard to come up with a precise upper bound. Is there a way to circumvent the overhead of dynamic memory allocation? Any advice would be greatly appreciated.
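For illustration, a minimal sketch of the upper-bound idea we considered (MAX_N is a hypothetical bound; the approach only works when n genuinely never exceeds it):

#include <cassert>

constexpr int MAX_N = 1024;   // hypothetical compile-time bound on n

void compute_something_bounded( int n )
{
    assert( n <= MAX_N );     // the approach breaks if the bound does not really hold
    double fields[MAX_N];     // fixed-size stack array: standard C++, no heap allocation
    // .... operations on fields[0..n-1] ....
}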
CodePudding user response:
- Create a cache at startup and pre-allocate it with a reasonable size.
- Pass the cache to your compute function, or make it part of your class if compute() is a method.
- Resize the cache inside the compute function, as shown below:
std::vector<double> fields;
fields.reserve( reasonable_size ); // one up-front allocation of a reasonable size
...
void compute( int n, std::vector<double>& fields ) {
    fields.resize(n); // no reallocation as long as n <= fields.capacity()
    // .... a lot more operations on the field variables ....
}
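For instance, here is a minimal, self-contained sketch of the pattern (the loop bounds, per-step sizes, and reserve value are hypothetical stand-ins for your simulation loop):

#include <vector>

void compute( int n, std::vector<double>& fields )
{
    fields.resize( n );   // no allocation while n <= fields.capacity()
    // .... a lot more operations on the field variables ....
}

int main()
{
    std::vector<double> fields;
    fields.reserve( 1024 );                  // hypothetical "reasonable_size": one allocation up front
    for ( int step = 0; step < 100; ++step ) // stand-in for the simulation time loop
    {
        int n = 512 + ( step % 2 ) * 256;    // stand-in for a per-step, runtime-dependent size
        compute( n, fields );                // same buffer every call; no reallocation
    }
}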
This has a few benefits.
First, most of the time resizing the vector will not allocate: std::vector keeps its capacity when it shrinks, and its exponential growth policy means the capacity quickly reaches a high-water mark, after which resize() is essentially free.
Second, you will be reusing the same memory, so it is likely to stay in cache.
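If compute() is a method, the same idea works with the cache as a data member; a minimal sketch (the Solver class and member names are hypothetical):

#include <vector>

class Solver
{
public:
    void compute( int n )
    {
        fields_.resize( n );   // reuses the member buffer's capacity across calls
        // .... a lot more operations on the field variables ....
    }

private:
    std::vector<double> fields_;   // the cache lives as long as the solver object
};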