I want to read a solid block of data from disk which will then be split into many allocations that can independently be freed or reallocated. I could just allocate new memory at the proper size and copy the data over, but I was wondering if there is a way to split the existing allocation, since that seems like it would be a cheaper operation.
I would have expected this functionality to be provided by the standard, but I did not find anything like it. Is there a good reason why? And if not, could it be implemented in some way?
CodePudding user response:
I want to read from disk a solid block of data which will then be split into many allocations that can independently be freed or reallocated.
This requirement is flawed to begin with: if we allocate one big chunk of contiguous memory and then manually free parts of it, the programmer is effectively acting as a manual heap memory manager. What gets patched into the holes in the contiguous chunk, and who is responsible for doing that? It might just end up as useless holes in the memory map. You may be able to do something similar with lower-level, system-specific functions (mmap or similar), but standard C and C++ both strive to be generic and do not generally specify how or where things are allocated in memory.
The proper way to do this is otherwise to use realloc. The underlying heap manager may then free parts of the memory while keeping some of it in the same location, or it may allocate a new chunk and copy the data there, as it pleases. The caller of realloc need not worry their pretty head about it. In the case of t* tmp = realloc(original, n), the programmer should just not assume that original still points at valid memory after the call, but rather do if (tmp != NULL) { original = tmp; } and let realloc worry about whether the actual data is stored at the same address or a new one.
Another option would be to not use heap allocation at all, but to implement your own static memory pool of a fixed size. The main reason for doing something like that is not to conserve memory but to get deterministic allocation, as in embedded systems.
CodePudding user response:
It's not generally possible, so they didn't put it in the library. Some memory allocation algorithms could theoretically do this, but others can't. Some allocators only support certain sizes (and round requests up), or they put different-sized objects into different parts of memory, so a single allocation cannot simply be split in place.
CodePudding user response:
In C++ you can use std::shared_ptr with an empty (no-op) deleter when you first "allocate" your structures, and then use the default deleter when re-allocating some of the objects. I.e. something like this:
#include <memory>

// This class should be a standard-layout class to be able to use placement new
class A {
    int a;
    char b[10];
};

class B
{
public:
    std::shared_ptr<A> a;
    std::shared_ptr<A> b;
};

template<typename T>
void null_deleter(T *)
{
    // Do nothing, memory managed elsewhere
}

extern char *read_memory();

int main()
{
    char *buf = read_memory();
    B v;
    v.a = std::shared_ptr<A>(new (buf) A, &null_deleter<A>);
    v.b = std::shared_ptr<A>(new (buf + 50) A, &null_deleter<A>);
    // some code
    v.a = std::make_shared<A>(); // "Delete" old pointer (no-op) and create new
}
But I hope you understand that the memory block you first read from disk will not be affected by these later allocations/deallocations, so you can't use it as a contiguous representation of the current state; i.e., you can't write it back to a file and expect that modifications made to the reallocated objects will be reflected there.
One downside of this approach is that the portions of memory you no longer use will not be re-used until you free the entire big block, so they are wasted; but in some cases, i.e. when this happens rarely or the blocks are not very big, that can be acceptable.