Isn't this code redundant taking into account C memory management?

I'm a C newbie, so I don't truly understand how C manages memory. Isn't this code redundant?

void processData()
{
    FILE* savedDataFile;
    char* savedData;
    try {
        savedDataFile = fopen("../savedData.dump", "r");
        if (!savedDataFile)
            throw 0;
        savedData = (char*)malloc(0xf000);
        fread(savedData, 1, 0xf000, savedDataFile);
        fclose(savedDataFile);
        free(savedDataFile); // Do I really need this?
    }
    catch (int e) {
        try {
            fclose(savedDataFile); // Do I really need this?
        }
        catch (int e) {
        }
        free(savedDataFile); // Do I really need this?
        free(savedData); // Do I really need this?
        fprintf(stderr, "Failed to load saved data from the previous dump.\n");
        exit(EXIT_FAILURE);
    }
    // Data processing... it may take a while.
    free(savedData); // Do I really need this? Taking into account that this is actually the end of the program, there's only the returning of EXIT_SUCCESS in the main function after it.
}

int main()
{
    // Some work... then the last function's called:
    processData();
    return EXIT_SUCCESS;
}

I really don't like redundant code, so I want to make my code free from it. Thank you in advance for help.

CodePudding user response:

Any FILE* pointer that is fopen()'d must be fclose()'d. Any dynamic memory that is malloc()'d must be free()'d.

So yes, in general you need those calls, but only when they are used correctly, which this code is not doing.

If fopen() fails, an int is thrown, and then the exception handler is exhibiting all kinds of bad/illegal behavior because it is:

  • using a secondary try..catch that will never be triggered, since fclose() is a C function and cannot throw a C++ exception.
  • calling fclose() on a NULL FILE*, which is undefined behavior.
  • calling free() on a NULL FILE*, which is OK in this case only because free(NULL) is well-defined behavior, but is wrong in general since FILE* pointers are not guaranteed to be malloc()'d to begin with. The actual allocation used is a private implementation detail of fopen(), and fclose() will handle the deallocation appropriately.
  • calling free() on an uninitialized char* pointer, which is undefined behavior.

Even if fopen() were successful, there are still problems, as the code is:

  • not checking to make sure that malloc() is successful before passing the char* pointer to fread().

  • calling free() on a FILE* pointer after fclose()'ing the same pointer.

The correct code could look more like this instead:

#include <stdio.h>
#include <stdlib.h>

int processData()
{
    FILE* savedDataFile;
    char* savedData;
    size_t savedDataSize;

    try {
        savedDataFile = fopen("../savedData.dump", "r");
        if (!savedDataFile)
            throw 0; // jumps to the 'catch' that exits the function, nothing to free or close...
        try {
            savedData = (char*)malloc(0xf000);
            if (!savedData)
                throw 1; // jumps to the 'catch' that closes the file, nothing to free...
            try {
                savedDataSize = fread(savedData, 1, 0xf000, savedDataFile);
                if (ferror(savedDataFile))
                    throw 2; // jumps to the 'catch' that frees the memory...
            }
            catch (int) {
                free(savedData);
                throw; // jumps to the 'catch' that closes the file...
            }
        }
        catch (int) {
            fclose(savedDataFile);
            throw; // jumps to the 'catch' that exits the function...
        }
    }
    catch (int) {
        fprintf(stderr, "Failed to load saved data from the previous dump.\n");
        return EXIT_FAILURE;
    }

    fclose(savedDataFile);

    // Data processing... it may take a while.

    free(savedData);

    return EXIT_SUCCESS;
}

int main()
{
    // Some work... then the last function's called:
    return processData();
}

Which is obviously quite ugly and error-prone. You can greatly clean this up by using C++ semantics instead of C semantics, letting the compiler and standard library handle all of the memory management and cleanup for you:

#include <iostream>
#include <fstream>
#include <vector>
#include <stdexcept>

int processData()
{
    std::vector<char> savedData;

    try {
        std::ifstream savedDataFile;
        savedDataFile.exceptions(std::ifstream::failbit); // tell the stream to throw if something fails...

        savedDataFile.open("../savedData.dump"); // throws if fails to open the file...

        savedData.resize(0xf000); // throws if fails to allocate memory...

        std::streamsize numRead = savedDataFile.readsome(savedData.data(), savedData.size()); // throws if fails to read from the file...

        savedData.resize(numRead); // may shrink the size, but won't throw...
    } // <-- file is closed here
    catch (const std::exception &) {
        std::cerr << "Failed to load saved data from the previous dump.\n";
        return EXIT_FAILURE;
    }

    // Data processing... it may take a while.

    return EXIT_SUCCESS;
} // <-- vector memory is freed here on return

int main()
{
    // Some work... then the last function's called:
    return processData();
}