Why is the "multiple .c files + linker" workflow preferred over "multiple .h files"?


When I was starting C programming, my natural inclination was to write one "main" .c file, then add/organize extra features by #include-ing .h files with function declarations, typedefs, variables, etc. This workflow is very simple: no separate function prototypes to keep in sync, one file to build, etc. It is also intuitive: #include is just like copy-pasting code! And it does get the job done.
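For concreteness, a minimal sketch of that style (the file and function names here are hypothetical, purely for illustration): the header carries the actual definitions, and there is only one translation unit.

```c
/* features.h - in this style, definitions live directly in the header */
int double_it(int x) { return 2 * x; }

/* main.c - the only .c file; #include pulls the code in verbatim */
#include <stdio.h>
#include "features.h"

int main(void)
{
    printf("%d\n", double_it(21));   /* prints 42 */
    return 0;
}
```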

However, when I look at the file structure of professional projects (e.g. on GitHub), I see that the general practice is to split the program into multiple .c files (complete with their own #includes, function prototypes, build files, etc.), then link them together. Why is this workflow more widespread, even though it involves a lot of extra work relative to "one .c file to rule them all"?

I have researched and tried both styles of project file organization, but still ended up using the "one .c, multiple .h" approach. I'd like to know the pros and cons of both.

CodePudding user response:

|                                                         | single .c file | multiple .c files |
|---------------------------------------------------------|:--------------:|:-----------------:|
| encapsulation: data hiding/protection & function access |                |        yes        |
| minimize incremental build time                         |                |        yes        |
| global code optimization (LTO)                          |      yes       |                   |
| simple build process                                    |      yes       |                   |

(roughly in order of importance; most important first)

@Fe2O3 points out encapsulation in general. If you use a layered architecture, you may not want to allow functions from layer n to call functions in layer n+1. With a single file, you have no way of enforcing that by design.
There are other organizing principles, such as keeping code that changes in concert together, or separating frequently and infrequently changed code. static global variables are of course still scoped to the file, but when it's all one file, they are essentially just regular global variables.
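As a hedged sketch of what that enforcement looks like (file names and functions are invented for illustration): internals marked static in counter.c are simply invisible to main.c, which is impossible when everything shares one translation unit.

```c
/* counter.h - the public interface */
#ifndef COUNTER_H
#define COUNTER_H
void counter_increment(void);
int  counter_value(void);
#endif

/* counter.c - implementation; internals hidden from every other .c file */
#include "counter.h"

static int count = 0;                 /* file scope: no other unit can touch it */

static int clamp(int v)               /* file scope: not callable from main.c */
{
    return v > 100 ? 100 : v;
}

void counter_increment(void) { count = clamp(count + 1); }
int  counter_value(void)     { return count; }

/* main.c - restricted to the declared interface */
#include <stdio.h>
#include "counter.h"

int main(void)
{
    counter_increment();
    printf("%d\n", counter_value());  /* prints 1 */
    /* count = 42;   <-- would not compile: 'count' is undeclared here */
    return 0;
}
```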

Multiple .c files minimize incremental compile time. Only the files that changed have to be rebuilt before linking the binary. Let's say a clean build takes 300 s, while an incremental build takes 30 s.
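A rough sketch of that workflow with plain gcc (file names hypothetical; a Makefile with per-file rules automates exactly this bookkeeping):

```sh
# clean build: compile every translation unit, then link (~300 s)
gcc -c main.c    -o main.o
gcc -c counter.c -o counter.o
gcc main.o counter.o -o prog

# after editing only counter.c: recompile that one unit and relink (~30 s)
gcc -c counter.c -o counter.o
gcc main.o counter.o -o prog
```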

A single .c file permits the compiler to do global optimization within that one compilation unit. If there are any non-linear optimization steps, the compilation time may be worse than for many small units (>300 s; memory usage may be an issue, too). There is at least one well-known project (I was going to say SQLite, but I could very well remember wrong) that concatenates all source code for release builds. It turns out that, at least with gcc, you get the same optimizations if you use link-time optimization (-flto).
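For reference, a minimal sketch of how -flto is used with gcc: the flag goes on both the compile and the link steps, so cross-unit optimization happens at link time (file names hypothetical):

```sh
gcc -O2 -flto -c main.c    -o main.o
gcc -O2 -flto -c counter.c -o counter.o
gcc -O2 -flto main.o counter.o -o prog   # whole-program optimization happens here
```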
