What defined the typical memory layout?


We all know that the typical memory layout consists of stack, heap, .bss, .data, code, and so on.

But who or what defined it?

  • The CPU: Intel or ARM?
  • The toolchain: GCC or LLVM?
  • The OS: Windows or Linux, or something else?

Are there any other kinds of memory layout?

CodePudding user response:

This is very broad, but the answer is all of the above. On the ARM Cortex-M side, for example, ARM has defined memory regions for code, peripherals, etc., and if you do not follow them then things will not work the way you wish. At the same time, every processor has a boot scheme, often a fixed address that is used to fetch the first instruction or the reset vector, so system engineers need to design the board or chip around that. Since that address needs to map to non-volatile storage, you will often not see RAM (nor peripherals) there, implying they are somewhere else, and that plays into the choices for that product.

The vector table in general, software interrupts, or anything else with a hardcoded address plays into how you design the system. Then you have things that may or may not be a fluke of history. The IBM PC: who chose the address spaces it used, and how, and why? It does not really matter, but it set a standard for compatibles that continues to this day (with some evolution).

The above simply tells you where RAM is not.

By convention, based on prior designs that were for the most part arbitrary, current designs may reserve certain memory spaces for RAM.

The text-heap-stack model is somewhat obvious: code runs forward through the address space. The stack is dynamic, and it makes more sense for stacks to grow down from the top of memory than upward, although as we evolved we may not have understood that at first. And of course today, with MMUs, the virtual address space resembles that history while the physical layout is whatever you want, scattered around RAM in fragments. That leaves the memory in the middle for the program to allocate dynamically when a stack does not make sense: the heap.

Where specifically .text, .data, the heap, and the stack land in the virtual (or real) address space is a combination of operating system and compiler in today's implementations (Windows, Linux, etc.). As a programmer, if you fully understand the operating system's rules, you can customize your linker script so the layout does not exactly resemble the default one for that compiler, target, and operating system. And there is no reason to assume that LLVM and GCC follow the same solution for the same target and operating system, although you might see them use the same or a similar solution, based on who came first or to make programmers' lives (including their internal developers') easier when jumping back and forth between tools.
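For illustration only, a fragment of a GNU ld linker script in this spirit might look like the following. The region names, origins, and lengths are made up for a hypothetical MCU; real values come from the actual chip's memory map:

```ld
/* Hypothetical MCU: all names and addresses here are illustrative */
MEMORY
{
    FLASH (rx)  : ORIGIN = 0x08000000, LENGTH = 256K
    RAM   (rwx) : ORIGIN = 0x20000000, LENGTH = 64K
}

SECTIONS
{
    .text : { *(.vectors) *(.text*) *(.rodata*) } > FLASH
    .data : { *(.data*) } > RAM AT > FLASH  /* runs in RAM, stored in FLASH */
    .bss  : { *(.bss*) *(COMMON) } > RAM
}
```

The `> RAM AT > FLASH` idiom is what gives .data its two homes: a load address in flash and a run address in RAM.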

Now, this is for applications that run on an operating system. When you get into bare metal (including the operating system itself, which can be considered a bare-metal program), the story changes. Typically it is you, the programmer, who lays out the memory space to suit your needs. There will be rules: for things like MCUs, the non-volatile and volatile regions are determined by the chip vendor. But within that space you can choose. Just because you can run code out of flash/ROM does not mean you have to, and for performance or other reasons you may choose not to. Likewise for the vector table, you may choose to use the flash-based address space needed to boot the chip, but depending on the product (not necessarily the processor core; chip vendors control the address space) you may or may not have choices.

So when you get into bare metal, in particular when you are booting off of flash, those rules go out the window. .data, for example, needs to live in two places: an initial image in non-volatile memory, which is then copied to its runtime location in RAM. .bss is similar (it only needs to be zeroed). .text can live in flash and/or RAM as you choose, with the RAM portions copied from flash (or downloaded).

Since you did not remotely provide enough information, the answer can only be "it depends" and "all of the above".

CodePudding user response:

I think the answers you are looking for would be served by researching the first processor to have call-stack support in hardware. Before that was commonplace, processors had no explicit support for a stack, and there would generally not have been any stack in their memory layout.

Another aspect is that in the older days, code and data were intermingled. Data tended to sit near the code that used it, so you would see alternations of code and data. Such data was effectively global, but it served both local variables and globals/statics. So for those processors there would also be no .data or .bss, just the program image, plus an empty gap from the end of the program to the end of memory.

Speaking from my experience with the PDP-8, which had 4K (12-bit) words of storage and one accumulator register (and certainly no stack pointer register):

The program image would be loaded in its entirety (up to 4K) to initialize memory, and the program would start at PC = 0 (where there was usually a jump to "main"). The address space was 4K, and memory was also always 4K, i.e. always fully populated.

The gap of free memory after the loaded program (up to the end of memory) would be used for I/O buffers, variable-sized arrays, and sometimes in a heap-like manner, though with application-specific, simplified allocation rather than a general-purpose malloc/free library.

Of course, recursion could technically be supported, but since the stack pointer would have to live in memory, using it for everything would have been very inefficient compared to the standard approaches of the day: inline parameter passing and local variables allocated as globals. So if recursion was desired, it was more likely transformed into a non-recursive version using a temporary, custom, function-specific stack (not used or shared by other, non-recursive code), which held stacked values but not necessarily modern-style stack frames.

Changes since the PDP-8 are that:

  • Stacks were added: a dedicated (or dedicate-able, as with MIPS and others) stack pointer register, plus addressing modes that make stack handling efficient (e.g. push/pop, stack pointer with offset), so local variables and parameter passing are done that way instead of the old ways.

  • Code and (mutable) data have been separated, which enables code to be protected differently from data (e.g. non-writeable or execute-only for code, writeable for data).
