Is the behaviour undefined when adding to a fixed length array pointer so that there is an overflow?

Take a look at the following code, taken from an older version of ffmpeg:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

struct foo
{
    int16_t (*ac_val_base)[16];
    int16_t (*ac_val[3])[16];
};

int main(int argc, char *argv[])
{
    struct foo bar;
    int16_t *ac_val, *ac_val1;

    bar.ac_val_base = malloc(4639 * 16 * sizeof(int16_t));
    bar.ac_val[0] = bar.ac_val_base + 66;

    ac_val = bar.ac_val[0][0] + 3780 * 16;
    ac_val1 = ac_val;
    
    printf("Result: %d\n", (int) (((char *) ac_val1) - ((char *) bar.ac_val[0][0])));

    return 0;
}

When compiling this with established compilers like gcc or Visual C, the result is 120960. This makes sense to me because I'm adding 3780 * 16 to an int16_t array pointer so I'd expect the resulting pointer to be 120960 bytes above the source pointer.

When compiling the code using vbcc, however, the result is -8000 because the compiler performs some optimizations. The author of the vbcc compiler is convinced that this optimization is covered by 6.5.6/8 of the C99 standard, which says that the behaviour is undefined in such a case:

If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined.

So is the code above really relying on undefined behaviour? I'm a bit skeptical because the code works on all compilers except vbcc.

CodePudding user response:

The short answer is that the type of the expression bar.ac_val[0][0] is "array of 16 int16_t". Although this array object is located within a larger malloc block, and the expression evaluates to a pointer within the block, the pointer's provenance is that 16-element array.

A pointer obtained from an array expression, where the array dimension is N, can be displaced by at most N (one past the last element of the array) while staying within defined behavior. (If displaced all the way to N, the resulting pointer must not be dereferenced.)

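If the intent of the original code is simply to land 3780 rows of 16 int16_t further into the block (which is what adding 3780 * 16 to the int16_t pointer amounts to), one way to stay within that rule is to displace the row pointer, whose provenance is the whole array of rows, rather than the decayed pointer into a single row. A minimal sketch of that idea (variable names are mine, not from ffmpeg; error checking omitted):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* same shape as the question: a malloc-ed array of 4639 rows of 16 int16_t */
    int16_t (*base)[16] = malloc(4639 * sizeof *base);
    int16_t (*rows)[16] = base + 66;   /* row 66; still inside the array of rows */

    /* Displace the row pointer, not the int16_t pointer obtained from one row.
       rows + 3780 is row 3846 of 4639, so this arithmetic stays in bounds. */
    int16_t *ac_val = rows[3780];      /* same address that rows[0] + 3780 * 16 aims for */

    printf("Result: %d\n", (int) ((char *) ac_val - (char *) rows[0]));   /* prints 120960 */

    free(base);
    return 0;
}
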
A simpler example is something like:

struct obj {
  int arr[32];
  int other_member;
};

Suppose you have a malloc-ed pointer to this struct and use ptr->arr[32] to access other_member. That access is not well-defined, even though everything lies within the malloc-ed object.

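A small sketch of that distinction (hypothetical code, not taken from any real project):

#include <stdlib.h>

struct obj {
  int arr[32];
  int other_member;
};

int main(void)
{
  struct obj *ptr = malloc(sizeof *ptr);

  ptr->arr[31] = 1;        /* fine: last element of arr */
  ptr->other_member = 2;   /* fine: the member is accessed by name */
  /* ptr->arr[32] = 3; */  /* not defined: indexes past the end of arr, even though
                              the address falls inside the malloc-ed object */

  free(ptr);
  return 0;
}
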
One possible optimization the compiler can perform is to use some addressing mode which only works for that size of array. Say that ptr->arr[i] translates to some instruction which has a five-bit field to encode a scaled displacement value from 0 to 31. The compiler is free to ignore that the displacement [32] cannot fit into that instruction, and just truncate it to the lowest five bits, which are zero, effectively changing the meaning to ptr->arr[0].

Alternatively, the rules can enable useful diagnostic tools. The compiler may be able to warn you at compile time that there is an array overrun, and because it's undefined behavior, it can fail the translation, while remaining conforming. There can be tooling whereby the code is compiled in such a way that you get detailed array bound checking at run time (not just checking for overrun of the malloc-ed block). Accessing past the end of the array can be an accident, resulting in a hard-to-find bug, particularly if the access doesn't go past the allocation.

CodePudding user response:

In

ac_val = bar.ac_val[0][0] + 3780 * 16;

bar.ac_val[0][0] has type int16_t[16] and decays to a pointer to its first element, so adding anything outside the range [0, 16] to that pointer results in undefined behaviour (and the one-past-the-end pointer obtained by adding 16 must not be dereferenced).

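Spelled out in code (a sketch; rows, row and ok are my names):

#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    int16_t (*rows)[16] = malloc(4639 * sizeof *rows);

    int16_t *row = rows[66];   /* decays to int16_t *; its provenance is one 16-element row */
    int16_t *ok  = row + 16;   /* defined: one past the end, but must not be dereferenced */
    /* int16_t *bad = row + 17;   undefined: more than one past the end of the row,
                                  even though the address lies inside the malloc-ed block */

    (void) ok;
    free(rows);
    return 0;
}
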
The historical reason for making this undefined is the segmented memory model (as opposed to the modern flat/linear memory model), in which a pointer value is composed of a segment descriptor and a byte offset within the segment. In such a model, distinct arrays may reside in different segments. Segment descriptors are not measured in bytes, so subtracting them does not produce a distance in bytes. Taking the difference of pointers to arrays residing in different segments therefore ends up subtracting segment descriptors, and C is still compatible with such a memory model.

Your particular array is allocated using malloc. It cannot possibly span multiple memory segments. As long as your pointers, including expression temporaries, don't point outside this heap-allocated array, these pointers are valid and well-defined.

It is the array element type int16_t[16], and indexing outside its bounds, that causes the undefined behaviour. This array element type is essentially a red herring for a C compiler.

If you switch your array element type to plain int16_t and convert your 2d array indices into 1d, e.g. [row][column] to [row * n_columns + column], this problem ceases to exist (a sketch follows below).

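A minimal sketch of that flat layout (n_rows, n_columns and base66 are my names; error checking omitted):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n_rows = 4639, n_columns = 16;
    int16_t *flat = malloc(n_rows * n_columns * sizeof *flat);

    /* [row][column] becomes [row * n_columns + column]; every pointer derived
       from flat has the whole flat allocation as its provenance */
    int16_t *base66 = flat + 66 * n_columns;       /* plays the role of bar.ac_val[0][0] */
    int16_t *ac_val = base66 + 3780 * n_columns;   /* the offset from the question */

    printf("Result: %d\n", (int) ((char *) ac_val - (char *) base66));   /* prints 120960 */

    free(flat);
    return 0;
}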

You can also side-step the undefined behaviour arising from pointer arithmetic by doing the arithmetic on integers (a complete sketch follows after the list below):

uintptr_t ac_val = (uintptr_t) bar.ac_val[0][0] + 3780 * 16 * sizeof(int16_t);
printf("Result: %zu\n", (size_t) (ac_val - (uintptr_t) bar.ac_val[0][0]));

This relies on the facts that:

  • Converting a pointer to uintptr_t and back is well-defined.
  • Unsigned integer addition and subtraction are well-defined.
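
Putting the pieces together, a complete sketch of that integer-arithmetic workaround in the shape of the original program (variable names are mine; only the byte distance is computed and printed, as in the original):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int16_t (*base)[16] = malloc(4639 * sizeof *base);
    int16_t (*rows)[16] = base + 66;

    /* do the offset computation on unsigned integers instead of on an int16_t pointer */
    uintptr_t start  = (uintptr_t) rows[0];
    uintptr_t ac_val = start + 3780 * 16 * sizeof(int16_t);

    printf("Result: %zu\n", (size_t) (ac_val - start));   /* prints 120960 */

    free(base);
    return 0;
}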